Dec 12 18:55:28.935192 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 12 15:21:28 -00 2025
Dec 12 18:55:28.935216 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 12 18:55:28.935225 kernel: BIOS-provided physical RAM map:
Dec 12 18:55:28.935231 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
Dec 12 18:55:28.935237 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Dec 12 18:55:28.935243 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 12 18:55:28.935252 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Dec 12 18:55:28.935258 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Dec 12 18:55:28.935264 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Dec 12 18:55:28.935270 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Dec 12 18:55:28.935276 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 12 18:55:28.935282 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 12 18:55:28.935288 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Dec 12 18:55:28.935295 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Dec 12 18:55:28.935304 kernel: NX (Execute Disable) protection: active
Dec 12 18:55:28.935310 kernel: APIC: Static calls initialized
Dec 12 18:55:28.935317 kernel: SMBIOS 2.8 present.
Dec 12 18:55:28.935323 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified
Dec 12 18:55:28.935330 kernel: DMI: Memory slots populated: 1/1
Dec 12 18:55:28.935336 kernel: Hypervisor detected: KVM
Dec 12 18:55:28.935344 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Dec 12 18:55:28.935351 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 12 18:55:28.935357 kernel: kvm-clock: using sched offset of 7038874750 cycles
Dec 12 18:55:28.935364 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 12 18:55:28.935371 kernel: tsc: Detected 2000.000 MHz processor
Dec 12 18:55:28.935378 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 12 18:55:28.935384 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 12 18:55:28.935391 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
Dec 12 18:55:28.935398 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 12 18:55:28.935405 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 12 18:55:28.935413 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Dec 12 18:55:28.935420 kernel: Using GB pages for direct mapping
Dec 12 18:55:28.935427 kernel: ACPI: Early table checksum verification disabled
Dec 12 18:55:28.935433 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS )
Dec 12 18:55:28.935440 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:55:28.935446 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:55:28.935453 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:55:28.937385 kernel: ACPI: FACS 0x000000007FFE0000 000040
Dec 12 18:55:28.937398 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:55:28.937410 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:55:28.937420 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:55:28.937427 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:55:28.937433 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
Dec 12 18:55:28.937440 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
Dec 12 18:55:28.937449 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Dec 12 18:55:28.937456 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
Dec 12 18:55:28.937480 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
Dec 12 18:55:28.937487 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
Dec 12 18:55:28.937493 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
Dec 12 18:55:28.937500 kernel: No NUMA configuration found
Dec 12 18:55:28.937507 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Dec 12 18:55:28.937514 kernel: NODE_DATA(0) allocated [mem 0x17fff8dc0-0x17fffffff]
Dec 12 18:55:28.937520 kernel: Zone ranges:
Dec 12 18:55:28.937530 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 12 18:55:28.937537 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Dec 12 18:55:28.937544 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Dec 12 18:55:28.937550 kernel: Device empty
Dec 12 18:55:28.937557 kernel: Movable zone start for each node
Dec 12 18:55:28.937564 kernel: Early memory node ranges
Dec 12 18:55:28.937570 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 12 18:55:28.937577 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Dec 12 18:55:28.937584 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Dec 12 18:55:28.937590 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Dec 12 18:55:28.937599 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 12 18:55:28.937606 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 12 18:55:28.937612 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Dec 12 18:55:28.937619 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 12 18:55:28.937626 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 12 18:55:28.937632 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 12 18:55:28.937639 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 12 18:55:28.937646 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 12 18:55:28.937653 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 12 18:55:28.937662 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 12 18:55:28.937668 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 12 18:55:28.937675 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 12 18:55:28.937682 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 12 18:55:28.937688 kernel: TSC deadline timer available
Dec 12 18:55:28.937695 kernel: CPU topo: Max. logical packages: 1
Dec 12 18:55:28.937702 kernel: CPU topo: Max. logical dies: 1
Dec 12 18:55:28.937708 kernel: CPU topo: Max. dies per package: 1
Dec 12 18:55:28.937715 kernel: CPU topo: Max. threads per core: 1
Dec 12 18:55:28.937724 kernel: CPU topo: Num. cores per package: 2
Dec 12 18:55:28.937730 kernel: CPU topo: Num. threads per package: 2
Dec 12 18:55:28.937737 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Dec 12 18:55:28.937743 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 12 18:55:28.937750 kernel: kvm-guest: KVM setup pv remote TLB flush
Dec 12 18:55:28.937757 kernel: kvm-guest: setup PV sched yield
Dec 12 18:55:28.937764 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Dec 12 18:55:28.937771 kernel: Booting paravirtualized kernel on KVM
Dec 12 18:55:28.937778 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 12 18:55:28.937787 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Dec 12 18:55:28.937793 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Dec 12 18:55:28.937800 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Dec 12 18:55:28.937807 kernel: pcpu-alloc: [0] 0 1
Dec 12 18:55:28.937813 kernel: kvm-guest: PV spinlocks enabled
Dec 12 18:55:28.937820 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 12 18:55:28.937828 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 12 18:55:28.937835 kernel: random: crng init done
Dec 12 18:55:28.937843 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 12 18:55:28.937850 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 12 18:55:28.937857 kernel: Fallback order for Node 0: 0
Dec 12 18:55:28.937864 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443
Dec 12 18:55:28.937871 kernel: Policy zone: Normal
Dec 12 18:55:28.937877 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 12 18:55:28.937884 kernel: software IO TLB: area num 2.
Dec 12 18:55:28.937891 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 12 18:55:28.937897 kernel: ftrace: allocating 40103 entries in 157 pages
Dec 12 18:55:28.937906 kernel: ftrace: allocated 157 pages with 5 groups
Dec 12 18:55:28.937913 kernel: Dynamic Preempt: voluntary
Dec 12 18:55:28.937919 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 12 18:55:28.937927 kernel: rcu: RCU event tracing is enabled.
Dec 12 18:55:28.937934 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 12 18:55:28.937941 kernel: Trampoline variant of Tasks RCU enabled.
Dec 12 18:55:28.937947 kernel: Rude variant of Tasks RCU enabled.
Dec 12 18:55:28.937954 kernel: Tracing variant of Tasks RCU enabled.
Dec 12 18:55:28.937961 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 12 18:55:28.937968 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 12 18:55:28.937977 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 12 18:55:28.937991 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 12 18:55:28.938000 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 12 18:55:28.938007 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Dec 12 18:55:28.938014 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 12 18:55:28.938021 kernel: Console: colour VGA+ 80x25
Dec 12 18:55:28.938028 kernel: printk: legacy console [tty0] enabled
Dec 12 18:55:28.938035 kernel: printk: legacy console [ttyS0] enabled
Dec 12 18:55:28.938042 kernel: ACPI: Core revision 20240827
Dec 12 18:55:28.938051 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Dec 12 18:55:28.938058 kernel: APIC: Switch to symmetric I/O mode setup
Dec 12 18:55:28.938065 kernel: x2apic enabled
Dec 12 18:55:28.938072 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 12 18:55:28.938079 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Dec 12 18:55:28.938086 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Dec 12 18:55:28.938093 kernel: kvm-guest: setup PV IPIs
Dec 12 18:55:28.938103 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 12 18:55:28.938110 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns
Dec 12 18:55:28.938117 kernel: Calibrating delay loop (skipped) preset value.. 4000.00 BogoMIPS (lpj=2000000)
Dec 12 18:55:28.938124 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 12 18:55:28.938131 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 12 18:55:28.938138 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 12 18:55:28.938145 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 12 18:55:28.938152 kernel: Spectre V2 : Mitigation: Retpolines
Dec 12 18:55:28.938159 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec 12 18:55:28.938168 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Dec 12 18:55:28.938175 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 12 18:55:28.938182 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 12 18:55:28.938189 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec 12 18:55:28.938197 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec 12 18:55:28.938204 kernel: active return thunk: srso_alias_return_thunk
Dec 12 18:55:28.938211 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec 12 18:55:28.938218 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Dec 12 18:55:28.938227 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 12 18:55:28.938234 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 12 18:55:28.938241 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 12 18:55:28.938248 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 12 18:55:28.938255 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Dec 12 18:55:28.938262 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 12 18:55:28.938269 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
Dec 12 18:55:28.938276 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Dec 12 18:55:28.938283 kernel: Freeing SMP alternatives memory: 32K
Dec 12 18:55:28.938298 kernel: pid_max: default: 32768 minimum: 301
Dec 12 18:55:28.938310 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 12 18:55:28.938322 kernel: landlock: Up and running.
Dec 12 18:55:28.938332 kernel: SELinux: Initializing.
Dec 12 18:55:28.938339 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 12 18:55:28.938347 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 12 18:55:28.938354 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Dec 12 18:55:28.938361 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 12 18:55:28.938368 kernel: ... version: 0
Dec 12 18:55:28.938378 kernel: ... bit width: 48
Dec 12 18:55:28.938385 kernel: ... generic registers: 6
Dec 12 18:55:28.938392 kernel: ... value mask: 0000ffffffffffff
Dec 12 18:55:28.938399 kernel: ... max period: 00007fffffffffff
Dec 12 18:55:28.938405 kernel: ... fixed-purpose events: 0
Dec 12 18:55:28.938412 kernel: ... event mask: 000000000000003f
Dec 12 18:55:28.938419 kernel: signal: max sigframe size: 3376
Dec 12 18:55:28.938426 kernel: rcu: Hierarchical SRCU implementation.
Dec 12 18:55:28.938434 kernel: rcu: Max phase no-delay instances is 400.
Dec 12 18:55:28.938443 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 12 18:55:28.938450 kernel: smp: Bringing up secondary CPUs ...
Dec 12 18:55:28.938457 kernel: smpboot: x86: Booting SMP configuration:
Dec 12 18:55:28.938493 kernel: .... node #0, CPUs: #1
Dec 12 18:55:28.938500 kernel: smp: Brought up 1 node, 2 CPUs
Dec 12 18:55:28.938507 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS)
Dec 12 18:55:28.938515 kernel: Memory: 3953616K/4193772K available (14336K kernel code, 2444K rwdata, 26064K rodata, 46188K init, 2572K bss, 235480K reserved, 0K cma-reserved)
Dec 12 18:55:28.938522 kernel: devtmpfs: initialized
Dec 12 18:55:28.938529 kernel: x86/mm: Memory block size: 128MB
Dec 12 18:55:28.938539 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 12 18:55:28.938546 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 12 18:55:28.938553 kernel: pinctrl core: initialized pinctrl subsystem
Dec 12 18:55:28.938560 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 12 18:55:28.938567 kernel: audit: initializing netlink subsys (disabled)
Dec 12 18:55:28.938574 kernel: audit: type=2000 audit(1765565725.696:1): state=initialized audit_enabled=0 res=1
Dec 12 18:55:28.938581 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 12 18:55:28.938588 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 12 18:55:28.938595 kernel: cpuidle: using governor menu
Dec 12 18:55:28.938604 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 12 18:55:28.938611 kernel: dca service started, version 1.12.1
Dec 12 18:55:28.938618 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Dec 12 18:55:28.938625 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Dec 12 18:55:28.938632 kernel: PCI: Using configuration type 1 for base access
Dec 12 18:55:28.938639 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 12 18:55:28.938646 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 12 18:55:28.938653 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 12 18:55:28.938660 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 12 18:55:28.938669 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 12 18:55:28.938676 kernel: ACPI: Added _OSI(Module Device)
Dec 12 18:55:28.938683 kernel: ACPI: Added _OSI(Processor Device)
Dec 12 18:55:28.938690 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 12 18:55:28.938697 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 12 18:55:28.938704 kernel: ACPI: Interpreter enabled
Dec 12 18:55:28.938711 kernel: ACPI: PM: (supports S0 S3 S5)
Dec 12 18:55:28.938718 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 12 18:55:28.938725 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 12 18:55:28.938734 kernel: PCI: Using E820 reservations for host bridge windows
Dec 12 18:55:28.938741 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 12 18:55:28.938748 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 12 18:55:28.938933 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 12 18:55:28.939062 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Dec 12 18:55:28.939186 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Dec 12 18:55:28.939195 kernel: PCI host bridge to bus 0000:00
Dec 12 18:55:28.939327 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 12 18:55:28.939505 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 12 18:55:28.939625 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 12 18:55:28.939735 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Dec 12 18:55:28.939845 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 12 18:55:28.939954 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Dec 12 18:55:28.940063 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 12 18:55:28.940207 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Dec 12 18:55:28.940343 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Dec 12 18:55:28.940503 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Dec 12 18:55:28.940635 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Dec 12 18:55:28.940754 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Dec 12 18:55:28.940874 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 12 18:55:28.941008 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
Dec 12 18:55:28.941135 kernel: pci 0000:00:02.0: BAR 0 [io 0xc000-0xc03f]
Dec 12 18:55:28.941255 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Dec 12 18:55:28.941373 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Dec 12 18:55:28.943206 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec 12 18:55:28.943342 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f]
Dec 12 18:55:28.943496 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Dec 12 18:55:28.943629 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Dec 12 18:55:28.943751 kernel: pci 0000:00:03.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Dec 12 18:55:28.943928 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Dec 12 18:55:28.944057 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 12 18:55:28.944187 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Dec 12 18:55:28.944307 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0c0-0xc0df]
Dec 12 18:55:28.944426 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd3000-0xfebd3fff]
Dec 12 18:55:28.944582 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Dec 12 18:55:28.944705 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Dec 12 18:55:28.944715 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 12 18:55:28.944722 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 12 18:55:28.944729 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 12 18:55:28.944736 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 12 18:55:28.944743 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 12 18:55:28.944750 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 12 18:55:28.944760 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 12 18:55:28.944768 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 12 18:55:28.944775 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 12 18:55:28.944782 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 12 18:55:28.944789 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 12 18:55:28.944795 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 12 18:55:28.944803 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 12 18:55:28.944810 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 12 18:55:28.944816 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 12 18:55:28.944826 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 12 18:55:28.944833 kernel: iommu: Default domain type: Translated
Dec 12 18:55:28.944840 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 12 18:55:28.944847 kernel: PCI: Using ACPI for IRQ routing
Dec 12 18:55:28.944854 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 12 18:55:28.944861 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Dec 12 18:55:28.944868 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Dec 12 18:55:28.944986 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 12 18:55:28.945108 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 12 18:55:28.945227 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 12 18:55:28.945236 kernel: vgaarb: loaded
Dec 12 18:55:28.945243 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Dec 12 18:55:28.945250 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Dec 12 18:55:28.945257 kernel: clocksource: Switched to clocksource kvm-clock
Dec 12 18:55:28.945264 kernel: VFS: Disk quotas dquot_6.6.0
Dec 12 18:55:28.945271 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 12 18:55:28.945278 kernel: pnp: PnP ACPI init
Dec 12 18:55:28.945413 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Dec 12 18:55:28.945424 kernel: pnp: PnP ACPI: found 5 devices
Dec 12 18:55:28.945431 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 12 18:55:28.945438 kernel: NET: Registered PF_INET protocol family
Dec 12 18:55:28.945445 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 12 18:55:28.945452 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 12 18:55:28.945493 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 12 18:55:28.945501 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 12 18:55:28.945512 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 12 18:55:28.945519 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 12 18:55:28.945526 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 12 18:55:28.945533 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 12 18:55:28.945541 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 12 18:55:28.945548 kernel: NET: Registered PF_XDP protocol family
Dec 12 18:55:28.945667 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 12 18:55:28.945778 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 12 18:55:28.945890 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 12 18:55:28.946005 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Dec 12 18:55:28.946115 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Dec 12 18:55:28.946225 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Dec 12 18:55:28.946234 kernel: PCI: CLS 0 bytes, default 64
Dec 12 18:55:28.946241 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec 12 18:55:28.946248 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Dec 12 18:55:28.946260 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns
Dec 12 18:55:28.946271 kernel: Initialise system trusted keyrings
Dec 12 18:55:28.946288 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 12 18:55:28.946300 kernel: Key type asymmetric registered
Dec 12 18:55:28.946311 kernel: Asymmetric key parser 'x509' registered
Dec 12 18:55:28.946318 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Dec 12 18:55:28.946325 kernel: io scheduler mq-deadline registered
Dec 12 18:55:28.946332 kernel: io scheduler kyber registered
Dec 12 18:55:28.946339 kernel: io scheduler bfq registered
Dec 12 18:55:28.946346 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 12 18:55:28.946354 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Dec 12 18:55:28.946364 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Dec 12 18:55:28.946371 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 12 18:55:28.946378 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 12 18:55:28.946385 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 12 18:55:28.946392 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 12 18:55:28.946399 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 12 18:55:28.946406 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 12 18:55:28.946563 kernel: rtc_cmos 00:03: RTC can wake from S4
Dec 12 18:55:28.946683 kernel: rtc_cmos 00:03: registered as rtc0
Dec 12 18:55:28.946802 kernel: rtc_cmos 00:03: setting system clock to 2025-12-12T18:55:28 UTC (1765565728)
Dec 12 18:55:28.946916 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Dec 12 18:55:28.946925 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec 12 18:55:28.946932 kernel: NET: Registered PF_INET6 protocol family
Dec 12 18:55:28.946939 kernel: Segment Routing with IPv6
Dec 12 18:55:28.946947 kernel: In-situ OAM (IOAM) with IPv6
Dec 12 18:55:28.946954 kernel: NET: Registered PF_PACKET protocol family
Dec 12 18:55:28.946961 kernel: Key type dns_resolver registered
Dec 12 18:55:28.946971 kernel: IPI shorthand broadcast: enabled
Dec 12 18:55:28.946978 kernel: sched_clock: Marking stable (2809004350, 352104540)->(3254763200, -93654310)
Dec 12 18:55:28.946985 kernel: registered taskstats version 1
Dec 12 18:55:28.946992 kernel: Loading compiled-in X.509 certificates
Dec 12 18:55:28.946999 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 0d0c78e6590cb40d27f1cef749ef9f2f3425f38d'
Dec 12 18:55:28.947006 kernel: Demotion targets for Node 0: null
Dec 12 18:55:28.947013 kernel: Key type .fscrypt registered
Dec 12 18:55:28.947020 kernel: Key type fscrypt-provisioning registered
Dec 12 18:55:28.947027 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 12 18:55:28.947037 kernel: ima: Allocated hash algorithm: sha1
Dec 12 18:55:28.947044 kernel: ima: No architecture policies found
Dec 12 18:55:28.947051 kernel: clk: Disabling unused clocks
Dec 12 18:55:28.947058 kernel: Warning: unable to open an initial console.
Dec 12 18:55:28.947065 kernel: Freeing unused kernel image (initmem) memory: 46188K
Dec 12 18:55:28.947072 kernel: Write protecting the kernel read-only data: 40960k
Dec 12 18:55:28.947079 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Dec 12 18:55:28.947086 kernel: Run /init as init process
Dec 12 18:55:28.947093 kernel: with arguments:
Dec 12 18:55:28.947102 kernel: /init
Dec 12 18:55:28.947109 kernel: with environment:
Dec 12 18:55:28.947131 kernel: HOME=/
Dec 12 18:55:28.947141 kernel: TERM=linux
Dec 12 18:55:28.947149 systemd[1]: Successfully made /usr/ read-only.
Dec 12 18:55:28.947159 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 12 18:55:28.947167 systemd[1]: Detected virtualization kvm.
Dec 12 18:55:28.947177 systemd[1]: Detected architecture x86-64.
Dec 12 18:55:28.947185 systemd[1]: Running in initrd.
Dec 12 18:55:28.947192 systemd[1]: No hostname configured, using default hostname.
Dec 12 18:55:28.947200 systemd[1]: Hostname set to <localhost>.
Dec 12 18:55:28.947207 systemd[1]: Initializing machine ID from random generator.
Dec 12 18:55:28.947215 systemd[1]: Queued start job for default target initrd.target.
Dec 12 18:55:28.947223 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 12 18:55:28.947230 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 12 18:55:28.947241 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 12 18:55:28.947249 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 12 18:55:28.947256 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 12 18:55:28.947265 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 12 18:55:28.947273 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 12 18:55:28.947281 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 12 18:55:28.947289 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 12 18:55:28.947299 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 12 18:55:28.947307 systemd[1]: Reached target paths.target - Path Units.
Dec 12 18:55:28.947314 systemd[1]: Reached target slices.target - Slice Units.
Dec 12 18:55:28.947322 systemd[1]: Reached target swap.target - Swaps.
Dec 12 18:55:28.947329 systemd[1]: Reached target timers.target - Timer Units.
Dec 12 18:55:28.947337 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 12 18:55:28.947345 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 12 18:55:28.947352 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 12 18:55:28.947360 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Dec 12 18:55:28.947370 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 12 18:55:28.947378 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 12 18:55:28.947390 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 12 18:55:28.947397 systemd[1]: Reached target sockets.target - Socket Units.
Dec 12 18:55:28.947405 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 12 18:55:28.947415 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 12 18:55:28.947422 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 12 18:55:28.947430 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Dec 12 18:55:28.947438 systemd[1]: Starting systemd-fsck-usr.service...
Dec 12 18:55:28.947446 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 12 18:55:28.947474 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 12 18:55:28.947482 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 18:55:28.947492 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 12 18:55:28.947522 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 12 18:55:28.947557 systemd-journald[187]: Collecting audit messages is disabled.
Dec 12 18:55:28.947570 systemd[1]: Finished systemd-fsck-usr.service.
Dec 12 18:55:28.947583 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 12 18:55:28.947596 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 12 18:55:28.947614 systemd-journald[187]: Journal started
Dec 12 18:55:28.947614 systemd-journald[187]: Runtime Journal (/run/log/journal/6c602423ea7e487eaffd99adb3f474c8) is 8M, max 78.2M, 70.2M free.
Dec 12 18:55:28.914489 systemd-modules-load[188]: Inserted module 'overlay'
Dec 12 18:55:28.980041 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 12 18:55:28.980079 kernel: Bridge firewalling registered
Dec 12 18:55:28.973403 systemd-modules-load[188]: Inserted module 'br_netfilter'
Dec 12 18:55:29.064952 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 12 18:55:29.066004 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 18:55:29.067489 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 12 18:55:29.071670 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 12 18:55:29.075580 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 12 18:55:29.079572 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 12 18:55:29.082994 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 12 18:55:29.097715 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 12 18:55:29.099721 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 12 18:55:29.106146 systemd-tmpfiles[204]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Dec 12 18:55:29.110870 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 12 18:55:29.113097 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 12 18:55:29.116219 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 12 18:55:29.119601 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 12 18:55:29.138592 dracut-cmdline[224]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 12 18:55:29.162874 systemd-resolved[225]: Positive Trust Anchors:
Dec 12 18:55:29.162888 systemd-resolved[225]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 12 18:55:29.162914 systemd-resolved[225]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 12 18:55:29.167198 systemd-resolved[225]: Defaulting to hostname 'linux'.
Dec 12 18:55:29.170744 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 12 18:55:29.171932 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 12 18:55:29.241505 kernel: SCSI subsystem initialized
Dec 12 18:55:29.250569 kernel: Loading iSCSI transport class v2.0-870.
Dec 12 18:55:29.261505 kernel: iscsi: registered transport (tcp)
Dec 12 18:55:29.283525 kernel: iscsi: registered transport (qla4xxx)
Dec 12 18:55:29.283603 kernel: QLogic iSCSI HBA Driver
Dec 12 18:55:29.309575 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 12 18:55:29.324162 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 12 18:55:29.327174 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 12 18:55:29.387257 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 12 18:55:29.391015 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 12 18:55:29.447494 kernel: raid6: avx2x4 gen() 24922 MB/s
Dec 12 18:55:29.465495 kernel: raid6: avx2x2 gen() 23003 MB/s
Dec 12 18:55:29.483738 kernel: raid6: avx2x1 gen() 14409 MB/s
Dec 12 18:55:29.483758 kernel: raid6: using algorithm avx2x4 gen() 24922 MB/s
Dec 12 18:55:29.504822 kernel: raid6: .... xor() 3325 MB/s, rmw enabled
Dec 12 18:55:29.504848 kernel: raid6: using avx2x2 recovery algorithm
Dec 12 18:55:29.528495 kernel: xor: automatically using best checksumming function avx
Dec 12 18:55:29.675516 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 12 18:55:29.684974 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 12 18:55:29.688232 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 12 18:55:29.713539 systemd-udevd[435]: Using default interface naming scheme 'v255'.
Dec 12 18:55:29.719304 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 12 18:55:29.724074 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 12 18:55:29.747566 dracut-pre-trigger[449]: rd.md=0: removing MD RAID activation
Dec 12 18:55:29.783549 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 12 18:55:29.786634 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 12 18:55:29.860117 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 12 18:55:29.863582 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 12 18:55:29.936689 kernel: cryptd: max_cpu_qlen set to 1000
Dec 12 18:55:29.945482 kernel: AES CTR mode by8 optimization enabled
Dec 12 18:55:29.985532 kernel: libata version 3.00 loaded.
Dec 12 18:55:30.118848 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 12 18:55:30.119025 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 18:55:30.135187 kernel: ahci 0000:00:1f.2: version 3.0
Dec 12 18:55:30.135416 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Dec 12 18:55:30.135430 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Dec 12 18:55:30.135604 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Dec 12 18:55:30.135746 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Dec 12 18:55:30.135887 kernel: scsi host0: ahci
Dec 12 18:55:30.121714 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 18:55:30.136743 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 18:55:30.138152 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Dec 12 18:55:30.149483 kernel: virtio_scsi virtio0: 2/0/0 default/read/poll queues
Dec 12 18:55:30.186491 kernel: scsi host2: Virtio SCSI HBA
Dec 12 18:55:30.190412 kernel: scsi 2:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Dec 12 18:55:30.192496 kernel: scsi host1: ahci
Dec 12 18:55:30.197529 kernel: scsi host3: ahci
Dec 12 18:55:30.201528 kernel: scsi host4: ahci
Dec 12 18:55:30.204516 kernel: scsi host5: ahci
Dec 12 18:55:30.204705 kernel: scsi host6: ahci
Dec 12 18:55:30.210749 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 24 lpm-pol 1
Dec 12 18:55:30.210781 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 24 lpm-pol 1
Dec 12 18:55:30.214696 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 24 lpm-pol 1
Dec 12 18:55:30.214727 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 24 lpm-pol 1
Dec 12 18:55:30.214749 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 24 lpm-pol 1
Dec 12 18:55:30.218519 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Dec 12 18:55:30.218551 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 24 lpm-pol 1
Dec 12 18:55:30.235535 kernel: sd 2:0:0:0: Power-on or device reset occurred
Dec 12 18:55:30.235812 kernel: sd 2:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB)
Dec 12 18:55:30.235971 kernel: sd 2:0:0:0: [sda] Write Protect is off
Dec 12 18:55:30.236121 kernel: sd 2:0:0:0: [sda] Mode Sense: 63 00 00 08
Dec 12 18:55:30.236270 kernel: sd 2:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Dec 12 18:55:30.241483 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 12 18:55:30.241509 kernel: GPT:9289727 != 167739391
Dec 12 18:55:30.241521 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 12 18:55:30.241531 kernel: GPT:9289727 != 167739391
Dec 12 18:55:30.241540 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 12 18:55:30.241550 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 12 18:55:30.242481 kernel: sd 2:0:0:0: [sda] Attached SCSI disk
Dec 12 18:55:30.364554 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 18:55:30.537501 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Dec 12 18:55:30.546470 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Dec 12 18:55:30.546497 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Dec 12 18:55:30.550128 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Dec 12 18:55:30.550482 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Dec 12 18:55:30.555953 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Dec 12 18:55:30.608707 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Dec 12 18:55:30.624795 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Dec 12 18:55:30.626866 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 12 18:55:30.635483 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Dec 12 18:55:30.636292 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Dec 12 18:55:30.647855 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Dec 12 18:55:30.649758 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 12 18:55:30.650769 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 12 18:55:30.652693 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 12 18:55:30.656872 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 12 18:55:30.660655 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 12 18:55:30.679293 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 12 18:55:30.680331 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 12 18:55:30.680787 disk-uuid[621]: Primary Header is updated.
Dec 12 18:55:30.680787 disk-uuid[621]: Secondary Entries is updated.
Dec 12 18:55:30.680787 disk-uuid[621]: Secondary Header is updated.
Dec 12 18:55:31.702732 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 12 18:55:31.703276 disk-uuid[628]: The operation has completed successfully.
Dec 12 18:55:31.758964 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 12 18:55:31.759090 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 12 18:55:31.791250 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 12 18:55:31.802544 sh[643]: Success
Dec 12 18:55:31.822711 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 12 18:55:31.822785 kernel: device-mapper: uevent: version 1.0.3
Dec 12 18:55:31.822800 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Dec 12 18:55:31.836644 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Dec 12 18:55:31.877494 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 12 18:55:31.882535 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 12 18:55:31.891642 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 12 18:55:31.903502 kernel: BTRFS: device fsid a6ae7f96-a076-4d3c-81ed-46dd341492f8 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (655)
Dec 12 18:55:31.907828 kernel: BTRFS info (device dm-0): first mount of filesystem a6ae7f96-a076-4d3c-81ed-46dd341492f8
Dec 12 18:55:31.907884 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 12 18:55:31.919927 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Dec 12 18:55:31.919956 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 12 18:55:31.919967 kernel: BTRFS info (device dm-0): enabling free space tree
Dec 12 18:55:31.924038 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 12 18:55:31.925299 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Dec 12 18:55:31.926381 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 12 18:55:31.927195 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 12 18:55:31.931537 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 12 18:55:31.955510 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (678)
Dec 12 18:55:31.960081 kernel: BTRFS info (device sda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 12 18:55:31.960121 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 12 18:55:31.970628 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 12 18:55:31.970680 kernel: BTRFS info (device sda6): turning on async discard
Dec 12 18:55:31.970693 kernel: BTRFS info (device sda6): enabling free space tree
Dec 12 18:55:31.979606 kernel: BTRFS info (device sda6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 12 18:55:31.981445 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 12 18:55:31.984606 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 12 18:55:32.092246 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 12 18:55:32.098380 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 12 18:55:32.117836 ignition[743]: Ignition 2.22.0
Dec 12 18:55:32.118723 ignition[743]: Stage: fetch-offline
Dec 12 18:55:32.118763 ignition[743]: no configs at "/usr/lib/ignition/base.d"
Dec 12 18:55:32.118774 ignition[743]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Dec 12 18:55:32.118860 ignition[743]: parsed url from cmdline: ""
Dec 12 18:55:32.118865 ignition[743]: no config URL provided
Dec 12 18:55:32.118870 ignition[743]: reading system config file "/usr/lib/ignition/user.ign"
Dec 12 18:55:32.124322 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 12 18:55:32.118878 ignition[743]: no config at "/usr/lib/ignition/user.ign"
Dec 12 18:55:32.118884 ignition[743]: failed to fetch config: resource requires networking
Dec 12 18:55:32.119024 ignition[743]: Ignition finished successfully
Dec 12 18:55:32.138722 systemd-networkd[828]: lo: Link UP
Dec 12 18:55:32.138736 systemd-networkd[828]: lo: Gained carrier
Dec 12 18:55:32.140387 systemd-networkd[828]: Enumeration completed
Dec 12 18:55:32.140515 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 12 18:55:32.141333 systemd-networkd[828]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 12 18:55:32.141339 systemd-networkd[828]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 12 18:55:32.142853 systemd[1]: Reached target network.target - Network.
Dec 12 18:55:32.143322 systemd-networkd[828]: eth0: Link UP
Dec 12 18:55:32.143529 systemd-networkd[828]: eth0: Gained carrier
Dec 12 18:55:32.143539 systemd-networkd[828]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 12 18:55:32.147589 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Dec 12 18:55:32.178532 ignition[832]: Ignition 2.22.0
Dec 12 18:55:32.179486 ignition[832]: Stage: fetch
Dec 12 18:55:32.179629 ignition[832]: no configs at "/usr/lib/ignition/base.d"
Dec 12 18:55:32.179641 ignition[832]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Dec 12 18:55:32.179736 ignition[832]: parsed url from cmdline: ""
Dec 12 18:55:32.179740 ignition[832]: no config URL provided
Dec 12 18:55:32.179745 ignition[832]: reading system config file "/usr/lib/ignition/user.ign"
Dec 12 18:55:32.179755 ignition[832]: no config at "/usr/lib/ignition/user.ign"
Dec 12 18:55:32.179789 ignition[832]: PUT http://169.254.169.254/v1/token: attempt #1
Dec 12 18:55:32.180007 ignition[832]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Dec 12 18:55:32.380252 ignition[832]: PUT http://169.254.169.254/v1/token: attempt #2
Dec 12 18:55:32.380512 ignition[832]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Dec 12 18:55:32.780869 ignition[832]: PUT http://169.254.169.254/v1/token: attempt #3
Dec 12 18:55:32.781190 ignition[832]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Dec 12 18:55:32.887528 systemd-networkd[828]: eth0: DHCPv4 address 172.237.134.203/24, gateway 172.237.134.1 acquired from 23.213.15.219
Dec 12 18:55:33.581960 ignition[832]: PUT http://169.254.169.254/v1/token: attempt #4
Dec 12 18:55:33.600644 systemd-networkd[828]: eth0: Gained IPv6LL
Dec 12 18:55:33.686754 ignition[832]: PUT result: OK
Dec 12 18:55:33.686822 ignition[832]: GET http://169.254.169.254/v1/user-data: attempt #1
Dec 12 18:55:33.795363 ignition[832]: GET result: OK
Dec 12 18:55:33.795509 ignition[832]: parsing config with SHA512: 874308c5c8329e9c1be7e55f10b7514a73ac09bdb86c768b20f0e98baaf71ad22ea977f4ee0bd4bd64fc71d412a4972e20996d0f883a37e942592d52ff431b8f
Dec 12 18:55:33.800196 unknown[832]: fetched base config from "system"
Dec 12 18:55:33.806046 unknown[832]: fetched base config from "system"
Dec 12 18:55:33.806302 ignition[832]: fetch: fetch complete
Dec 12 18:55:33.806053 unknown[832]: fetched user config from "akamai"
Dec 12 18:55:33.806308 ignition[832]: fetch: fetch passed
Dec 12 18:55:33.809129 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 12 18:55:33.806354 ignition[832]: Ignition finished successfully
Dec 12 18:55:33.824941 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 12 18:55:33.856620 ignition[840]: Ignition 2.22.0
Dec 12 18:55:33.856634 ignition[840]: Stage: kargs
Dec 12 18:55:33.856758 ignition[840]: no configs at "/usr/lib/ignition/base.d"
Dec 12 18:55:33.856768 ignition[840]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Dec 12 18:55:33.857441 ignition[840]: kargs: kargs passed
Dec 12 18:55:33.859737 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 12 18:55:33.857499 ignition[840]: Ignition finished successfully
Dec 12 18:55:33.862604 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 12 18:55:33.892126 ignition[847]: Ignition 2.22.0
Dec 12 18:55:33.892142 ignition[847]: Stage: disks
Dec 12 18:55:33.892253 ignition[847]: no configs at "/usr/lib/ignition/base.d"
Dec 12 18:55:33.892263 ignition[847]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Dec 12 18:55:33.893018 ignition[847]: disks: disks passed
Dec 12 18:55:33.894992 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 12 18:55:33.893062 ignition[847]: Ignition finished successfully
Dec 12 18:55:33.896652 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 12 18:55:33.897705 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 12 18:55:33.899128 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 12 18:55:33.900522 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 12 18:55:33.902093 systemd[1]: Reached target basic.target - Basic System.
Dec 12 18:55:33.905565 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 12 18:55:33.925718 systemd-fsck[856]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Dec 12 18:55:33.931650 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 12 18:55:33.934793 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 12 18:55:34.048488 kernel: EXT4-fs (sda9): mounted filesystem e48ca59c-1206-4abd-b121-5e9b35e49852 r/w with ordered data mode. Quota mode: none.
Dec 12 18:55:34.049042 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 12 18:55:34.050308 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 12 18:55:34.052518 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 12 18:55:34.055531 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 12 18:55:34.057375 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 12 18:55:34.058971 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 12 18:55:34.059960 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 12 18:55:34.069097 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 12 18:55:34.070750 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 12 18:55:34.082483 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (864)
Dec 12 18:55:34.082513 kernel: BTRFS info (device sda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 12 18:55:34.086715 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 12 18:55:34.091921 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 12 18:55:34.091950 kernel: BTRFS info (device sda6): turning on async discard
Dec 12 18:55:34.096249 kernel: BTRFS info (device sda6): enabling free space tree
Dec 12 18:55:34.098897 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 12 18:55:34.129540 initrd-setup-root[888]: cut: /sysroot/etc/passwd: No such file or directory Dec 12 18:55:34.134709 initrd-setup-root[895]: cut: /sysroot/etc/group: No such file or directory Dec 12 18:55:34.139841 initrd-setup-root[902]: cut: /sysroot/etc/shadow: No such file or directory Dec 12 18:55:34.143892 initrd-setup-root[909]: cut: /sysroot/etc/gshadow: No such file or directory Dec 12 18:55:34.238218 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 12 18:55:34.240577 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 12 18:55:34.242846 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 12 18:55:34.261880 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 12 18:55:34.266512 kernel: BTRFS info (device sda6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 12 18:55:34.279687 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 12 18:55:34.298315 ignition[978]: INFO : Ignition 2.22.0 Dec 12 18:55:34.298315 ignition[978]: INFO : Stage: mount Dec 12 18:55:34.300204 ignition[978]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 18:55:34.300204 ignition[978]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Dec 12 18:55:34.300204 ignition[978]: INFO : mount: mount passed Dec 12 18:55:34.300204 ignition[978]: INFO : Ignition finished successfully Dec 12 18:55:34.301362 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 12 18:55:34.303741 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 12 18:55:35.050753 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 12 18:55:35.074487 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (988) Dec 12 18:55:35.078902 kernel: BTRFS info (device sda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 12 18:55:35.078930 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 12 18:55:35.088271 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 12 18:55:35.088300 kernel: BTRFS info (device sda6): turning on async discard Dec 12 18:55:35.088321 kernel: BTRFS info (device sda6): enabling free space tree Dec 12 18:55:35.092618 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
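The four "cut: ... No such file or directory" lines above are the root-filesystem setup probing account databases that do not yet exist on a first boot; that is expected, not a failure. A rough Python rendering of what that cut-based probe amounts to, assuming the setup only wants the name field of any pre-existing entries:

    from pathlib import Path

    SYSROOT = Path("/sysroot")  # mount point as logged above

    def existing_names(db):
        """First ':'-separated field of each entry, or an empty list when
        the database does not exist yet (the first-boot case logged above)."""
        path = SYSROOT / "etc" / db
        try:
            lines = path.read_text().splitlines()
        except FileNotFoundError:
            print(f"cut: {path}: No such file or directory")  # what the log shows
            return []
        return [line.split(":", 1)[0] for line in lines if line]

    for db in ("passwd", "group", "shadow", "gshadow"):
        existing_names(db)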
Dec 12 18:55:35.122713 ignition[1004]: INFO : Ignition 2.22.0 Dec 12 18:55:35.122713 ignition[1004]: INFO : Stage: files Dec 12 18:55:35.124684 ignition[1004]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 18:55:35.124684 ignition[1004]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Dec 12 18:55:35.124684 ignition[1004]: DEBUG : files: compiled without relabeling support, skipping Dec 12 18:55:35.127927 ignition[1004]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 12 18:55:35.127927 ignition[1004]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 12 18:55:35.130356 ignition[1004]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 12 18:55:35.131508 ignition[1004]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 12 18:55:35.132931 unknown[1004]: wrote ssh authorized keys file for user: core Dec 12 18:55:35.133955 ignition[1004]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 12 18:55:35.134998 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Dec 12 18:55:35.134998 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Dec 12 18:55:35.259685 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 12 18:55:35.495186 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Dec 12 18:55:35.495186 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 12 18:55:35.497964 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 12 18:55:35.497964 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 12 18:55:35.497964 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 12 18:55:35.497964 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 12 18:55:35.497964 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 12 18:55:35.497964 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 12 18:55:35.497964 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 12 18:55:35.497964 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 12 18:55:35.497964 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 12 18:55:35.497964 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Dec 12 18:55:35.531362 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Dec 12 18:55:35.531362 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Dec 12 18:55:35.531362 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Dec 12 18:55:36.019436 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 12 18:55:36.297175 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Dec 12 18:55:36.297175 ignition[1004]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 12 18:55:36.300086 ignition[1004]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 12 18:55:36.300086 ignition[1004]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 12 18:55:36.300086 ignition[1004]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 12 18:55:36.300086 ignition[1004]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Dec 12 18:55:36.300086 ignition[1004]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Dec 12 18:55:36.300086 ignition[1004]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Dec 12 18:55:36.300086 ignition[1004]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Dec 12 18:55:36.300086 ignition[1004]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Dec 12 18:55:36.300086 ignition[1004]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Dec 12 18:55:36.300086 ignition[1004]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 12 18:55:36.316655 ignition[1004]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 12 18:55:36.316655 ignition[1004]: INFO : files: files passed Dec 12 18:55:36.316655 ignition[1004]: INFO : Ignition finished successfully Dec 12 18:55:36.305304 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 12 18:55:36.310641 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 12 18:55:36.316632 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 12 18:55:36.325942 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 12 18:55:36.326074 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
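The fetch stage earlier logged the config's SHA512 digest, and the files stage above then downloads payloads such as the helm tarball and the kubernetes sysext image; Ignition file entries can carry verification hashes for exactly this purpose. A self-contained download-and-verify sketch in that spirit (not Ignition's actual implementation):

    import hashlib
    import urllib.request

    def fetch_verified(url, expected_sha512_hex):
        """Download url and check its SHA512 digest, failing loudly on mismatch."""
        with urllib.request.urlopen(url, timeout=30) as resp:
            data = resp.read()
        digest = hashlib.sha512(data).hexdigest()
        if digest != expected_sha512_hex:
            raise ValueError(f"sha512 mismatch for {url}: got {digest}")
        return data

    # e.g., for the tarball fetched above (expected digest left as a placeholder):
    # fetch_verified("https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz", "<expected hex>")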
Dec 12 18:55:36.335384 initrd-setup-root-after-ignition[1035]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 12 18:55:36.336996 initrd-setup-root-after-ignition[1035]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 12 18:55:36.338419 initrd-setup-root-after-ignition[1039]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 12 18:55:36.340746 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 12 18:55:36.343914 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 12 18:55:36.345505 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 12 18:55:36.414916 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 12 18:55:36.415058 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 12 18:55:36.416867 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 12 18:55:36.418289 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 12 18:55:36.419954 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 12 18:55:36.420808 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 12 18:55:36.460087 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 12 18:55:36.462644 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 12 18:55:36.482749 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 12 18:55:36.484544 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 18:55:36.486314 systemd[1]: Stopped target timers.target - Timer Units. Dec 12 18:55:36.487881 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 12 18:55:36.487989 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 12 18:55:36.490334 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 12 18:55:36.491398 systemd[1]: Stopped target basic.target - Basic System. Dec 12 18:55:36.492844 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 12 18:55:36.494267 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 12 18:55:36.496040 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 12 18:55:36.497687 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 12 18:55:36.499334 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 12 18:55:36.500995 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 12 18:55:36.502708 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 12 18:55:36.504457 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 12 18:55:36.506070 systemd[1]: Stopped target swap.target - Swaps. Dec 12 18:55:36.507541 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 12 18:55:36.507698 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 12 18:55:36.509424 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 12 18:55:36.510547 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 18:55:36.512130 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Dec 12 18:55:36.512942 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 12 18:55:36.513748 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 12 18:55:36.513844 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 12 18:55:36.515915 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 12 18:55:36.516069 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 12 18:55:36.517002 systemd[1]: ignition-files.service: Deactivated successfully. Dec 12 18:55:36.517097 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 12 18:55:36.520552 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 12 18:55:36.521899 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 12 18:55:36.523578 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 18:55:36.525647 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 12 18:55:36.526804 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 12 18:55:36.527414 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 18:55:36.532577 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 12 18:55:36.532677 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 12 18:55:36.542016 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 12 18:55:36.542127 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 12 18:55:36.559385 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 12 18:55:36.577591 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 12 18:55:36.577704 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 12 18:55:36.581614 ignition[1059]: INFO : Ignition 2.22.0 Dec 12 18:55:36.581614 ignition[1059]: INFO : Stage: umount Dec 12 18:55:36.581614 ignition[1059]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 18:55:36.581614 ignition[1059]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Dec 12 18:55:36.586125 ignition[1059]: INFO : umount: umount passed Dec 12 18:55:36.586125 ignition[1059]: INFO : Ignition finished successfully Dec 12 18:55:36.585848 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 12 18:55:36.586006 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 12 18:55:36.587436 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 12 18:55:36.587742 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 12 18:55:36.589168 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 12 18:55:36.589221 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 12 18:55:36.590636 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 12 18:55:36.590683 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 12 18:55:36.592088 systemd[1]: Stopped target network.target - Network. Dec 12 18:55:36.593554 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 12 18:55:36.593609 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 12 18:55:36.595051 systemd[1]: Stopped target paths.target - Path Units. Dec 12 18:55:36.596447 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Dec 12 18:55:36.602537 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 12 18:55:36.603632 systemd[1]: Stopped target slices.target - Slice Units. Dec 12 18:55:36.605402 systemd[1]: Stopped target sockets.target - Socket Units. Dec 12 18:55:36.606891 systemd[1]: iscsid.socket: Deactivated successfully. Dec 12 18:55:36.606938 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 12 18:55:36.608321 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 12 18:55:36.608366 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 12 18:55:36.609811 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 12 18:55:36.609868 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 12 18:55:36.611256 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 12 18:55:36.611303 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 12 18:55:36.612732 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 12 18:55:36.612788 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 12 18:55:36.614392 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 12 18:55:36.615892 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 12 18:55:36.619834 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 12 18:55:36.619967 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 12 18:55:36.623015 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Dec 12 18:55:36.623257 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 12 18:55:36.623371 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 12 18:55:36.628272 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Dec 12 18:55:36.629086 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 12 18:55:36.630366 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 12 18:55:36.630413 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 12 18:55:36.632852 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 12 18:55:36.635874 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 12 18:55:36.635940 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 12 18:55:36.638423 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 12 18:55:36.638494 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 12 18:55:36.640206 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 12 18:55:36.640261 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 12 18:55:36.642245 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 12 18:55:36.642320 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 18:55:36.643755 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 18:55:36.646031 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 12 18:55:36.646095 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Dec 12 18:55:36.657690 systemd[1]: network-cleanup.service: Deactivated successfully. 
Dec 12 18:55:36.657824 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 12 18:55:36.667959 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 12 18:55:36.668205 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 18:55:36.670037 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 12 18:55:36.670118 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 12 18:55:36.671333 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 12 18:55:36.671373 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 18:55:36.673030 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 12 18:55:36.673082 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 12 18:55:36.675300 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 12 18:55:36.675350 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 12 18:55:36.676934 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 12 18:55:36.676990 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 12 18:55:36.680579 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 12 18:55:36.684398 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 12 18:55:36.684456 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 18:55:36.686139 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 12 18:55:36.686188 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 18:55:36.687566 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 12 18:55:36.687616 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 18:55:36.694080 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Dec 12 18:55:36.694151 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Dec 12 18:55:36.694202 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Dec 12 18:55:36.698499 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 12 18:55:36.698616 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 12 18:55:36.700814 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 12 18:55:36.702665 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 12 18:55:36.723750 systemd[1]: Switching root. Dec 12 18:55:36.754211 systemd-journald[187]: Journal stopped Dec 12 18:55:37.960299 systemd-journald[187]: Received SIGTERM from PID 1 (systemd). 
Dec 12 18:55:37.960325 kernel: SELinux: policy capability network_peer_controls=1 Dec 12 18:55:37.960337 kernel: SELinux: policy capability open_perms=1 Dec 12 18:55:37.960346 kernel: SELinux: policy capability extended_socket_class=1 Dec 12 18:55:37.960355 kernel: SELinux: policy capability always_check_network=0 Dec 12 18:55:37.960366 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 12 18:55:37.960377 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 12 18:55:37.960386 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 12 18:55:37.960395 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 12 18:55:37.960404 kernel: SELinux: policy capability userspace_initial_context=0 Dec 12 18:55:37.960413 kernel: audit: type=1403 audit(1765565736.932:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 12 18:55:37.960424 systemd[1]: Successfully loaded SELinux policy in 91.556ms. Dec 12 18:55:37.960436 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.711ms. Dec 12 18:55:37.960448 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 12 18:55:37.960473 systemd[1]: Detected virtualization kvm. Dec 12 18:55:37.960484 systemd[1]: Detected architecture x86-64. Dec 12 18:55:37.960497 systemd[1]: Detected first boot. Dec 12 18:55:37.960507 systemd[1]: Initializing machine ID from random generator. Dec 12 18:55:37.960517 zram_generator::config[1105]: No configuration found. Dec 12 18:55:37.960529 kernel: Guest personality initialized and is inactive Dec 12 18:55:37.960539 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Dec 12 18:55:37.960548 kernel: Initialized host personality Dec 12 18:55:37.960558 kernel: NET: Registered PF_VSOCK protocol family Dec 12 18:55:37.960568 systemd[1]: Populated /etc with preset unit settings. Dec 12 18:55:37.960581 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Dec 12 18:55:37.960591 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 12 18:55:37.960601 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 12 18:55:37.960612 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 12 18:55:37.960622 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 12 18:55:37.960632 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 12 18:55:37.960642 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 12 18:55:37.960654 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 12 18:55:37.960665 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 12 18:55:37.960675 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 12 18:55:37.960685 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 12 18:55:37.960695 systemd[1]: Created slice user.slice - User and Session Slice. Dec 12 18:55:37.960705 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Dec 12 18:55:37.960716 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 12 18:55:37.960726 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 12 18:55:37.960738 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 12 18:55:37.960752 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 12 18:55:37.960763 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 12 18:55:37.960774 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 12 18:55:37.960784 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 18:55:37.960795 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 12 18:55:37.960805 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 12 18:55:37.960817 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 12 18:55:37.960828 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 12 18:55:37.960838 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 12 18:55:37.960849 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 18:55:37.960859 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 12 18:55:37.960869 systemd[1]: Reached target slices.target - Slice Units. Dec 12 18:55:37.960880 systemd[1]: Reached target swap.target - Swaps. Dec 12 18:55:37.960890 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 12 18:55:37.960900 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 12 18:55:37.960913 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 12 18:55:37.960923 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 12 18:55:37.960934 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 12 18:55:37.960944 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 18:55:37.960956 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 12 18:55:37.960966 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 12 18:55:37.960977 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 12 18:55:37.960987 systemd[1]: Mounting media.mount - External Media Directory... Dec 12 18:55:37.960998 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:55:37.961008 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 12 18:55:37.961018 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 12 18:55:37.961029 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 12 18:55:37.961041 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 12 18:55:37.961052 systemd[1]: Reached target machines.target - Containers. Dec 12 18:55:37.961062 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Dec 12 18:55:37.961073 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 18:55:37.961083 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 12 18:55:37.961093 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 12 18:55:37.961103 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 18:55:37.961113 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 12 18:55:37.961124 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 18:55:37.961136 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 12 18:55:37.961146 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 18:55:37.961157 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 12 18:55:37.961167 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 12 18:55:37.961177 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 12 18:55:37.961187 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 12 18:55:37.961198 systemd[1]: Stopped systemd-fsck-usr.service. Dec 12 18:55:37.961209 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 18:55:37.961222 kernel: ACPI: bus type drm_connector registered Dec 12 18:55:37.961232 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 12 18:55:37.961242 kernel: fuse: init (API version 7.41) Dec 12 18:55:37.961251 kernel: loop: module loaded Dec 12 18:55:37.961261 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 12 18:55:37.961272 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 12 18:55:37.961282 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 12 18:55:37.961292 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 12 18:55:37.961304 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 12 18:55:37.961315 systemd[1]: verity-setup.service: Deactivated successfully. Dec 12 18:55:37.961325 systemd[1]: Stopped verity-setup.service. Dec 12 18:55:37.961357 systemd-journald[1187]: Collecting audit messages is disabled. Dec 12 18:55:37.961380 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:55:37.961394 systemd-journald[1187]: Journal started Dec 12 18:55:37.961414 systemd-journald[1187]: Runtime Journal (/run/log/journal/35822e05a6de4b02824616e79afa3d66) is 8M, max 78.2M, 70.2M free. Dec 12 18:55:37.572389 systemd[1]: Queued start job for default target multi-user.target. Dec 12 18:55:37.592705 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Dec 12 18:55:37.593280 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 12 18:55:37.971502 systemd[1]: Started systemd-journald.service - Journal Service. Dec 12 18:55:37.972887 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Dec 12 18:55:37.973774 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 12 18:55:37.974648 systemd[1]: Mounted media.mount - External Media Directory. Dec 12 18:55:37.975514 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 12 18:55:37.976378 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 12 18:55:37.977303 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 12 18:55:37.978390 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 12 18:55:37.979712 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 18:55:37.980803 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 12 18:55:37.981065 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 12 18:55:37.982224 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 18:55:37.982502 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 18:55:37.983722 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 12 18:55:37.983924 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 12 18:55:37.985100 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 18:55:37.985362 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 18:55:37.986577 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 12 18:55:37.986846 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 12 18:55:37.987903 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 18:55:37.988155 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 18:55:37.989347 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 12 18:55:37.990594 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 18:55:37.991718 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 12 18:55:37.992905 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 12 18:55:38.007189 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 12 18:55:38.010542 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 12 18:55:38.013609 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 12 18:55:38.016409 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 12 18:55:38.016443 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 12 18:55:38.018146 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 12 18:55:38.028582 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 12 18:55:38.031423 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 18:55:38.033668 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 12 18:55:38.039692 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 12 18:55:38.040808 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Dec 12 18:55:38.043051 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 12 18:55:38.045073 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 12 18:55:38.048634 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 12 18:55:38.053764 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 12 18:55:38.056643 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 12 18:55:38.060156 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 12 18:55:38.061764 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 12 18:55:38.071968 systemd-journald[1187]: Time spent on flushing to /var/log/journal/35822e05a6de4b02824616e79afa3d66 is 83.585ms for 1006 entries. Dec 12 18:55:38.071968 systemd-journald[1187]: System Journal (/var/log/journal/35822e05a6de4b02824616e79afa3d66) is 8M, max 195.6M, 187.6M free. Dec 12 18:55:38.169659 systemd-journald[1187]: Received client request to flush runtime journal. Dec 12 18:55:38.169716 kernel: loop0: detected capacity change from 0 to 219144 Dec 12 18:55:38.169749 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 12 18:55:38.104357 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 12 18:55:38.105549 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 12 18:55:38.109647 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 12 18:55:38.146169 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 12 18:55:38.170288 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 18:55:38.177577 kernel: loop1: detected capacity change from 0 to 128560 Dec 12 18:55:38.179607 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 12 18:55:38.187825 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 12 18:55:38.190292 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 12 18:55:38.196583 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 12 18:55:38.214517 kernel: loop2: detected capacity change from 0 to 110984 Dec 12 18:55:38.221101 systemd-tmpfiles[1246]: ACLs are not supported, ignoring. Dec 12 18:55:38.221117 systemd-tmpfiles[1246]: ACLs are not supported, ignoring. Dec 12 18:55:38.227298 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 18:55:38.255498 kernel: loop3: detected capacity change from 0 to 8 Dec 12 18:55:38.275498 kernel: loop4: detected capacity change from 0 to 219144 Dec 12 18:55:38.304498 kernel: loop5: detected capacity change from 0 to 128560 Dec 12 18:55:38.329485 kernel: loop6: detected capacity change from 0 to 110984 Dec 12 18:55:38.358482 kernel: loop7: detected capacity change from 0 to 8 Dec 12 18:55:38.360769 (sd-merge)[1252]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'. Dec 12 18:55:38.361419 (sd-merge)[1252]: Merged extensions into '/usr'. Dec 12 18:55:38.370578 systemd[1]: Reload requested from client PID 1228 ('systemd-sysext') (unit systemd-sysext.service)... Dec 12 18:55:38.370601 systemd[1]: Reloading... Dec 12 18:55:38.478539 zram_generator::config[1280]: No configuration found. 
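The (sd-merge) lines above report four system extensions merged into /usr: systemd-sysext collects extension images or symlinks from hierarchies such as /etc/extensions, /run/extensions, and /var/lib/extensions (the kubernetes.raw symlink written by the files stage is one of them) and mounts the combined tree as an overlay over /usr. A hypothetical enumerator for those hierarchies; the first-hierarchy-wins rule here is an assumption about precedence:

    import os
    from pathlib import Path

    def list_extensions(root="/"):
        """Map extension name -> resolved image path across sysext hierarchies."""
        found = {}
        for d in ("etc/extensions", "run/extensions", "var/lib/extensions"):
            p = Path(root, d)
            if not p.is_dir():
                continue
            for entry in sorted(p.iterdir()):
                name = entry.name.removesuffix(".raw")
                # Keep the first hierarchy's entry for a given name (assumed precedence).
                found.setdefault(name, os.path.realpath(entry))
        return found

    print(list_extensions())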
Dec 12 18:55:38.526425 ldconfig[1223]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 12 18:55:38.684536 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 12 18:55:38.684935 systemd[1]: Reloading finished in 313 ms. Dec 12 18:55:38.716245 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 12 18:55:38.717563 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 12 18:55:38.718696 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 12 18:55:38.727754 systemd[1]: Starting ensure-sysext.service... Dec 12 18:55:38.730586 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 12 18:55:38.733591 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 18:55:38.753924 systemd[1]: Reload requested from client PID 1322 ('systemctl') (unit ensure-sysext.service)... Dec 12 18:55:38.753942 systemd[1]: Reloading... Dec 12 18:55:38.770720 systemd-udevd[1324]: Using default interface naming scheme 'v255'. Dec 12 18:55:38.775896 systemd-tmpfiles[1323]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 12 18:55:38.776359 systemd-tmpfiles[1323]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 12 18:55:38.777215 systemd-tmpfiles[1323]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 12 18:55:38.777703 systemd-tmpfiles[1323]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 12 18:55:38.781401 systemd-tmpfiles[1323]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 12 18:55:38.781725 systemd-tmpfiles[1323]: ACLs are not supported, ignoring. Dec 12 18:55:38.781795 systemd-tmpfiles[1323]: ACLs are not supported, ignoring. Dec 12 18:55:38.787911 systemd-tmpfiles[1323]: Detected autofs mount point /boot during canonicalization of boot. Dec 12 18:55:38.788051 systemd-tmpfiles[1323]: Skipping /boot Dec 12 18:55:38.812808 systemd-tmpfiles[1323]: Detected autofs mount point /boot during canonicalization of boot. Dec 12 18:55:38.812930 systemd-tmpfiles[1323]: Skipping /boot Dec 12 18:55:38.856504 zram_generator::config[1352]: No configuration found. Dec 12 18:55:39.098165 systemd[1]: Reloading finished in 343 ms. Dec 12 18:55:39.109362 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 18:55:39.111007 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 18:55:39.120649 kernel: mousedev: PS/2 mouse device common for all mice Dec 12 18:55:39.133256 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Dec 12 18:55:39.133736 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 12 18:55:39.138524 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Dec 12 18:55:39.153595 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 12 18:55:39.155515 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:55:39.159499 kernel: ACPI: button: Power Button [PWRF] Dec 12 18:55:39.158160 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
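The "Duplicate line for path ..., ignoring" notices above come from systemd-tmpfiles skipping tmpfiles.d entries that claim a path an earlier configuration file already covered. A toy version of that check, under the assumption that entries are processed in order and the first one wins:

    def first_wins(entries):
        """entries: iterable of (source, line), line in tmpfiles.d 'Type Path Mode ...' form."""
        seen, kept = set(), []
        for source, line in entries:
            path = line.split()[1]
            if path in seen:
                print(f'{source}: Duplicate line for path "{path}", ignoring.')
                continue
            seen.add(path)
            kept.append((source, line))
        return kept

    # Toy entries echoing two of the paths warned about above; the fields
    # after each path are illustrative, not taken from the real config files.
    first_wins([
        ("/usr/lib/tmpfiles.d/systemd.conf:29", "d /var/lib/systemd 0755 root root -"),
        ("/usr/lib/tmpfiles.d/example.conf:1", "d /var/lib/systemd 0700 root root -"),
    ])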
Dec 12 18:55:39.160764 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 12 18:55:39.162651 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 18:55:39.164728 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 18:55:39.178367 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 18:55:39.181704 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 18:55:39.182588 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 18:55:39.182682 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 18:55:39.184785 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 12 18:55:39.189701 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 12 18:55:39.194734 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 12 18:55:39.197505 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 12 18:55:39.198517 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:55:39.201003 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 18:55:39.201251 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 18:55:39.211096 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:55:39.211259 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 18:55:39.219711 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 18:55:39.221630 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 18:55:39.228890 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 18:55:39.229354 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:55:39.234145 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:55:39.234724 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 18:55:39.237853 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 12 18:55:39.238718 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 18:55:39.238817 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Dec 12 18:55:39.238935 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:55:39.254434 systemd[1]: Finished ensure-sysext.service. Dec 12 18:55:39.256017 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 12 18:55:39.268687 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 12 18:55:39.276958 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 12 18:55:39.278547 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 12 18:55:39.288376 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 12 18:55:39.300850 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 18:55:39.302326 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 18:55:39.309782 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 18:55:39.310042 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 18:55:39.311880 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 12 18:55:39.316901 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 18:55:39.317634 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 18:55:39.318624 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 12 18:55:39.321856 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 12 18:55:39.326220 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 12 18:55:39.326847 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 12 18:55:39.333746 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 12 18:55:39.358785 augenrules[1487]: No rules Dec 12 18:55:39.363169 systemd[1]: audit-rules.service: Deactivated successfully. Dec 12 18:55:39.364981 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 12 18:55:39.368511 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 12 18:55:39.378590 kernel: EDAC MC: Ver: 3.0.0 Dec 12 18:55:39.398297 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 12 18:55:39.459548 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 18:55:39.466965 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Dec 12 18:55:39.471214 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 12 18:55:39.502824 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 12 18:55:39.620824 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 18:55:39.646895 systemd-networkd[1448]: lo: Link UP Dec 12 18:55:39.647183 systemd-networkd[1448]: lo: Gained carrier Dec 12 18:55:39.649014 systemd-networkd[1448]: Enumeration completed Dec 12 18:55:39.649144 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Dec 12 18:55:39.650950 systemd-networkd[1448]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 12 18:55:39.651697 systemd-networkd[1448]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 12 18:55:39.652629 systemd-networkd[1448]: eth0: Link UP Dec 12 18:55:39.652662 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 12 18:55:39.652901 systemd-networkd[1448]: eth0: Gained carrier Dec 12 18:55:39.652960 systemd-networkd[1448]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 12 18:55:39.658640 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 12 18:55:39.664684 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 12 18:55:39.666704 systemd[1]: Reached target time-set.target - System Time Set. Dec 12 18:55:39.671421 systemd-resolved[1450]: Positive Trust Anchors: Dec 12 18:55:39.671699 systemd-resolved[1450]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 12 18:55:39.671767 systemd-resolved[1450]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 12 18:55:39.675358 systemd-resolved[1450]: Defaulting to hostname 'linux'. Dec 12 18:55:39.679559 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 12 18:55:39.680446 systemd[1]: Reached target network.target - Network. Dec 12 18:55:39.681238 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 12 18:55:39.682075 systemd[1]: Reached target sysinit.target - System Initialization. Dec 12 18:55:39.682981 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 12 18:55:39.683844 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 12 18:55:39.684669 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Dec 12 18:55:39.685632 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 12 18:55:39.686527 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 12 18:55:39.687336 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 12 18:55:39.688097 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 12 18:55:39.688129 systemd[1]: Reached target paths.target - Path Units. Dec 12 18:55:39.688823 systemd[1]: Reached target timers.target - Timer Units. Dec 12 18:55:39.690707 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 12 18:55:39.692948 systemd[1]: Starting docker.socket - Docker Socket for the API... 
Dec 12 18:55:39.695713 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 12 18:55:39.696633 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 12 18:55:39.697383 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 12 18:55:39.700429 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 12 18:55:39.701766 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 12 18:55:39.703638 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 12 18:55:39.704680 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 12 18:55:39.706778 systemd[1]: Reached target sockets.target - Socket Units. Dec 12 18:55:39.707568 systemd[1]: Reached target basic.target - Basic System. Dec 12 18:55:39.708376 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 12 18:55:39.708657 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 12 18:55:39.709947 systemd[1]: Starting containerd.service - containerd container runtime... Dec 12 18:55:39.713579 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 12 18:55:39.726363 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 12 18:55:39.728860 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 12 18:55:39.731486 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 12 18:55:39.736651 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 12 18:55:39.738541 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 12 18:55:39.740653 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Dec 12 18:55:39.746911 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 12 18:55:39.777556 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 12 18:55:39.781914 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 12 18:55:39.784890 coreos-metadata[1520]: Dec 12 18:55:39.784 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Dec 12 18:55:39.785719 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 12 18:55:39.794789 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 12 18:55:39.796289 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 12 18:55:39.796750 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 12 18:55:39.798117 jq[1523]: false Dec 12 18:55:39.800497 google_oslogin_nss_cache[1525]: oslogin_cache_refresh[1525]: Refreshing passwd entry cache Dec 12 18:55:39.799652 oslogin_cache_refresh[1525]: Refreshing passwd entry cache Dec 12 18:55:39.800642 systemd[1]: Starting update-engine.service - Update Engine... Dec 12 18:55:39.804645 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Dec 12 18:55:39.814420 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 12 18:55:39.817362 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 12 18:55:39.817629 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 12 18:55:39.818315 oslogin_cache_refresh[1525]: Failure getting users, quitting Dec 12 18:55:39.818595 google_oslogin_nss_cache[1525]: oslogin_cache_refresh[1525]: Failure getting users, quitting Dec 12 18:55:39.818595 google_oslogin_nss_cache[1525]: oslogin_cache_refresh[1525]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 12 18:55:39.818595 google_oslogin_nss_cache[1525]: oslogin_cache_refresh[1525]: Refreshing group entry cache Dec 12 18:55:39.818336 oslogin_cache_refresh[1525]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 12 18:55:39.818399 oslogin_cache_refresh[1525]: Refreshing group entry cache Dec 12 18:55:39.821944 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 12 18:55:39.822240 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 12 18:55:39.824010 google_oslogin_nss_cache[1525]: oslogin_cache_refresh[1525]: Failure getting groups, quitting Dec 12 18:55:39.826501 oslogin_cache_refresh[1525]: Failure getting groups, quitting Dec 12 18:55:39.826592 google_oslogin_nss_cache[1525]: oslogin_cache_refresh[1525]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 12 18:55:39.826623 oslogin_cache_refresh[1525]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 12 18:55:39.839699 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Dec 12 18:55:39.839981 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Dec 12 18:55:39.842388 jq[1535]: true Dec 12 18:55:39.844634 extend-filesystems[1524]: Found /dev/sda6 Dec 12 18:55:39.872412 extend-filesystems[1524]: Found /dev/sda9 Dec 12 18:55:39.873176 jq[1559]: true Dec 12 18:55:39.874496 update_engine[1534]: I20251212 18:55:39.874184 1534 main.cc:92] Flatcar Update Engine starting Dec 12 18:55:39.879449 extend-filesystems[1524]: Checking size of /dev/sda9 Dec 12 18:55:39.878114 (ntainerd)[1560]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 12 18:55:39.893039 systemd[1]: motdgen.service: Deactivated successfully. Dec 12 18:55:39.893327 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 12 18:55:39.894964 dbus-daemon[1521]: [system] SELinux support is enabled Dec 12 18:55:39.895554 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 12 18:55:39.900302 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 12 18:55:39.900810 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 12 18:55:39.901588 tar[1541]: linux-amd64/LICENSE Dec 12 18:55:39.901776 tar[1541]: linux-amd64/helm Dec 12 18:55:39.902065 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
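The oslogin_cache_refresh messages above describe a write-to-temp-then-swap update: a candidate cache is written beside the live file and discarded when the fetch fails and the result is empty ("Produced empty passwd cache file, removing ..."). A generic Python sketch of that pattern under assumed paths; the real tool's logic is not shown in the log.

import os

def refresh_cache(live_path: str, fetch) -> None:
    """Write a candidate cache next to the live file; swap it in only if it
    has content, otherwise remove it (as the oslogin log lines above show)."""
    candidate = live_path + ".bak"        # assumed naming, mirrors the log
    with open(candidate, "w") as f:
        for line in fetch():              # fetch() yields passwd-style lines
            f.write(line + "\n")
    if os.path.getsize(candidate) == 0:
        os.remove(candidate)              # "Produced empty ... removing ..."
    else:
        os.replace(candidate, live_path)  # atomic on POSIX

# A fetch that fails like the one in the log yields nothing:
refresh_cache("/tmp/oslogin_passwd.cache", fetch=lambda: iter(()))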
Dec 12 18:55:39.902090 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 12 18:55:39.911575 systemd[1]: Started update-engine.service - Update Engine. Dec 12 18:55:39.913495 update_engine[1534]: I20251212 18:55:39.912436 1534 update_check_scheduler.cc:74] Next update check in 8m43s Dec 12 18:55:39.917930 extend-filesystems[1524]: Resized partition /dev/sda9 Dec 12 18:55:39.919146 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 12 18:55:39.924629 extend-filesystems[1574]: resize2fs 1.47.3 (8-Jul-2025) Dec 12 18:55:39.933527 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 20360187 blocks Dec 12 18:55:39.954348 systemd-logind[1533]: Watching system buttons on /dev/input/event2 (Power Button) Dec 12 18:55:39.955966 systemd-logind[1533]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 12 18:55:39.956249 systemd-logind[1533]: New seat seat0. Dec 12 18:55:39.958043 systemd[1]: Started systemd-logind.service - User Login Management. Dec 12 18:55:40.030780 bash[1587]: Updated "/home/core/.ssh/authorized_keys" Dec 12 18:55:40.032345 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 12 18:55:40.041126 systemd[1]: Starting sshkeys.service... Dec 12 18:55:40.115824 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 12 18:55:40.128274 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 12 18:55:40.133541 kernel: EXT4-fs (sda9): resized filesystem to 20360187 Dec 12 18:55:40.157089 extend-filesystems[1574]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Dec 12 18:55:40.157089 extend-filesystems[1574]: old_desc_blocks = 1, new_desc_blocks = 10 Dec 12 18:55:40.157089 extend-filesystems[1574]: The filesystem on /dev/sda9 is now 20360187 (4k) blocks long. Dec 12 18:55:40.156951 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 12 18:55:40.170602 extend-filesystems[1524]: Resized filesystem in /dev/sda9 Dec 12 18:55:40.161610 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
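For scale, the resize2fs lines above grow the root filesystem from 553472 to 20360187 blocks of 4 KiB. A quick back-of-envelope check in Python:

BLOCK = 4096                       # ext4 block size on /dev/sda9 ("(4k)")
old_blocks, new_blocks = 553_472, 20_360_187

gib = lambda blocks: blocks * BLOCK / 2**30
print(f"before: {gib(old_blocks):.2f} GiB")  # ~2.11 GiB (the initial image)
print(f"after:  {gib(new_blocks):.2f} GiB")  # ~77.67 GiB (the whole disk)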
Dec 12 18:55:40.241452 containerd[1560]: time="2025-12-12T18:55:40Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 12 18:55:40.243643 containerd[1560]: time="2025-12-12T18:55:40.242984450Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Dec 12 18:55:40.257022 coreos-metadata[1596]: Dec 12 18:55:40.256 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Dec 12 18:55:40.260314 containerd[1560]: time="2025-12-12T18:55:40.260273530Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.05µs" Dec 12 18:55:40.260314 containerd[1560]: time="2025-12-12T18:55:40.260309590Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 12 18:55:40.260369 containerd[1560]: time="2025-12-12T18:55:40.260330950Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 12 18:55:40.260719 containerd[1560]: time="2025-12-12T18:55:40.260697050Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 12 18:55:40.260754 containerd[1560]: time="2025-12-12T18:55:40.260719790Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 12 18:55:40.260754 containerd[1560]: time="2025-12-12T18:55:40.260746710Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 12 18:55:40.260837 containerd[1560]: time="2025-12-12T18:55:40.260813950Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 12 18:55:40.260837 containerd[1560]: time="2025-12-12T18:55:40.260833860Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 12 18:55:40.261113 containerd[1560]: time="2025-12-12T18:55:40.261082910Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 12 18:55:40.261113 containerd[1560]: time="2025-12-12T18:55:40.261106970Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 12 18:55:40.261152 containerd[1560]: time="2025-12-12T18:55:40.261122450Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 12 18:55:40.261152 containerd[1560]: time="2025-12-12T18:55:40.261133520Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 12 18:55:40.261251 containerd[1560]: time="2025-12-12T18:55:40.261226870Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 12 18:55:40.261509 containerd[1560]: time="2025-12-12T18:55:40.261484250Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 12 18:55:40.261551 containerd[1560]: time="2025-12-12T18:55:40.261526350Z" level=info msg="skip loading plugin" error="lstat 
/var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 12 18:55:40.261551 containerd[1560]: time="2025-12-12T18:55:40.261544860Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 12 18:55:40.262651 containerd[1560]: time="2025-12-12T18:55:40.262619450Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 12 18:55:40.264940 locksmithd[1570]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 12 18:55:40.265973 containerd[1560]: time="2025-12-12T18:55:40.265945610Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 12 18:55:40.266047 containerd[1560]: time="2025-12-12T18:55:40.266023580Z" level=info msg="metadata content store policy set" policy=shared Dec 12 18:55:40.268773 containerd[1560]: time="2025-12-12T18:55:40.268744550Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 12 18:55:40.268932 containerd[1560]: time="2025-12-12T18:55:40.268908320Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 12 18:55:40.268955 containerd[1560]: time="2025-12-12T18:55:40.268931520Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 12 18:55:40.268955 containerd[1560]: time="2025-12-12T18:55:40.268943960Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 12 18:55:40.269002 containerd[1560]: time="2025-12-12T18:55:40.268954110Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 12 18:55:40.269093 containerd[1560]: time="2025-12-12T18:55:40.269067180Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 12 18:55:40.269168 containerd[1560]: time="2025-12-12T18:55:40.269106400Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 12 18:55:40.269168 containerd[1560]: time="2025-12-12T18:55:40.269118230Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 12 18:55:40.269168 containerd[1560]: time="2025-12-12T18:55:40.269127780Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 12 18:55:40.269168 containerd[1560]: time="2025-12-12T18:55:40.269136810Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 12 18:55:40.269168 containerd[1560]: time="2025-12-12T18:55:40.269144370Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 12 18:55:40.269168 containerd[1560]: time="2025-12-12T18:55:40.269153810Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 12 18:55:40.270486 containerd[1560]: time="2025-12-12T18:55:40.269526280Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 12 18:55:40.270486 containerd[1560]: time="2025-12-12T18:55:40.269618440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 12 18:55:40.270486 containerd[1560]: 
time="2025-12-12T18:55:40.269635250Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 12 18:55:40.270486 containerd[1560]: time="2025-12-12T18:55:40.269646040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 12 18:55:40.270486 containerd[1560]: time="2025-12-12T18:55:40.269655320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 12 18:55:40.270486 containerd[1560]: time="2025-12-12T18:55:40.269664740Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 12 18:55:40.270486 containerd[1560]: time="2025-12-12T18:55:40.269674020Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 12 18:55:40.270486 containerd[1560]: time="2025-12-12T18:55:40.269863900Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 12 18:55:40.270486 containerd[1560]: time="2025-12-12T18:55:40.269874060Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 12 18:55:40.270486 containerd[1560]: time="2025-12-12T18:55:40.269883070Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 12 18:55:40.270486 containerd[1560]: time="2025-12-12T18:55:40.269901160Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 12 18:55:40.270486 containerd[1560]: time="2025-12-12T18:55:40.269954710Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 12 18:55:40.270486 containerd[1560]: time="2025-12-12T18:55:40.269967410Z" level=info msg="Start snapshots syncer" Dec 12 18:55:40.270486 containerd[1560]: time="2025-12-12T18:55:40.270162590Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 12 18:55:40.270909 containerd[1560]: time="2025-12-12T18:55:40.270827170Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 12 18:55:40.271002 containerd[1560]: time="2025-12-12T18:55:40.270915720Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 12 18:55:40.271221 containerd[1560]: time="2025-12-12T18:55:40.271197350Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 12 18:55:40.271400 containerd[1560]: time="2025-12-12T18:55:40.271375330Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 12 18:55:40.271670 containerd[1560]: time="2025-12-12T18:55:40.271646370Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 12 18:55:40.271670 containerd[1560]: time="2025-12-12T18:55:40.271668260Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 12 18:55:40.271711 containerd[1560]: time="2025-12-12T18:55:40.271685440Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 12 18:55:40.271711 containerd[1560]: time="2025-12-12T18:55:40.271697030Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 12 18:55:40.271754 containerd[1560]: time="2025-12-12T18:55:40.271725520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 12 18:55:40.271754 containerd[1560]: time="2025-12-12T18:55:40.271745900Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 12 18:55:40.271789 containerd[1560]: time="2025-12-12T18:55:40.271764160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 12 18:55:40.271789 containerd[1560]: 
time="2025-12-12T18:55:40.271773610Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 12 18:55:40.271900 containerd[1560]: time="2025-12-12T18:55:40.271877030Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 12 18:55:40.271922 containerd[1560]: time="2025-12-12T18:55:40.271914820Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 18:55:40.272151 containerd[1560]: time="2025-12-12T18:55:40.271927890Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 18:55:40.272151 containerd[1560]: time="2025-12-12T18:55:40.272145910Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 18:55:40.272194 containerd[1560]: time="2025-12-12T18:55:40.272157450Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 18:55:40.272194 containerd[1560]: time="2025-12-12T18:55:40.272165470Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 12 18:55:40.272249 containerd[1560]: time="2025-12-12T18:55:40.272176320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 12 18:55:40.272291 containerd[1560]: time="2025-12-12T18:55:40.272253680Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 12 18:55:40.272310 containerd[1560]: time="2025-12-12T18:55:40.272294570Z" level=info msg="runtime interface created" Dec 12 18:55:40.272310 containerd[1560]: time="2025-12-12T18:55:40.272301150Z" level=info msg="created NRI interface" Dec 12 18:55:40.272310 containerd[1560]: time="2025-12-12T18:55:40.272308890Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 12 18:55:40.272406 containerd[1560]: time="2025-12-12T18:55:40.272319220Z" level=info msg="Connect containerd service" Dec 12 18:55:40.272982 containerd[1560]: time="2025-12-12T18:55:40.272335380Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 12 18:55:40.273109 containerd[1560]: time="2025-12-12T18:55:40.273078600Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 12 18:55:40.383506 sshd_keygen[1551]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 12 18:55:40.385352 systemd-networkd[1448]: eth0: DHCPv4 address 172.237.134.203/24, gateway 172.237.134.1 acquired from 23.213.15.219 Dec 12 18:55:40.385987 dbus-daemon[1521]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1448 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 12 18:55:40.389298 systemd-timesyncd[1464]: Network configuration changed, trying to establish connection. Dec 12 18:55:40.389805 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
Dec 12 18:55:40.429261 containerd[1560]: time="2025-12-12T18:55:40.429069570Z" level=info msg="Start subscribing containerd event" Dec 12 18:55:40.430260 containerd[1560]: time="2025-12-12T18:55:40.429689540Z" level=info msg="Start recovering state" Dec 12 18:55:40.430260 containerd[1560]: time="2025-12-12T18:55:40.429776400Z" level=info msg="Start event monitor" Dec 12 18:55:40.430260 containerd[1560]: time="2025-12-12T18:55:40.429789090Z" level=info msg="Start cni network conf syncer for default" Dec 12 18:55:40.430260 containerd[1560]: time="2025-12-12T18:55:40.429796120Z" level=info msg="Start streaming server" Dec 12 18:55:40.430260 containerd[1560]: time="2025-12-12T18:55:40.429805600Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 12 18:55:40.430260 containerd[1560]: time="2025-12-12T18:55:40.429812110Z" level=info msg="runtime interface starting up..." Dec 12 18:55:40.430260 containerd[1560]: time="2025-12-12T18:55:40.429817540Z" level=info msg="starting plugins..." Dec 12 18:55:40.430260 containerd[1560]: time="2025-12-12T18:55:40.429830710Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 12 18:55:40.430578 containerd[1560]: time="2025-12-12T18:55:40.430562060Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 12 18:55:40.430888 containerd[1560]: time="2025-12-12T18:55:40.430874000Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 12 18:55:40.432441 systemd[1]: Started containerd.service - containerd container runtime. Dec 12 18:55:40.435821 containerd[1560]: time="2025-12-12T18:55:40.435446910Z" level=info msg="containerd successfully booted in 0.195095s" Dec 12 18:55:40.436758 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 12 18:55:40.440993 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 12 18:55:40.465383 systemd[1]: issuegen.service: Deactivated successfully. Dec 12 18:55:40.466582 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 12 18:55:40.469501 systemd-timesyncd[1464]: Contacted time server 144.202.0.197:123 (0.flatcar.pool.ntp.org). Dec 12 18:55:40.469965 systemd-timesyncd[1464]: Initial clock synchronization to Fri 2025-12-12 18:55:40.721564 UTC. Dec 12 18:55:40.470819 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 12 18:55:40.496167 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 12 18:55:40.500191 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 12 18:55:40.502635 dbus-daemon[1521]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 12 18:55:40.503357 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 12 18:55:40.505307 systemd[1]: Reached target getty.target - Login Prompts. Dec 12 18:55:40.507099 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Dec 12 18:55:40.508506 dbus-daemon[1521]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1622 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 12 18:55:40.513313 systemd[1]: Starting polkit.service - Authorization Manager... Dec 12 18:55:40.524622 tar[1541]: linux-amd64/README.md Dec 12 18:55:40.541119 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
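The two systemd-timesyncd lines above let you read off the initial clock step: the sync message is journal-stamped 18:55:40.469965 but sets the clock to 18:55:40.721564, which suggests the guest clock started roughly a quarter second behind the NTP server. Checking the delta in Python:

from datetime import datetime

stamped = datetime.fromisoformat("2025-12-12 18:55:40.469965")
set_to  = datetime.fromisoformat("2025-12-12 18:55:40.721564")
print((set_to - stamped).total_seconds())  # ~0.2516 s forward step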
Dec 12 18:55:40.589302 polkitd[1640]: Started polkitd version 126 Dec 12 18:55:40.593406 polkitd[1640]: Loading rules from directory /etc/polkit-1/rules.d Dec 12 18:55:40.593688 polkitd[1640]: Loading rules from directory /run/polkit-1/rules.d Dec 12 18:55:40.593731 polkitd[1640]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Dec 12 18:55:40.593915 polkitd[1640]: Loading rules from directory /usr/local/share/polkit-1/rules.d Dec 12 18:55:40.593935 polkitd[1640]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Dec 12 18:55:40.593971 polkitd[1640]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 12 18:55:40.594578 polkitd[1640]: Finished loading, compiling and executing 2 rules Dec 12 18:55:40.594931 systemd[1]: Started polkit.service - Authorization Manager. Dec 12 18:55:40.595123 dbus-daemon[1521]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 12 18:55:40.595385 polkitd[1640]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 12 18:55:40.604214 systemd-hostnamed[1622]: Hostname set to <172-237-134-203> (transient) Dec 12 18:55:40.604227 systemd-resolved[1450]: System hostname changed to '172-237-134-203'. Dec 12 18:55:40.794366 coreos-metadata[1520]: Dec 12 18:55:40.794 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Dec 12 18:55:40.887115 coreos-metadata[1520]: Dec 12 18:55:40.887 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1 Dec 12 18:55:41.087351 coreos-metadata[1520]: Dec 12 18:55:41.087 INFO Fetch successful Dec 12 18:55:41.087351 coreos-metadata[1520]: Dec 12 18:55:41.087 INFO Fetching http://169.254.169.254/v1/network: Attempt #1 Dec 12 18:55:41.270885 coreos-metadata[1596]: Dec 12 18:55:41.270 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Dec 12 18:55:41.356227 coreos-metadata[1520]: Dec 12 18:55:41.356 INFO Fetch successful Dec 12 18:55:41.362404 coreos-metadata[1596]: Dec 12 18:55:41.362 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 Dec 12 18:55:41.469523 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 12 18:55:41.470879 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 12 18:55:41.472621 systemd-networkd[1448]: eth0: Gained IPv6LL Dec 12 18:55:41.475272 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 12 18:55:41.476690 systemd[1]: Reached target network-online.target - Network is Online. Dec 12 18:55:41.479452 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:55:41.482427 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 12 18:55:41.497908 coreos-metadata[1596]: Dec 12 18:55:41.497 INFO Fetch successful Dec 12 18:55:41.512431 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 12 18:55:41.526203 update-ssh-keys[1684]: Updated "/home/core/.ssh/authorized_keys" Dec 12 18:55:41.527361 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 12 18:55:41.530356 systemd[1]: Finished sshkeys.service. Dec 12 18:55:42.347145 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:55:42.348339 systemd[1]: Reached target multi-user.target - Multi-User System. 
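The transient hostname <172-237-134-203> set above matches the DHCPv4 address acquired earlier (172.237.134.203) with dots swapped for dashes; no platform-supplied hostname was available, so the address-derived form is used. The mapping is trivial:

ip = "172.237.134.203"      # from the DHCPv4 lease logged above
transient = ip.replace(".", "-")
print(transient)            # 172-237-134-203, as systemd-hostnamed set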
Dec 12 18:55:42.350016 systemd[1]: Startup finished in 2.878s (kernel) + 8.252s (initrd) + 5.506s (userspace) = 16.637s. Dec 12 18:55:42.411242 (kubelet)[1696]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 18:55:42.870262 kubelet[1696]: E1212 18:55:42.870206 1696 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 18:55:42.873545 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 18:55:42.873945 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 18:55:42.874340 systemd[1]: kubelet.service: Consumed 798ms CPU time, 257.2M memory peak. Dec 12 18:55:44.023883 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 12 18:55:44.025028 systemd[1]: Started sshd@0-172.237.134.203:22-139.178.68.195:44618.service - OpenSSH per-connection server daemon (139.178.68.195:44618). Dec 12 18:55:44.389561 sshd[1708]: Accepted publickey for core from 139.178.68.195 port 44618 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA Dec 12 18:55:44.391217 sshd-session[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:55:44.397349 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 12 18:55:44.398631 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 12 18:55:44.405339 systemd-logind[1533]: New session 1 of user core. Dec 12 18:55:44.422832 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 12 18:55:44.426103 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 12 18:55:44.437310 (systemd)[1713]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 12 18:55:44.439732 systemd-logind[1533]: New session c1 of user core. Dec 12 18:55:44.604121 systemd[1713]: Queued start job for default target default.target. Dec 12 18:55:44.615698 systemd[1713]: Created slice app.slice - User Application Slice. Dec 12 18:55:44.615723 systemd[1713]: Reached target paths.target - Paths. Dec 12 18:55:44.615765 systemd[1713]: Reached target timers.target - Timers. Dec 12 18:55:44.617217 systemd[1713]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 12 18:55:44.627829 systemd[1713]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 12 18:55:44.627943 systemd[1713]: Reached target sockets.target - Sockets. Dec 12 18:55:44.627982 systemd[1713]: Reached target basic.target - Basic System. Dec 12 18:55:44.628027 systemd[1713]: Reached target default.target - Main User Target. Dec 12 18:55:44.628060 systemd[1713]: Startup finished in 182ms. Dec 12 18:55:44.628164 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 12 18:55:44.638612 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 12 18:55:44.901715 systemd[1]: Started sshd@1-172.237.134.203:22-139.178.68.195:44620.service - OpenSSH per-connection server daemon (139.178.68.195:44620). 
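A small consistency check on the "Startup finished" line above: the three rounded components sum to 16.636 s while systemd reports 16.637 s total, the expected off-by-a-millisecond from rounding each stage separately.

kernel, initrd, userspace = 2.878, 8.252, 5.506        # seconds, from the log
print(f"{kernel + initrd + userspace:.3f}s summed vs 16.637s reported")
# 16.636s summed vs 16.637s reported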
Dec 12 18:55:45.249381 sshd[1724]: Accepted publickey for core from 139.178.68.195 port 44620 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA Dec 12 18:55:45.251014 sshd-session[1724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:55:45.256965 systemd-logind[1533]: New session 2 of user core. Dec 12 18:55:45.261633 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 12 18:55:45.498765 sshd[1727]: Connection closed by 139.178.68.195 port 44620 Dec 12 18:55:45.499431 sshd-session[1724]: pam_unix(sshd:session): session closed for user core Dec 12 18:55:45.503049 systemd[1]: sshd@1-172.237.134.203:22-139.178.68.195:44620.service: Deactivated successfully. Dec 12 18:55:45.507851 systemd[1]: session-2.scope: Deactivated successfully. Dec 12 18:55:45.509870 systemd-logind[1533]: Session 2 logged out. Waiting for processes to exit. Dec 12 18:55:45.510934 systemd-logind[1533]: Removed session 2. Dec 12 18:55:45.564818 systemd[1]: Started sshd@2-172.237.134.203:22-139.178.68.195:44632.service - OpenSSH per-connection server daemon (139.178.68.195:44632). Dec 12 18:55:45.908596 sshd[1733]: Accepted publickey for core from 139.178.68.195 port 44632 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA Dec 12 18:55:45.910217 sshd-session[1733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:55:45.915569 systemd-logind[1533]: New session 3 of user core. Dec 12 18:55:45.925594 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 12 18:55:46.154616 sshd[1736]: Connection closed by 139.178.68.195 port 44632 Dec 12 18:55:46.155417 sshd-session[1733]: pam_unix(sshd:session): session closed for user core Dec 12 18:55:46.160287 systemd[1]: sshd@2-172.237.134.203:22-139.178.68.195:44632.service: Deactivated successfully. Dec 12 18:55:46.162354 systemd[1]: session-3.scope: Deactivated successfully. Dec 12 18:55:46.163331 systemd-logind[1533]: Session 3 logged out. Waiting for processes to exit. Dec 12 18:55:46.165266 systemd-logind[1533]: Removed session 3. Dec 12 18:55:46.216359 systemd[1]: Started sshd@3-172.237.134.203:22-139.178.68.195:44638.service - OpenSSH per-connection server daemon (139.178.68.195:44638). Dec 12 18:55:46.560089 sshd[1742]: Accepted publickey for core from 139.178.68.195 port 44638 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA Dec 12 18:55:46.561705 sshd-session[1742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:55:46.566866 systemd-logind[1533]: New session 4 of user core. Dec 12 18:55:46.571598 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 12 18:55:46.810551 sshd[1745]: Connection closed by 139.178.68.195 port 44638 Dec 12 18:55:46.811160 sshd-session[1742]: pam_unix(sshd:session): session closed for user core Dec 12 18:55:46.815092 systemd-logind[1533]: Session 4 logged out. Waiting for processes to exit. Dec 12 18:55:46.815956 systemd[1]: sshd@3-172.237.134.203:22-139.178.68.195:44638.service: Deactivated successfully. Dec 12 18:55:46.817969 systemd[1]: session-4.scope: Deactivated successfully. Dec 12 18:55:46.819147 systemd-logind[1533]: Removed session 4. Dec 12 18:55:46.872688 systemd[1]: Started sshd@4-172.237.134.203:22-139.178.68.195:44652.service - OpenSSH per-connection server daemon (139.178.68.195:44652). 
Dec 12 18:55:47.230908 sshd[1751]: Accepted publickey for core from 139.178.68.195 port 44652 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA Dec 12 18:55:47.232396 sshd-session[1751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:55:47.236997 systemd-logind[1533]: New session 5 of user core. Dec 12 18:55:47.242604 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 12 18:55:47.442083 sudo[1755]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 12 18:55:47.442407 sudo[1755]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 18:55:47.454411 sudo[1755]: pam_unix(sudo:session): session closed for user root Dec 12 18:55:47.507532 sshd[1754]: Connection closed by 139.178.68.195 port 44652 Dec 12 18:55:47.508199 sshd-session[1751]: pam_unix(sshd:session): session closed for user core Dec 12 18:55:47.511576 systemd[1]: sshd@4-172.237.134.203:22-139.178.68.195:44652.service: Deactivated successfully. Dec 12 18:55:47.513250 systemd[1]: session-5.scope: Deactivated successfully. Dec 12 18:55:47.515789 systemd-logind[1533]: Session 5 logged out. Waiting for processes to exit. Dec 12 18:55:47.517000 systemd-logind[1533]: Removed session 5. Dec 12 18:55:47.578293 systemd[1]: Started sshd@5-172.237.134.203:22-139.178.68.195:44660.service - OpenSSH per-connection server daemon (139.178.68.195:44660). Dec 12 18:55:47.933838 sshd[1761]: Accepted publickey for core from 139.178.68.195 port 44660 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA Dec 12 18:55:47.935719 sshd-session[1761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:55:47.940528 systemd-logind[1533]: New session 6 of user core. Dec 12 18:55:47.947599 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 12 18:55:48.138402 sudo[1766]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 12 18:55:48.138807 sudo[1766]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 18:55:48.143033 sudo[1766]: pam_unix(sudo:session): session closed for user root Dec 12 18:55:48.148533 sudo[1765]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 12 18:55:48.148839 sudo[1765]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 18:55:48.157536 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 12 18:55:48.193588 augenrules[1788]: No rules Dec 12 18:55:48.194252 systemd[1]: audit-rules.service: Deactivated successfully. Dec 12 18:55:48.194514 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 12 18:55:48.196820 sudo[1765]: pam_unix(sudo:session): session closed for user root Dec 12 18:55:48.249841 sshd[1764]: Connection closed by 139.178.68.195 port 44660 Dec 12 18:55:48.250333 sshd-session[1761]: pam_unix(sshd:session): session closed for user core Dec 12 18:55:48.254892 systemd-logind[1533]: Session 6 logged out. Waiting for processes to exit. Dec 12 18:55:48.255870 systemd[1]: sshd@5-172.237.134.203:22-139.178.68.195:44660.service: Deactivated successfully. Dec 12 18:55:48.258127 systemd[1]: session-6.scope: Deactivated successfully. Dec 12 18:55:48.260209 systemd-logind[1533]: Removed session 6. Dec 12 18:55:48.310772 systemd[1]: Started sshd@6-172.237.134.203:22-139.178.68.195:44670.service - OpenSSH per-connection server daemon (139.178.68.195:44670). 
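The SSH churn above (sessions 1 through 7, all from 139.178.68.195 with the same RSA key) is easy to audit mechanically. A small sketch that pulls the user, source, and key fingerprint out of sshd "Accepted publickey" lines like the ones in this log:

import re

LINE = ("sshd[1751]: Accepted publickey for core from 139.178.68.195 "
        "port 44652 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA")

PAT = re.compile(
    r"Accepted publickey for (?P<user>\S+) from (?P<ip>\S+) "
    r"port (?P<port>\d+) ssh2: (?P<keytype>\S+) (?P<fp>SHA256:\S+)"
)

m = PAT.search(LINE)
print(m.group("user"), m.group("ip"), m.group("port"), m.group("fp"))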
Dec 12 18:55:48.644190 sshd[1797]: Accepted publickey for core from 139.178.68.195 port 44670 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA Dec 12 18:55:48.645968 sshd-session[1797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:55:48.650533 systemd-logind[1533]: New session 7 of user core. Dec 12 18:55:48.656583 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 12 18:55:48.841516 sudo[1801]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 12 18:55:48.841836 sudo[1801]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 18:55:49.138937 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 12 18:55:49.155954 (dockerd)[1820]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 12 18:55:49.385901 dockerd[1820]: time="2025-12-12T18:55:49.385817199Z" level=info msg="Starting up" Dec 12 18:55:49.386922 dockerd[1820]: time="2025-12-12T18:55:49.386888871Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 12 18:55:49.400038 dockerd[1820]: time="2025-12-12T18:55:49.399764165Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 12 18:55:49.438133 dockerd[1820]: time="2025-12-12T18:55:49.437803472Z" level=info msg="Loading containers: start." Dec 12 18:55:49.452517 kernel: Initializing XFRM netlink socket Dec 12 18:55:49.693903 systemd-networkd[1448]: docker0: Link UP Dec 12 18:55:49.696948 dockerd[1820]: time="2025-12-12T18:55:49.696914130Z" level=info msg="Loading containers: done." Dec 12 18:55:49.710103 dockerd[1820]: time="2025-12-12T18:55:49.710069029Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 12 18:55:49.710235 dockerd[1820]: time="2025-12-12T18:55:49.710135225Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 12 18:55:49.710235 dockerd[1820]: time="2025-12-12T18:55:49.710214411Z" level=info msg="Initializing buildkit" Dec 12 18:55:49.729208 dockerd[1820]: time="2025-12-12T18:55:49.729179368Z" level=info msg="Completed buildkit initialization" Dec 12 18:55:49.735327 dockerd[1820]: time="2025-12-12T18:55:49.735299693Z" level=info msg="Daemon has completed initialization" Dec 12 18:55:49.735493 dockerd[1820]: time="2025-12-12T18:55:49.735407236Z" level=info msg="API listen on /run/docker.sock" Dec 12 18:55:49.735449 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 12 18:55:50.215338 containerd[1560]: time="2025-12-12T18:55:50.215283757Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Dec 12 18:55:50.944035 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1997432700.mount: Deactivated successfully. 
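The mount unit above, var-lib-containerd-tmpmounts-containerd\x2dmount1997432700.mount, uses systemd's unit-name escaping: unescaped "-" stands for "/", while a literal "-" and other special bytes appear as \xNN. A small decoder, equivalent in spirit to `systemd-escape --unescape` (a simplification of the full rules):

import re

def unescape_unit(name: str) -> str:
    """Invert systemd unit-name escaping: '-' -> '/', then \\xNN -> byte."""
    name = name.removesuffix(".mount")
    name = name.replace("-", "/")          # unescaped dashes are separators
    return re.sub(r"\\x([0-9a-fA-F]{2})",  # \x2d etc. are literal bytes
                  lambda m: chr(int(m.group(1), 16)), name)

print("/" + unescape_unit(r"var-lib-containerd-tmpmounts-containerd\x2dmount1997432700.mount"))
# /var/lib/containerd/tmpmounts/containerd-mount1997432700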
Dec 12 18:55:52.049215 containerd[1560]: time="2025-12-12T18:55:52.049159385Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:55:52.050128 containerd[1560]: time="2025-12-12T18:55:52.050059572Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=27068073" Dec 12 18:55:52.050717 containerd[1560]: time="2025-12-12T18:55:52.050691306Z" level=info msg="ImageCreate event name:\"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:55:52.053436 containerd[1560]: time="2025-12-12T18:55:52.053391272Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:55:52.054536 containerd[1560]: time="2025-12-12T18:55:52.054379449Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"27064672\" in 1.839049723s" Dec 12 18:55:52.054536 containerd[1560]: time="2025-12-12T18:55:52.054408107Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\"" Dec 12 18:55:52.055382 containerd[1560]: time="2025-12-12T18:55:52.055347449Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Dec 12 18:55:53.124307 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 12 18:55:53.127550 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:55:53.333588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:55:53.342811 (kubelet)[2100]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 18:55:53.392945 kubelet[2100]: E1212 18:55:53.392830 2100 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 18:55:53.397106 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 18:55:53.397295 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 18:55:53.398575 systemd[1]: kubelet.service: Consumed 193ms CPU time, 110.9M memory peak. 
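The pull messages in this stretch include both byte counts and wall time, so effective registry throughput falls straight out. For the kube-apiserver image above:

size_bytes = 27_064_672   # from 'size "27064672"' in the pull message
elapsed_s  = 1.839049723  # from 'in 1.839049723s'
print(f"{size_bytes / elapsed_s / 2**20:.1f} MiB/s")  # ~14.0 MiB/s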
Dec 12 18:55:53.513359 containerd[1560]: time="2025-12-12T18:55:53.513284586Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:55:53.514133 containerd[1560]: time="2025-12-12T18:55:53.514098188Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=21162440" Dec 12 18:55:53.514674 containerd[1560]: time="2025-12-12T18:55:53.514648342Z" level=info msg="ImageCreate event name:\"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:55:53.516646 containerd[1560]: time="2025-12-12T18:55:53.516624932Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:55:53.517594 containerd[1560]: time="2025-12-12T18:55:53.517480310Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"22819474\" in 1.46200623s" Dec 12 18:55:53.517594 containerd[1560]: time="2025-12-12T18:55:53.517513240Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\"" Dec 12 18:55:53.517902 containerd[1560]: time="2025-12-12T18:55:53.517885843Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Dec 12 18:55:54.649665 containerd[1560]: time="2025-12-12T18:55:54.649613107Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:55:54.650517 containerd[1560]: time="2025-12-12T18:55:54.650393128Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=15725927" Dec 12 18:55:54.651098 containerd[1560]: time="2025-12-12T18:55:54.651073664Z" level=info msg="ImageCreate event name:\"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:55:54.653078 containerd[1560]: time="2025-12-12T18:55:54.653051767Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:55:54.653963 containerd[1560]: time="2025-12-12T18:55:54.653940162Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"17382979\" in 1.135980081s" Dec 12 18:55:54.654036 containerd[1560]: time="2025-12-12T18:55:54.654022913Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\"" Dec 12 18:55:54.654604 
containerd[1560]: time="2025-12-12T18:55:54.654379261Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Dec 12 18:55:55.812426 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3570395902.mount: Deactivated successfully. Dec 12 18:55:56.049326 containerd[1560]: time="2025-12-12T18:55:56.049278030Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:55:56.050165 containerd[1560]: time="2025-12-12T18:55:56.050018004Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=25965293" Dec 12 18:55:56.050687 containerd[1560]: time="2025-12-12T18:55:56.050656143Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:55:56.052065 containerd[1560]: time="2025-12-12T18:55:56.052037228Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:55:56.052568 containerd[1560]: time="2025-12-12T18:55:56.052539034Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 1.398136794s" Dec 12 18:55:56.052616 containerd[1560]: time="2025-12-12T18:55:56.052570077Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\"" Dec 12 18:55:56.053362 containerd[1560]: time="2025-12-12T18:55:56.053329223Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Dec 12 18:55:56.737297 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2080008889.mount: Deactivated successfully. 
Dec 12 18:55:57.469422 containerd[1560]: time="2025-12-12T18:55:57.469371427Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:55:57.470298 containerd[1560]: time="2025-12-12T18:55:57.470220596Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Dec 12 18:55:57.470808 containerd[1560]: time="2025-12-12T18:55:57.470778936Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:55:57.474553 containerd[1560]: time="2025-12-12T18:55:57.473034279Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:55:57.474553 containerd[1560]: time="2025-12-12T18:55:57.474143913Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.420785309s" Dec 12 18:55:57.474553 containerd[1560]: time="2025-12-12T18:55:57.474178844Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Dec 12 18:55:57.474887 containerd[1560]: time="2025-12-12T18:55:57.474758333Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Dec 12 18:55:58.066541 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2702520938.mount: Deactivated successfully. 
Dec 12 18:55:58.070825 containerd[1560]: time="2025-12-12T18:55:58.070786543Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:55:58.071447 containerd[1560]: time="2025-12-12T18:55:58.071419196Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Dec 12 18:55:58.072049 containerd[1560]: time="2025-12-12T18:55:58.071979713Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:55:58.073342 containerd[1560]: time="2025-12-12T18:55:58.073301605Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:55:58.074197 containerd[1560]: time="2025-12-12T18:55:58.073883883Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 598.932219ms" Dec 12 18:55:58.074197 containerd[1560]: time="2025-12-12T18:55:58.073908764Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Dec 12 18:55:58.074415 containerd[1560]: time="2025-12-12T18:55:58.074395720Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Dec 12 18:55:58.709030 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount893594714.mount: Deactivated successfully. Dec 12 18:56:00.572065 containerd[1560]: time="2025-12-12T18:56:00.571986254Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:56:00.573537 containerd[1560]: time="2025-12-12T18:56:00.573037103Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=74166814" Dec 12 18:56:00.574022 containerd[1560]: time="2025-12-12T18:56:00.573992081Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:56:00.576511 containerd[1560]: time="2025-12-12T18:56:00.576482063Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:56:00.577523 containerd[1560]: time="2025-12-12T18:56:00.577489031Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 2.503069488s" Dec 12 18:56:00.577578 containerd[1560]: time="2025-12-12T18:56:00.577523217Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Dec 12 18:56:03.335926 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
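Comparing the smallest and largest pulls above shows the small ones are latency-bound rather than bandwidth-bound: pause (320448 B in ~599 ms) barely clears half a MiB/s, while etcd (74311308 B in ~2.50 s) sustains ~28 MiB/s over the same network.

for name, size, secs in [
    ("pause:3.10.1", 320_448, 0.598932219),      # from the pull lines above
    ("etcd:3.6.4-0", 74_311_308, 2.503069488),
]:
    print(f"{name:14s} {size / secs / 2**20:6.1f} MiB/s")
# pause:3.10.1      0.5 MiB/s   (dominated by round-trip latency)
# etcd:3.6.4-0     28.3 MiB/s   (closer to actual link throughput)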
Dec 12 18:56:03.336060 systemd[1]: kubelet.service: Consumed 193ms CPU time, 110.9M memory peak. Dec 12 18:56:03.337980 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:56:03.364010 systemd[1]: Reload requested from client PID 2256 ('systemctl') (unit session-7.scope)... Dec 12 18:56:03.364029 systemd[1]: Reloading... Dec 12 18:56:03.492246 zram_generator::config[2303]: No configuration found. Dec 12 18:56:03.696300 systemd[1]: Reloading finished in 331 ms. Dec 12 18:56:03.749998 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 12 18:56:03.750096 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 12 18:56:03.750648 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:56:03.750750 systemd[1]: kubelet.service: Consumed 126ms CPU time, 98.2M memory peak. Dec 12 18:56:03.752288 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:56:03.919982 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:56:03.928886 (kubelet)[2354]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 12 18:56:03.974365 kubelet[2354]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 12 18:56:03.974365 kubelet[2354]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 18:56:03.974805 kubelet[2354]: I1212 18:56:03.974369 2354 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 18:56:04.563637 kubelet[2354]: I1212 18:56:04.563602 2354 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Dec 12 18:56:04.563637 kubelet[2354]: I1212 18:56:04.563628 2354 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 18:56:04.564351 kubelet[2354]: I1212 18:56:04.564334 2354 watchdog_linux.go:95] "Systemd watchdog is not enabled" Dec 12 18:56:04.564382 kubelet[2354]: I1212 18:56:04.564355 2354 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 12 18:56:04.564734 kubelet[2354]: I1212 18:56:04.564710 2354 server.go:956] "Client rotation is on, will bootstrap in background" Dec 12 18:56:04.569326 kubelet[2354]: E1212 18:56:04.569295 2354 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.237.134.203:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.237.134.203:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 12 18:56:04.571314 kubelet[2354]: I1212 18:56:04.571298 2354 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 18:56:04.573847 kubelet[2354]: I1212 18:56:04.573830 2354 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 18:56:04.577162 kubelet[2354]: I1212 18:56:04.577149 2354 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Dec 12 18:56:04.577981 kubelet[2354]: I1212 18:56:04.577948 2354 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 18:56:04.578108 kubelet[2354]: I1212 18:56:04.577974 2354 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-237-134-203","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 18:56:04.578108 kubelet[2354]: I1212 18:56:04.578106 2354 topology_manager.go:138] "Creating topology manager with none policy" Dec 12 18:56:04.578225 kubelet[2354]: I1212 18:56:04.578114 2354 container_manager_linux.go:306] "Creating device plugin manager" Dec 12 18:56:04.578225 kubelet[2354]: I1212 18:56:04.578197 2354 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Dec 12 18:56:04.579600 kubelet[2354]: I1212 18:56:04.579582 2354 state_mem.go:36] "Initialized new in-memory state store" Dec 12 18:56:04.579762 kubelet[2354]: I1212 18:56:04.579747 2354 kubelet.go:475] "Attempting to sync node with API server" Dec 12 18:56:04.579790 kubelet[2354]: I1212 18:56:04.579772 2354 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 18:56:04.579810 kubelet[2354]: I1212 18:56:04.579791 2354 kubelet.go:387] "Adding apiserver pod source" Dec 12 18:56:04.579833 kubelet[2354]: I1212 18:56:04.579812 2354 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 18:56:04.580260 kubelet[2354]: E1212 18:56:04.580225 2354 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.237.134.203:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-237-134-203&limit=500&resourceVersion=0\": dial tcp 172.237.134.203:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 12 18:56:04.582489 kubelet[2354]: E1212 18:56:04.582420 2354 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://172.237.134.203:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.237.134.203:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 12 18:56:04.582707 kubelet[2354]: I1212 18:56:04.582669 2354 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 12 18:56:04.584553 kubelet[2354]: I1212 18:56:04.583042 2354 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 12 18:56:04.584553 kubelet[2354]: I1212 18:56:04.583069 2354 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Dec 12 18:56:04.584553 kubelet[2354]: W1212 18:56:04.583107 2354 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 12 18:56:04.587353 kubelet[2354]: I1212 18:56:04.587333 2354 server.go:1262] "Started kubelet" Dec 12 18:56:04.588671 kubelet[2354]: I1212 18:56:04.588341 2354 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 18:56:04.593585 kubelet[2354]: I1212 18:56:04.593552 2354 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 18:56:04.595129 kubelet[2354]: I1212 18:56:04.595106 2354 server.go:310] "Adding debug handlers to kubelet server" Dec 12 18:56:04.597254 kubelet[2354]: E1212 18:56:04.595010 2354 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.237.134.203:6443/api/v1/namespaces/default/events\": dial tcp 172.237.134.203:6443: connect: connection refused" event="&Event{ObjectMeta:{172-237-134-203.18808cb561d137ab default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-237-134-203,UID:172-237-134-203,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-237-134-203,},FirstTimestamp:2025-12-12 18:56:04.587304875 +0000 UTC m=+0.653996432,LastTimestamp:2025-12-12 18:56:04.587304875 +0000 UTC m=+0.653996432,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-237-134-203,}" Dec 12 18:56:04.598035 kubelet[2354]: I1212 18:56:04.597983 2354 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 18:56:04.598132 kubelet[2354]: I1212 18:56:04.598114 2354 server_v1.go:49] "podresources" method="list" useActivePods=true Dec 12 18:56:04.598426 kubelet[2354]: I1212 18:56:04.598392 2354 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 18:56:04.599479 kubelet[2354]: I1212 18:56:04.599445 2354 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 18:56:04.601631 kubelet[2354]: I1212 18:56:04.601616 2354 volume_manager.go:313] "Starting Kubelet Volume Manager" Dec 12 18:56:04.601805 kubelet[2354]: E1212 18:56:04.601789 2354 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-237-134-203\" not found" Dec 12 18:56:04.601880 kubelet[2354]: I1212 18:56:04.601871 2354 
desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 12 18:56:04.601962 kubelet[2354]: I1212 18:56:04.601952 2354 reconciler.go:29] "Reconciler: start to sync state" Dec 12 18:56:04.604546 kubelet[2354]: I1212 18:56:04.604526 2354 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 18:56:04.605085 kubelet[2354]: E1212 18:56:04.605065 2354 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.237.134.203:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.237.134.203:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 12 18:56:04.605200 kubelet[2354]: E1212 18:56:04.605180 2354 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.237.134.203:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-237-134-203?timeout=10s\": dial tcp 172.237.134.203:6443: connect: connection refused" interval="200ms" Dec 12 18:56:04.606839 kubelet[2354]: E1212 18:56:04.606823 2354 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 12 18:56:04.607404 kubelet[2354]: I1212 18:56:04.607390 2354 factory.go:223] Registration of the containerd container factory successfully Dec 12 18:56:04.607506 kubelet[2354]: I1212 18:56:04.607457 2354 factory.go:223] Registration of the systemd container factory successfully Dec 12 18:56:04.621038 kubelet[2354]: I1212 18:56:04.620995 2354 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Dec 12 18:56:04.622226 kubelet[2354]: I1212 18:56:04.622198 2354 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Dec 12 18:56:04.622226 kubelet[2354]: I1212 18:56:04.622221 2354 status_manager.go:244] "Starting to sync pod status with apiserver" Dec 12 18:56:04.622290 kubelet[2354]: I1212 18:56:04.622243 2354 kubelet.go:2427] "Starting kubelet main sync loop" Dec 12 18:56:04.622322 kubelet[2354]: E1212 18:56:04.622285 2354 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 18:56:04.627132 kubelet[2354]: E1212 18:56:04.626863 2354 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.237.134.203:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.237.134.203:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 12 18:56:04.633313 kubelet[2354]: I1212 18:56:04.633294 2354 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 18:56:04.633313 kubelet[2354]: I1212 18:56:04.633310 2354 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 18:56:04.633388 kubelet[2354]: I1212 18:56:04.633325 2354 state_mem.go:36] "Initialized new in-memory state store" Dec 12 18:56:04.634795 kubelet[2354]: I1212 18:56:04.634773 2354 policy_none.go:49] "None policy: Start" Dec 12 18:56:04.634795 kubelet[2354]: I1212 18:56:04.634793 2354 memory_manager.go:187] "Starting memorymanager" policy="None" Dec 12 18:56:04.634871 kubelet[2354]: I1212 18:56:04.634804 2354 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Dec 12 18:56:04.635792 kubelet[2354]: I1212 18:56:04.635777 2354 policy_none.go:47] "Start" Dec 12 18:56:04.640178 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 12 18:56:04.657573 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 12 18:56:04.660901 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 12 18:56:04.679381 kubelet[2354]: E1212 18:56:04.679350 2354 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 12 18:56:04.679597 kubelet[2354]: I1212 18:56:04.679573 2354 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 18:56:04.679643 kubelet[2354]: I1212 18:56:04.679591 2354 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 18:56:04.680133 kubelet[2354]: I1212 18:56:04.680004 2354 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 18:56:04.682051 kubelet[2354]: E1212 18:56:04.681927 2354 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 12 18:56:04.682123 kubelet[2354]: E1212 18:56:04.682085 2354 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-237-134-203\" not found" Dec 12 18:56:04.735718 systemd[1]: Created slice kubepods-burstable-pod722271fbc16f9941ca7801422a79d058.slice - libcontainer container kubepods-burstable-pod722271fbc16f9941ca7801422a79d058.slice. 
Dec 12 18:56:04.746356 kubelet[2354]: E1212 18:56:04.746315 2354 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-134-203\" not found" node="172-237-134-203" Dec 12 18:56:04.750421 systemd[1]: Created slice kubepods-burstable-poded34628e79df300a8552add002d5ddf2.slice - libcontainer container kubepods-burstable-poded34628e79df300a8552add002d5ddf2.slice. Dec 12 18:56:04.762694 kubelet[2354]: E1212 18:56:04.762665 2354 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-134-203\" not found" node="172-237-134-203" Dec 12 18:56:04.766077 systemd[1]: Created slice kubepods-burstable-pod3e0a929b0589bf9efd89d703469cebd6.slice - libcontainer container kubepods-burstable-pod3e0a929b0589bf9efd89d703469cebd6.slice. Dec 12 18:56:04.767943 kubelet[2354]: E1212 18:56:04.767925 2354 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-134-203\" not found" node="172-237-134-203" Dec 12 18:56:04.781478 kubelet[2354]: I1212 18:56:04.781403 2354 kubelet_node_status.go:75] "Attempting to register node" node="172-237-134-203" Dec 12 18:56:04.781758 kubelet[2354]: E1212 18:56:04.781738 2354 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.237.134.203:6443/api/v1/nodes\": dial tcp 172.237.134.203:6443: connect: connection refused" node="172-237-134-203" Dec 12 18:56:04.803285 kubelet[2354]: I1212 18:56:04.803254 2354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/722271fbc16f9941ca7801422a79d058-ca-certs\") pod \"kube-apiserver-172-237-134-203\" (UID: \"722271fbc16f9941ca7801422a79d058\") " pod="kube-system/kube-apiserver-172-237-134-203" Dec 12 18:56:04.803285 kubelet[2354]: I1212 18:56:04.803288 2354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/722271fbc16f9941ca7801422a79d058-k8s-certs\") pod \"kube-apiserver-172-237-134-203\" (UID: \"722271fbc16f9941ca7801422a79d058\") " pod="kube-system/kube-apiserver-172-237-134-203" Dec 12 18:56:04.803551 kubelet[2354]: I1212 18:56:04.803305 2354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/722271fbc16f9941ca7801422a79d058-usr-share-ca-certificates\") pod \"kube-apiserver-172-237-134-203\" (UID: \"722271fbc16f9941ca7801422a79d058\") " pod="kube-system/kube-apiserver-172-237-134-203" Dec 12 18:56:04.803551 kubelet[2354]: I1212 18:56:04.803322 2354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ed34628e79df300a8552add002d5ddf2-ca-certs\") pod \"kube-controller-manager-172-237-134-203\" (UID: \"ed34628e79df300a8552add002d5ddf2\") " pod="kube-system/kube-controller-manager-172-237-134-203" Dec 12 18:56:04.803551 kubelet[2354]: I1212 18:56:04.803337 2354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ed34628e79df300a8552add002d5ddf2-usr-share-ca-certificates\") pod \"kube-controller-manager-172-237-134-203\" (UID: \"ed34628e79df300a8552add002d5ddf2\") " 
pod="kube-system/kube-controller-manager-172-237-134-203" Dec 12 18:56:04.803551 kubelet[2354]: I1212 18:56:04.803351 2354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3e0a929b0589bf9efd89d703469cebd6-kubeconfig\") pod \"kube-scheduler-172-237-134-203\" (UID: \"3e0a929b0589bf9efd89d703469cebd6\") " pod="kube-system/kube-scheduler-172-237-134-203" Dec 12 18:56:04.803551 kubelet[2354]: I1212 18:56:04.803372 2354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ed34628e79df300a8552add002d5ddf2-flexvolume-dir\") pod \"kube-controller-manager-172-237-134-203\" (UID: \"ed34628e79df300a8552add002d5ddf2\") " pod="kube-system/kube-controller-manager-172-237-134-203" Dec 12 18:56:04.803670 kubelet[2354]: I1212 18:56:04.803386 2354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ed34628e79df300a8552add002d5ddf2-k8s-certs\") pod \"kube-controller-manager-172-237-134-203\" (UID: \"ed34628e79df300a8552add002d5ddf2\") " pod="kube-system/kube-controller-manager-172-237-134-203" Dec 12 18:56:04.803670 kubelet[2354]: I1212 18:56:04.803401 2354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ed34628e79df300a8552add002d5ddf2-kubeconfig\") pod \"kube-controller-manager-172-237-134-203\" (UID: \"ed34628e79df300a8552add002d5ddf2\") " pod="kube-system/kube-controller-manager-172-237-134-203" Dec 12 18:56:04.805635 kubelet[2354]: E1212 18:56:04.805607 2354 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.237.134.203:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-237-134-203?timeout=10s\": dial tcp 172.237.134.203:6443: connect: connection refused" interval="400ms" Dec 12 18:56:04.984391 kubelet[2354]: I1212 18:56:04.984295 2354 kubelet_node_status.go:75] "Attempting to register node" node="172-237-134-203" Dec 12 18:56:04.984837 kubelet[2354]: E1212 18:56:04.984767 2354 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.237.134.203:6443/api/v1/nodes\": dial tcp 172.237.134.203:6443: connect: connection refused" node="172-237-134-203" Dec 12 18:56:05.048691 kubelet[2354]: E1212 18:56:05.048633 2354 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:56:05.049778 containerd[1560]: time="2025-12-12T18:56:05.049739521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-237-134-203,Uid:722271fbc16f9941ca7801422a79d058,Namespace:kube-system,Attempt:0,}" Dec 12 18:56:05.065409 kubelet[2354]: E1212 18:56:05.065341 2354 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:56:05.066041 containerd[1560]: time="2025-12-12T18:56:05.066002612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-237-134-203,Uid:ed34628e79df300a8552add002d5ddf2,Namespace:kube-system,Attempt:0,}" Dec 12 18:56:05.070059 kubelet[2354]: E1212 18:56:05.070003 2354 dns.go:154] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:56:05.071141 containerd[1560]: time="2025-12-12T18:56:05.070926401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-237-134-203,Uid:3e0a929b0589bf9efd89d703469cebd6,Namespace:kube-system,Attempt:0,}" Dec 12 18:56:05.206357 kubelet[2354]: E1212 18:56:05.206293 2354 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.237.134.203:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-237-134-203?timeout=10s\": dial tcp 172.237.134.203:6443: connect: connection refused" interval="800ms" Dec 12 18:56:05.387310 kubelet[2354]: I1212 18:56:05.387190 2354 kubelet_node_status.go:75] "Attempting to register node" node="172-237-134-203" Dec 12 18:56:05.387850 kubelet[2354]: E1212 18:56:05.387724 2354 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.237.134.203:6443/api/v1/nodes\": dial tcp 172.237.134.203:6443: connect: connection refused" node="172-237-134-203" Dec 12 18:56:05.490110 kubelet[2354]: E1212 18:56:05.490045 2354 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.237.134.203:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-237-134-203&limit=500&resourceVersion=0\": dial tcp 172.237.134.203:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 12 18:56:05.644933 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount370362631.mount: Deactivated successfully. Dec 12 18:56:05.649216 containerd[1560]: time="2025-12-12T18:56:05.649174246Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 18:56:05.650268 containerd[1560]: time="2025-12-12T18:56:05.650229237Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 18:56:05.651325 containerd[1560]: time="2025-12-12T18:56:05.651299086Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Dec 12 18:56:05.651761 containerd[1560]: time="2025-12-12T18:56:05.651726838Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 12 18:56:05.654479 containerd[1560]: time="2025-12-12T18:56:05.652788356Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 18:56:05.654479 containerd[1560]: time="2025-12-12T18:56:05.654175385Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 18:56:05.654479 containerd[1560]: time="2025-12-12T18:56:05.654203341Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 12 18:56:05.656864 containerd[1560]: time="2025-12-12T18:56:05.656807819Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 18:56:05.657439 containerd[1560]: time="2025-12-12T18:56:05.657393555Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 606.407747ms" Dec 12 18:56:05.658073 containerd[1560]: time="2025-12-12T18:56:05.658041160Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 586.3252ms" Dec 12 18:56:05.661424 containerd[1560]: time="2025-12-12T18:56:05.661371284Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 594.453281ms" Dec 12 18:56:05.683053 containerd[1560]: time="2025-12-12T18:56:05.681770117Z" level=info msg="connecting to shim a2a5a34e89789d75a429811ebec5a5db7381b3e3187b630c189d2c041e160191" address="unix:///run/containerd/s/edd11da42499bb04a1c8bcca37c1605a34e3e90f1123eeda7ec238acc97ab831" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:56:05.699979 containerd[1560]: time="2025-12-12T18:56:05.699944262Z" level=info msg="connecting to shim fc2683bff15071ab4db581e5f794d6abab2493bb8e7504adb6f6ccc5ffd2321d" address="unix:///run/containerd/s/92bdce44af01561259a9ae20b011a6bc3414fb5395cb430478bba4a165292780" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:56:05.701321 containerd[1560]: time="2025-12-12T18:56:05.701174508Z" level=info msg="connecting to shim 83137eacdfcdd2a7582811c06affeed23fa1f9c49ae41e15b551f254fdf8a45d" address="unix:///run/containerd/s/5474b3e574815d0ebe7a2757f6c21a330d1b1aa49f019e2589c7fa7d5eafb6cb" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:56:05.732608 systemd[1]: Started cri-containerd-83137eacdfcdd2a7582811c06affeed23fa1f9c49ae41e15b551f254fdf8a45d.scope - libcontainer container 83137eacdfcdd2a7582811c06affeed23fa1f9c49ae41e15b551f254fdf8a45d. Dec 12 18:56:05.741674 systemd[1]: Started cri-containerd-fc2683bff15071ab4db581e5f794d6abab2493bb8e7504adb6f6ccc5ffd2321d.scope - libcontainer container fc2683bff15071ab4db581e5f794d6abab2493bb8e7504adb6f6ccc5ffd2321d. Dec 12 18:56:05.745791 systemd[1]: Started cri-containerd-a2a5a34e89789d75a429811ebec5a5db7381b3e3187b630c189d2c041e160191.scope - libcontainer container a2a5a34e89789d75a429811ebec5a5db7381b3e3187b630c189d2c041e160191. 
Dec 12 18:56:05.811337 containerd[1560]: time="2025-12-12T18:56:05.811294642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-237-134-203,Uid:ed34628e79df300a8552add002d5ddf2,Namespace:kube-system,Attempt:0,} returns sandbox id \"83137eacdfcdd2a7582811c06affeed23fa1f9c49ae41e15b551f254fdf8a45d\"" Dec 12 18:56:05.812789 kubelet[2354]: E1212 18:56:05.812768 2354 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:56:05.817882 containerd[1560]: time="2025-12-12T18:56:05.817848093Z" level=info msg="CreateContainer within sandbox \"83137eacdfcdd2a7582811c06affeed23fa1f9c49ae41e15b551f254fdf8a45d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 12 18:56:05.827503 containerd[1560]: time="2025-12-12T18:56:05.825307732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-237-134-203,Uid:3e0a929b0589bf9efd89d703469cebd6,Namespace:kube-system,Attempt:0,} returns sandbox id \"fc2683bff15071ab4db581e5f794d6abab2493bb8e7504adb6f6ccc5ffd2321d\"" Dec 12 18:56:05.828153 containerd[1560]: time="2025-12-12T18:56:05.828078444Z" level=info msg="Container 7162b7327c6d9076170d460d16df168a73dad8a2bed4d9e2bb8f30483fc146eb: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:56:05.828839 kubelet[2354]: E1212 18:56:05.828820 2354 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:56:05.834431 containerd[1560]: time="2025-12-12T18:56:05.834381311Z" level=info msg="CreateContainer within sandbox \"fc2683bff15071ab4db581e5f794d6abab2493bb8e7504adb6f6ccc5ffd2321d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 12 18:56:05.839034 containerd[1560]: time="2025-12-12T18:56:05.838920595Z" level=info msg="CreateContainer within sandbox \"83137eacdfcdd2a7582811c06affeed23fa1f9c49ae41e15b551f254fdf8a45d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7162b7327c6d9076170d460d16df168a73dad8a2bed4d9e2bb8f30483fc146eb\"" Dec 12 18:56:05.839426 containerd[1560]: time="2025-12-12T18:56:05.839393194Z" level=info msg="StartContainer for \"7162b7327c6d9076170d460d16df168a73dad8a2bed4d9e2bb8f30483fc146eb\"" Dec 12 18:56:05.841129 containerd[1560]: time="2025-12-12T18:56:05.840917510Z" level=info msg="connecting to shim 7162b7327c6d9076170d460d16df168a73dad8a2bed4d9e2bb8f30483fc146eb" address="unix:///run/containerd/s/5474b3e574815d0ebe7a2757f6c21a330d1b1aa49f019e2589c7fa7d5eafb6cb" protocol=ttrpc version=3 Dec 12 18:56:05.845029 containerd[1560]: time="2025-12-12T18:56:05.845011058Z" level=info msg="Container 95c8f6c69b7f5bcd60dc803752df397ea822c33a41a17f21cc0f352eea211746: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:56:05.852664 containerd[1560]: time="2025-12-12T18:56:05.852631975Z" level=info msg="CreateContainer within sandbox \"fc2683bff15071ab4db581e5f794d6abab2493bb8e7504adb6f6ccc5ffd2321d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"95c8f6c69b7f5bcd60dc803752df397ea822c33a41a17f21cc0f352eea211746\"" Dec 12 18:56:05.855554 containerd[1560]: time="2025-12-12T18:56:05.855521100Z" level=info msg="StartContainer for \"95c8f6c69b7f5bcd60dc803752df397ea822c33a41a17f21cc0f352eea211746\"" Dec 12 18:56:05.857434 containerd[1560]: 
time="2025-12-12T18:56:05.857365879Z" level=info msg="connecting to shim 95c8f6c69b7f5bcd60dc803752df397ea822c33a41a17f21cc0f352eea211746" address="unix:///run/containerd/s/92bdce44af01561259a9ae20b011a6bc3414fb5395cb430478bba4a165292780" protocol=ttrpc version=3 Dec 12 18:56:05.859906 containerd[1560]: time="2025-12-12T18:56:05.859840690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-237-134-203,Uid:722271fbc16f9941ca7801422a79d058,Namespace:kube-system,Attempt:0,} returns sandbox id \"a2a5a34e89789d75a429811ebec5a5db7381b3e3187b630c189d2c041e160191\"" Dec 12 18:56:05.861393 kubelet[2354]: E1212 18:56:05.861231 2354 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:56:05.865534 containerd[1560]: time="2025-12-12T18:56:05.865512504Z" level=info msg="CreateContainer within sandbox \"a2a5a34e89789d75a429811ebec5a5db7381b3e3187b630c189d2c041e160191\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 12 18:56:05.875740 systemd[1]: Started cri-containerd-7162b7327c6d9076170d460d16df168a73dad8a2bed4d9e2bb8f30483fc146eb.scope - libcontainer container 7162b7327c6d9076170d460d16df168a73dad8a2bed4d9e2bb8f30483fc146eb. Dec 12 18:56:05.879923 systemd[1]: Started cri-containerd-95c8f6c69b7f5bcd60dc803752df397ea822c33a41a17f21cc0f352eea211746.scope - libcontainer container 95c8f6c69b7f5bcd60dc803752df397ea822c33a41a17f21cc0f352eea211746. Dec 12 18:56:05.886357 containerd[1560]: time="2025-12-12T18:56:05.886301069Z" level=info msg="Container 29b76d1820f600c57110b047a4bf67b5ad87fc3f0d60e7fa674143e3ebacc820: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:56:05.892155 containerd[1560]: time="2025-12-12T18:56:05.892109219Z" level=info msg="CreateContainer within sandbox \"a2a5a34e89789d75a429811ebec5a5db7381b3e3187b630c189d2c041e160191\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"29b76d1820f600c57110b047a4bf67b5ad87fc3f0d60e7fa674143e3ebacc820\"" Dec 12 18:56:05.893000 containerd[1560]: time="2025-12-12T18:56:05.892984517Z" level=info msg="StartContainer for \"29b76d1820f600c57110b047a4bf67b5ad87fc3f0d60e7fa674143e3ebacc820\"" Dec 12 18:56:05.894429 containerd[1560]: time="2025-12-12T18:56:05.894402216Z" level=info msg="connecting to shim 29b76d1820f600c57110b047a4bf67b5ad87fc3f0d60e7fa674143e3ebacc820" address="unix:///run/containerd/s/edd11da42499bb04a1c8bcca37c1605a34e3e90f1123eeda7ec238acc97ab831" protocol=ttrpc version=3 Dec 12 18:56:05.919595 systemd[1]: Started cri-containerd-29b76d1820f600c57110b047a4bf67b5ad87fc3f0d60e7fa674143e3ebacc820.scope - libcontainer container 29b76d1820f600c57110b047a4bf67b5ad87fc3f0d60e7fa674143e3ebacc820. 
Dec 12 18:56:05.942524 kubelet[2354]: E1212 18:56:05.942328 2354 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.237.134.203:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.237.134.203:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 12 18:56:05.975631 containerd[1560]: time="2025-12-12T18:56:05.975433882Z" level=info msg="StartContainer for \"7162b7327c6d9076170d460d16df168a73dad8a2bed4d9e2bb8f30483fc146eb\" returns successfully" Dec 12 18:56:05.997996 kubelet[2354]: E1212 18:56:05.997939 2354 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.237.134.203:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.237.134.203:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 12 18:56:06.007526 kubelet[2354]: E1212 18:56:06.006778 2354 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.237.134.203:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-237-134-203?timeout=10s\": dial tcp 172.237.134.203:6443: connect: connection refused" interval="1.6s" Dec 12 18:56:06.011694 containerd[1560]: time="2025-12-12T18:56:06.010209975Z" level=info msg="StartContainer for \"29b76d1820f600c57110b047a4bf67b5ad87fc3f0d60e7fa674143e3ebacc820\" returns successfully" Dec 12 18:56:06.012886 containerd[1560]: time="2025-12-12T18:56:06.012848452Z" level=info msg="StartContainer for \"95c8f6c69b7f5bcd60dc803752df397ea822c33a41a17f21cc0f352eea211746\" returns successfully" Dec 12 18:56:06.086673 kubelet[2354]: E1212 18:56:06.086629 2354 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.237.134.203:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.237.134.203:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 12 18:56:06.191797 kubelet[2354]: I1212 18:56:06.191686 2354 kubelet_node_status.go:75] "Attempting to register node" node="172-237-134-203" Dec 12 18:56:06.646548 kubelet[2354]: E1212 18:56:06.645856 2354 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-134-203\" not found" node="172-237-134-203" Dec 12 18:56:06.646548 kubelet[2354]: E1212 18:56:06.645972 2354 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:56:06.651745 kubelet[2354]: E1212 18:56:06.651719 2354 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-134-203\" not found" node="172-237-134-203" Dec 12 18:56:06.651830 kubelet[2354]: E1212 18:56:06.651808 2354 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:56:06.653692 kubelet[2354]: E1212 18:56:06.653670 2354 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-134-203\" not found" node="172-237-134-203" Dec 12 18:56:06.653814 kubelet[2354]: E1212 18:56:06.653774 2354 
dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:56:07.657298 kubelet[2354]: E1212 18:56:07.657258 2354 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-134-203\" not found" node="172-237-134-203" Dec 12 18:56:07.657949 kubelet[2354]: E1212 18:56:07.657376 2354 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:56:07.657949 kubelet[2354]: E1212 18:56:07.657601 2354 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-134-203\" not found" node="172-237-134-203" Dec 12 18:56:07.657949 kubelet[2354]: E1212 18:56:07.657676 2354 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:56:07.729374 kubelet[2354]: I1212 18:56:07.729336 2354 kubelet_node_status.go:78] "Successfully registered node" node="172-237-134-203" Dec 12 18:56:07.729374 kubelet[2354]: E1212 18:56:07.729366 2354 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"172-237-134-203\": node \"172-237-134-203\" not found" Dec 12 18:56:07.800475 kubelet[2354]: I1212 18:56:07.800421 2354 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-237-134-203" Dec 12 18:56:07.812343 kubelet[2354]: E1212 18:56:07.812295 2354 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-237-134-203\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-237-134-203" Dec 12 18:56:07.812343 kubelet[2354]: I1212 18:56:07.812348 2354 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-237-134-203" Dec 12 18:56:07.814973 kubelet[2354]: E1212 18:56:07.814940 2354 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-237-134-203\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-172-237-134-203" Dec 12 18:56:07.814973 kubelet[2354]: I1212 18:56:07.814962 2354 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-237-134-203" Dec 12 18:56:07.816664 kubelet[2354]: E1212 18:56:07.816635 2354 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-237-134-203\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-237-134-203" Dec 12 18:56:08.584681 kubelet[2354]: I1212 18:56:08.584633 2354 apiserver.go:52] "Watching apiserver" Dec 12 18:56:08.602556 kubelet[2354]: I1212 18:56:08.602522 2354 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 12 18:56:08.656529 kubelet[2354]: I1212 18:56:08.656484 2354 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-237-134-203" Dec 12 18:56:08.662435 kubelet[2354]: E1212 18:56:08.662404 2354 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 
172.232.0.9" Dec 12 18:56:08.891198 kubelet[2354]: I1212 18:56:08.890897 2354 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-237-134-203" Dec 12 18:56:08.895687 kubelet[2354]: E1212 18:56:08.895650 2354 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:56:09.661753 kubelet[2354]: E1212 18:56:09.661659 2354 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:56:09.661906 kubelet[2354]: E1212 18:56:09.661794 2354 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:56:09.878066 systemd[1]: Reload requested from client PID 2628 ('systemctl') (unit session-7.scope)... Dec 12 18:56:09.878088 systemd[1]: Reloading... Dec 12 18:56:09.971502 zram_generator::config[2672]: No configuration found. Dec 12 18:56:10.175909 systemd[1]: Reloading finished in 297 ms. Dec 12 18:56:10.203592 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:56:10.227503 systemd[1]: kubelet.service: Deactivated successfully. Dec 12 18:56:10.227776 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:56:10.227832 systemd[1]: kubelet.service: Consumed 1.057s CPU time, 124.6M memory peak. Dec 12 18:56:10.229428 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:56:10.394889 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:56:10.398893 (kubelet)[2723]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 12 18:56:10.438174 kubelet[2723]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 12 18:56:10.438174 kubelet[2723]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 18:56:10.438174 kubelet[2723]: I1212 18:56:10.437806 2723 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 18:56:10.443546 kubelet[2723]: I1212 18:56:10.443522 2723 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Dec 12 18:56:10.443546 kubelet[2723]: I1212 18:56:10.443540 2723 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 18:56:10.443620 kubelet[2723]: I1212 18:56:10.443564 2723 watchdog_linux.go:95] "Systemd watchdog is not enabled" Dec 12 18:56:10.443620 kubelet[2723]: I1212 18:56:10.443574 2723 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 12 18:56:10.443736 kubelet[2723]: I1212 18:56:10.443716 2723 server.go:956] "Client rotation is on, will bootstrap in background" Dec 12 18:56:10.444625 kubelet[2723]: I1212 18:56:10.444606 2723 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 12 18:56:10.446242 kubelet[2723]: I1212 18:56:10.446146 2723 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 18:56:10.453041 kubelet[2723]: I1212 18:56:10.453026 2723 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 18:56:10.456210 kubelet[2723]: I1212 18:56:10.456196 2723 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Dec 12 18:56:10.456558 kubelet[2723]: I1212 18:56:10.456534 2723 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 18:56:10.456898 kubelet[2723]: I1212 18:56:10.456618 2723 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-237-134-203","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 18:56:10.457095 kubelet[2723]: I1212 18:56:10.457075 2723 topology_manager.go:138] "Creating topology manager with none policy" Dec 12 18:56:10.457173 kubelet[2723]: I1212 18:56:10.457141 2723 container_manager_linux.go:306] "Creating device plugin manager" Dec 12 18:56:10.457197 kubelet[2723]: I1212 18:56:10.457165 2723 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Dec 12 18:56:10.458426 kubelet[2723]: I1212 18:56:10.458399 2723 state_mem.go:36] "Initialized new in-memory state store" Dec 12 18:56:10.458616 kubelet[2723]: I1212 18:56:10.458602 2723 kubelet.go:475] "Attempting to sync node with API server" Dec 12 18:56:10.458642 kubelet[2723]: I1212 18:56:10.458622 2723 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 18:56:10.458642 kubelet[2723]: I1212 18:56:10.458640 2723 kubelet.go:387] "Adding apiserver pod source" 
Dec 12 18:56:10.458683 kubelet[2723]: I1212 18:56:10.458674 2723 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 18:56:10.462725 kubelet[2723]: I1212 18:56:10.461988 2723 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 12 18:56:10.463632 kubelet[2723]: I1212 18:56:10.463102 2723 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 12 18:56:10.463879 kubelet[2723]: I1212 18:56:10.463770 2723 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Dec 12 18:56:10.469684 kubelet[2723]: I1212 18:56:10.469671 2723 server.go:1262] "Started kubelet" Dec 12 18:56:10.471855 kubelet[2723]: I1212 18:56:10.471842 2723 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 18:56:10.478366 kubelet[2723]: E1212 18:56:10.478342 2723 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 12 18:56:10.478709 kubelet[2723]: I1212 18:56:10.478682 2723 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 18:56:10.479604 kubelet[2723]: I1212 18:56:10.479584 2723 server.go:310] "Adding debug handlers to kubelet server" Dec 12 18:56:10.481137 kubelet[2723]: I1212 18:56:10.481043 2723 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Dec 12 18:56:10.482724 kubelet[2723]: I1212 18:56:10.482706 2723 volume_manager.go:313] "Starting Kubelet Volume Manager" Dec 12 18:56:10.483399 kubelet[2723]: I1212 18:56:10.483103 2723 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 18:56:10.483577 kubelet[2723]: I1212 18:56:10.483555 2723 server_v1.go:49] "podresources" method="list" useActivePods=true Dec 12 18:56:10.483768 kubelet[2723]: I1212 18:56:10.483755 2723 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 18:56:10.485694 kubelet[2723]: I1212 18:56:10.484090 2723 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 18:56:10.488454 kubelet[2723]: I1212 18:56:10.488434 2723 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 12 18:56:10.488607 kubelet[2723]: I1212 18:56:10.488584 2723 reconciler.go:29] "Reconciler: start to sync state" Dec 12 18:56:10.490301 kubelet[2723]: I1212 18:56:10.490275 2723 factory.go:223] Registration of the systemd container factory successfully Dec 12 18:56:10.490373 kubelet[2723]: I1212 18:56:10.490350 2723 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 18:56:10.492878 kubelet[2723]: I1212 18:56:10.492430 2723 factory.go:223] Registration of the containerd container factory successfully Dec 12 18:56:10.493027 kubelet[2723]: I1212 18:56:10.493013 2723 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Dec 12 18:56:10.493093 kubelet[2723]: I1212 18:56:10.493083 2723 status_manager.go:244] "Starting to sync pod status with apiserver" Dec 12 18:56:10.493147 kubelet[2723]: I1212 18:56:10.493139 2723 kubelet.go:2427] "Starting kubelet main sync loop" Dec 12 18:56:10.493222 kubelet[2723]: E1212 18:56:10.493208 2723 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 18:56:10.539933 kubelet[2723]: I1212 18:56:10.539915 2723 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 18:56:10.540618 kubelet[2723]: I1212 18:56:10.540032 2723 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 18:56:10.540618 kubelet[2723]: I1212 18:56:10.540050 2723 state_mem.go:36] "Initialized new in-memory state store" Dec 12 18:56:10.540618 kubelet[2723]: I1212 18:56:10.540145 2723 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 12 18:56:10.540618 kubelet[2723]: I1212 18:56:10.540154 2723 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 12 18:56:10.540618 kubelet[2723]: I1212 18:56:10.540168 2723 policy_none.go:49] "None policy: Start" Dec 12 18:56:10.540618 kubelet[2723]: I1212 18:56:10.540177 2723 memory_manager.go:187] "Starting memorymanager" policy="None" Dec 12 18:56:10.540618 kubelet[2723]: I1212 18:56:10.540186 2723 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Dec 12 18:56:10.540618 kubelet[2723]: I1212 18:56:10.540279 2723 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Dec 12 18:56:10.540618 kubelet[2723]: I1212 18:56:10.540286 2723 policy_none.go:47] "Start" Dec 12 18:56:10.545104 kubelet[2723]: E1212 18:56:10.545082 2723 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 12 18:56:10.545236 kubelet[2723]: I1212 18:56:10.545223 2723 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 18:56:10.545267 kubelet[2723]: I1212 18:56:10.545236 2723 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 18:56:10.546421 kubelet[2723]: I1212 18:56:10.546297 2723 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 18:56:10.547209 kubelet[2723]: E1212 18:56:10.547159 2723 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 12 18:56:10.594041 kubelet[2723]: I1212 18:56:10.593753 2723 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-237-134-203" Dec 12 18:56:10.594041 kubelet[2723]: I1212 18:56:10.593909 2723 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-237-134-203" Dec 12 18:56:10.594041 kubelet[2723]: I1212 18:56:10.593772 2723 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-237-134-203" Dec 12 18:56:10.599588 kubelet[2723]: E1212 18:56:10.599540 2723 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-237-134-203\" already exists" pod="kube-system/kube-scheduler-172-237-134-203" Dec 12 18:56:10.599839 kubelet[2723]: E1212 18:56:10.599824 2723 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-237-134-203\" already exists" pod="kube-system/kube-apiserver-172-237-134-203" Dec 12 18:56:10.627322 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Dec 12 18:56:10.652273 kubelet[2723]: I1212 18:56:10.652250 2723 kubelet_node_status.go:75] "Attempting to register node" node="172-237-134-203" Dec 12 18:56:10.662484 kubelet[2723]: I1212 18:56:10.662398 2723 kubelet_node_status.go:124] "Node was previously registered" node="172-237-134-203" Dec 12 18:56:10.662630 kubelet[2723]: I1212 18:56:10.662612 2723 kubelet_node_status.go:78] "Successfully registered node" node="172-237-134-203" Dec 12 18:56:10.690015 kubelet[2723]: I1212 18:56:10.689785 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/722271fbc16f9941ca7801422a79d058-usr-share-ca-certificates\") pod \"kube-apiserver-172-237-134-203\" (UID: \"722271fbc16f9941ca7801422a79d058\") " pod="kube-system/kube-apiserver-172-237-134-203" Dec 12 18:56:10.690015 kubelet[2723]: I1212 18:56:10.689821 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ed34628e79df300a8552add002d5ddf2-ca-certs\") pod \"kube-controller-manager-172-237-134-203\" (UID: \"ed34628e79df300a8552add002d5ddf2\") " pod="kube-system/kube-controller-manager-172-237-134-203" Dec 12 18:56:10.690015 kubelet[2723]: I1212 18:56:10.689840 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ed34628e79df300a8552add002d5ddf2-flexvolume-dir\") pod \"kube-controller-manager-172-237-134-203\" (UID: \"ed34628e79df300a8552add002d5ddf2\") " pod="kube-system/kube-controller-manager-172-237-134-203" Dec 12 18:56:10.690015 kubelet[2723]: I1212 18:56:10.689856 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ed34628e79df300a8552add002d5ddf2-usr-share-ca-certificates\") pod \"kube-controller-manager-172-237-134-203\" (UID: \"ed34628e79df300a8552add002d5ddf2\") " pod="kube-system/kube-controller-manager-172-237-134-203" Dec 12 18:56:10.690015 kubelet[2723]: I1212 18:56:10.689871 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3e0a929b0589bf9efd89d703469cebd6-kubeconfig\") pod \"kube-scheduler-172-237-134-203\" (UID: 
\"3e0a929b0589bf9efd89d703469cebd6\") " pod="kube-system/kube-scheduler-172-237-134-203" Dec 12 18:56:10.690209 kubelet[2723]: I1212 18:56:10.689885 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/722271fbc16f9941ca7801422a79d058-k8s-certs\") pod \"kube-apiserver-172-237-134-203\" (UID: \"722271fbc16f9941ca7801422a79d058\") " pod="kube-system/kube-apiserver-172-237-134-203" Dec 12 18:56:10.690209 kubelet[2723]: I1212 18:56:10.689909 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ed34628e79df300a8552add002d5ddf2-k8s-certs\") pod \"kube-controller-manager-172-237-134-203\" (UID: \"ed34628e79df300a8552add002d5ddf2\") " pod="kube-system/kube-controller-manager-172-237-134-203" Dec 12 18:56:10.690209 kubelet[2723]: I1212 18:56:10.689933 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ed34628e79df300a8552add002d5ddf2-kubeconfig\") pod \"kube-controller-manager-172-237-134-203\" (UID: \"ed34628e79df300a8552add002d5ddf2\") " pod="kube-system/kube-controller-manager-172-237-134-203" Dec 12 18:56:10.690209 kubelet[2723]: I1212 18:56:10.689951 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/722271fbc16f9941ca7801422a79d058-ca-certs\") pod \"kube-apiserver-172-237-134-203\" (UID: \"722271fbc16f9941ca7801422a79d058\") " pod="kube-system/kube-apiserver-172-237-134-203" Dec 12 18:56:10.901473 kubelet[2723]: E1212 18:56:10.900252 2723 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:56:10.901659 kubelet[2723]: E1212 18:56:10.901589 2723 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:56:10.901787 kubelet[2723]: E1212 18:56:10.901763 2723 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:56:11.462028 kubelet[2723]: I1212 18:56:11.461970 2723 apiserver.go:52] "Watching apiserver" Dec 12 18:56:11.490991 kubelet[2723]: I1212 18:56:11.488557 2723 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 12 18:56:11.531080 kubelet[2723]: I1212 18:56:11.531042 2723 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-237-134-203" Dec 12 18:56:11.531437 kubelet[2723]: E1212 18:56:11.531404 2723 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:56:11.532024 kubelet[2723]: I1212 18:56:11.532007 2723 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-237-134-203" Dec 12 18:56:11.538675 kubelet[2723]: E1212 18:56:11.538655 2723 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-237-134-203\" already exists" pod="kube-system/kube-apiserver-172-237-134-203" Dec 12 18:56:11.540584 
kubelet[2723]: E1212 18:56:11.540536 2723 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:56:11.541451 kubelet[2723]: E1212 18:56:11.541419 2723 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-237-134-203\" already exists" pod="kube-system/kube-scheduler-172-237-134-203" Dec 12 18:56:11.541582 kubelet[2723]: E1212 18:56:11.541552 2723 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:56:11.621901 kubelet[2723]: I1212 18:56:11.621736 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-237-134-203" podStartSLOduration=3.621720433 podStartE2EDuration="3.621720433s" podCreationTimestamp="2025-12-12 18:56:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:56:11.581691412 +0000 UTC m=+1.178793820" watchObservedRunningTime="2025-12-12 18:56:11.621720433 +0000 UTC m=+1.218822841" Dec 12 18:56:11.631062 kubelet[2723]: I1212 18:56:11.630975 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-237-134-203" podStartSLOduration=3.630963756 podStartE2EDuration="3.630963756s" podCreationTimestamp="2025-12-12 18:56:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:56:11.621789964 +0000 UTC m=+1.218892372" watchObservedRunningTime="2025-12-12 18:56:11.630963756 +0000 UTC m=+1.228066164" Dec 12 18:56:11.650109 kubelet[2723]: I1212 18:56:11.650043 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-237-134-203" podStartSLOduration=1.650029497 podStartE2EDuration="1.650029497s" podCreationTimestamp="2025-12-12 18:56:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:56:11.640022172 +0000 UTC m=+1.237124580" watchObservedRunningTime="2025-12-12 18:56:11.650029497 +0000 UTC m=+1.247131905" Dec 12 18:56:12.529775 kubelet[2723]: E1212 18:56:12.529549 2723 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:56:12.529775 kubelet[2723]: E1212 18:56:12.529622 2723 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:56:12.530576 kubelet[2723]: E1212 18:56:12.530358 2723 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:56:13.531663 kubelet[2723]: E1212 18:56:13.531636 2723 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:56:15.824302 kubelet[2723]: E1212 18:56:15.824134 2723 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:56:16.547596 kubelet[2723]: I1212 18:56:16.547574 2723 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 12 18:56:16.548038 containerd[1560]: time="2025-12-12T18:56:16.547990650Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 12 18:56:16.549852 kubelet[2723]: I1212 18:56:16.549658 2723 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 12 18:56:17.509732 systemd[1]: Created slice kubepods-besteffort-pod06717f73_c67a_4ae7_9f20_829c625f850f.slice - libcontainer container kubepods-besteffort-pod06717f73_c67a_4ae7_9f20_829c625f850f.slice. Dec 12 18:56:17.535038 kubelet[2723]: I1212 18:56:17.534937 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/06717f73-c67a-4ae7-9f20-829c625f850f-kube-proxy\") pod \"kube-proxy-hwjz4\" (UID: \"06717f73-c67a-4ae7-9f20-829c625f850f\") " pod="kube-system/kube-proxy-hwjz4" Dec 12 18:56:17.535038 kubelet[2723]: I1212 18:56:17.534983 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/06717f73-c67a-4ae7-9f20-829c625f850f-lib-modules\") pod \"kube-proxy-hwjz4\" (UID: \"06717f73-c67a-4ae7-9f20-829c625f850f\") " pod="kube-system/kube-proxy-hwjz4" Dec 12 18:56:17.535038 kubelet[2723]: I1212 18:56:17.535003 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/06717f73-c67a-4ae7-9f20-829c625f850f-xtables-lock\") pod \"kube-proxy-hwjz4\" (UID: \"06717f73-c67a-4ae7-9f20-829c625f850f\") " pod="kube-system/kube-proxy-hwjz4" Dec 12 18:56:17.535415 kubelet[2723]: I1212 18:56:17.535072 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmd7l\" (UniqueName: \"kubernetes.io/projected/06717f73-c67a-4ae7-9f20-829c625f850f-kube-api-access-hmd7l\") pod \"kube-proxy-hwjz4\" (UID: \"06717f73-c67a-4ae7-9f20-829c625f850f\") " pod="kube-system/kube-proxy-hwjz4" Dec 12 18:56:17.777354 systemd[1]: Created slice kubepods-besteffort-pod42e23af4_0b0f_4ba2_877d_d026a0b723fd.slice - libcontainer container kubepods-besteffort-pod42e23af4_0b0f_4ba2_877d_d026a0b723fd.slice. 
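The recurring dns.go:154 "Nameserver limits exceeded" errors in this log come from a hard glibc limit: the C resolver only honors the first three nameserver entries in resolv.conf, so when the node hands kubelet more than three, kubelet truncates the list and logs the line it actually applied, here 172.232.0.13 172.232.0.22 172.232.0.9. A minimal sketch of that truncation rule follows; it is an illustration of the limit, not kubelet's actual code.

// Sketch (not kubelet's implementation) of the rule behind the recurring
// dns.go:154 "Nameserver limits exceeded" message: glibc's resolver only
// honors the first three "nameserver" entries in resolv.conf, so anything
// beyond that is dropped and a warning is logged.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // glibc MAXNS

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("Nameserver limits exceeded, applying only: %s\n",
			strings.Join(servers[:maxNameservers], " "))
		servers = servers[:maxNameservers]
	}
	fmt.Println("applied nameservers:", servers)
}

Trimming the node's /etc/resolv.conf (or the upstream list fed to it) to three nameservers would typically silence these messages.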
Dec 12 18:56:17.819378 kubelet[2723]: E1212 18:56:17.819320 2723 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:56:17.820673 containerd[1560]: time="2025-12-12T18:56:17.820600524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hwjz4,Uid:06717f73-c67a-4ae7-9f20-829c625f850f,Namespace:kube-system,Attempt:0,}" Dec 12 18:56:17.837501 kubelet[2723]: I1212 18:56:17.837440 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/42e23af4-0b0f-4ba2-877d-d026a0b723fd-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-zrdvv\" (UID: \"42e23af4-0b0f-4ba2-877d-d026a0b723fd\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-zrdvv" Dec 12 18:56:17.837501 kubelet[2723]: I1212 18:56:17.837500 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncckm\" (UniqueName: \"kubernetes.io/projected/42e23af4-0b0f-4ba2-877d-d026a0b723fd-kube-api-access-ncckm\") pod \"tigera-operator-65cdcdfd6d-zrdvv\" (UID: \"42e23af4-0b0f-4ba2-877d-d026a0b723fd\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-zrdvv" Dec 12 18:56:17.838376 containerd[1560]: time="2025-12-12T18:56:17.838106609Z" level=info msg="connecting to shim 6ba18844b52516f372d3bbd8b738209b7cfbfb3f7a6a29e9eaa942f3eaad32f8" address="unix:///run/containerd/s/48450232ffc718eb91da99ae21a0219301f78770472b4d68188c2ce41c34c6c4" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:56:17.865597 systemd[1]: Started cri-containerd-6ba18844b52516f372d3bbd8b738209b7cfbfb3f7a6a29e9eaa942f3eaad32f8.scope - libcontainer container 6ba18844b52516f372d3bbd8b738209b7cfbfb3f7a6a29e9eaa942f3eaad32f8. Dec 12 18:56:17.895197 containerd[1560]: time="2025-12-12T18:56:17.895152882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hwjz4,Uid:06717f73-c67a-4ae7-9f20-829c625f850f,Namespace:kube-system,Attempt:0,} returns sandbox id \"6ba18844b52516f372d3bbd8b738209b7cfbfb3f7a6a29e9eaa942f3eaad32f8\"" Dec 12 18:56:17.896179 kubelet[2723]: E1212 18:56:17.896086 2723 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:56:17.899983 containerd[1560]: time="2025-12-12T18:56:17.899480802Z" level=info msg="CreateContainer within sandbox \"6ba18844b52516f372d3bbd8b738209b7cfbfb3f7a6a29e9eaa942f3eaad32f8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 12 18:56:17.913490 containerd[1560]: time="2025-12-12T18:56:17.912137954Z" level=info msg="Container a4618b280753b0430b4b3f4c6bb4ea77d607a2b7f6f38dbe9d8597a8f36449fb: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:56:17.916083 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2381221027.mount: Deactivated successfully. 
Dec 12 18:56:17.920484 containerd[1560]: time="2025-12-12T18:56:17.920428439Z" level=info msg="CreateContainer within sandbox \"6ba18844b52516f372d3bbd8b738209b7cfbfb3f7a6a29e9eaa942f3eaad32f8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a4618b280753b0430b4b3f4c6bb4ea77d607a2b7f6f38dbe9d8597a8f36449fb\"" Dec 12 18:56:17.923208 containerd[1560]: time="2025-12-12T18:56:17.921519248Z" level=info msg="StartContainer for \"a4618b280753b0430b4b3f4c6bb4ea77d607a2b7f6f38dbe9d8597a8f36449fb\"" Dec 12 18:56:17.924005 containerd[1560]: time="2025-12-12T18:56:17.923932889Z" level=info msg="connecting to shim a4618b280753b0430b4b3f4c6bb4ea77d607a2b7f6f38dbe9d8597a8f36449fb" address="unix:///run/containerd/s/48450232ffc718eb91da99ae21a0219301f78770472b4d68188c2ce41c34c6c4" protocol=ttrpc version=3 Dec 12 18:56:17.943921 systemd[1]: Started cri-containerd-a4618b280753b0430b4b3f4c6bb4ea77d607a2b7f6f38dbe9d8597a8f36449fb.scope - libcontainer container a4618b280753b0430b4b3f4c6bb4ea77d607a2b7f6f38dbe9d8597a8f36449fb. Dec 12 18:56:18.020815 containerd[1560]: time="2025-12-12T18:56:18.020712470Z" level=info msg="StartContainer for \"a4618b280753b0430b4b3f4c6bb4ea77d607a2b7f6f38dbe9d8597a8f36449fb\" returns successfully" Dec 12 18:56:18.085067 containerd[1560]: time="2025-12-12T18:56:18.084485491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-zrdvv,Uid:42e23af4-0b0f-4ba2-877d-d026a0b723fd,Namespace:tigera-operator,Attempt:0,}" Dec 12 18:56:18.101132 containerd[1560]: time="2025-12-12T18:56:18.101068803Z" level=info msg="connecting to shim 6f7cbc9c5227bd42aa35174a327ab6aa48ab7dadcc7e1b5b85b15d2f76c30e08" address="unix:///run/containerd/s/a359b2191b89b6e9985ec797c41d4e3170122e8ab64857bca8ff9495456b0ad6" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:56:18.133593 systemd[1]: Started cri-containerd-6f7cbc9c5227bd42aa35174a327ab6aa48ab7dadcc7e1b5b85b15d2f76c30e08.scope - libcontainer container 6f7cbc9c5227bd42aa35174a327ab6aa48ab7dadcc7e1b5b85b15d2f76c30e08. 
Dec 12 18:56:18.189377 containerd[1560]: time="2025-12-12T18:56:18.189280546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-zrdvv,Uid:42e23af4-0b0f-4ba2-877d-d026a0b723fd,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"6f7cbc9c5227bd42aa35174a327ab6aa48ab7dadcc7e1b5b85b15d2f76c30e08\"" Dec 12 18:56:18.191451 containerd[1560]: time="2025-12-12T18:56:18.191277563Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 12 18:56:18.544926 kubelet[2723]: E1212 18:56:18.543783 2723 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:56:18.551315 kubelet[2723]: I1212 18:56:18.551030 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hwjz4" podStartSLOduration=1.5510143589999998 podStartE2EDuration="1.551014359s" podCreationTimestamp="2025-12-12 18:56:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:56:18.550957225 +0000 UTC m=+8.148059643" watchObservedRunningTime="2025-12-12 18:56:18.551014359 +0000 UTC m=+8.148116767" Dec 12 18:56:18.950205 kubelet[2723]: E1212 18:56:18.950077 2723 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:56:19.546456 kubelet[2723]: E1212 18:56:19.546409 2723 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:56:19.732718 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4133592408.mount: Deactivated successfully. 
Dec 12 18:56:20.235565 containerd[1560]: time="2025-12-12T18:56:20.235523415Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:56:20.236386 containerd[1560]: time="2025-12-12T18:56:20.236208606Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Dec 12 18:56:20.236886 containerd[1560]: time="2025-12-12T18:56:20.236861884Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:56:20.238534 containerd[1560]: time="2025-12-12T18:56:20.238515153Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:56:20.239053 containerd[1560]: time="2025-12-12T18:56:20.239027298Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.047718592s" Dec 12 18:56:20.239092 containerd[1560]: time="2025-12-12T18:56:20.239056219Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Dec 12 18:56:20.241950 containerd[1560]: time="2025-12-12T18:56:20.241928711Z" level=info msg="CreateContainer within sandbox \"6f7cbc9c5227bd42aa35174a327ab6aa48ab7dadcc7e1b5b85b15d2f76c30e08\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 12 18:56:20.247301 containerd[1560]: time="2025-12-12T18:56:20.247282017Z" level=info msg="Container 1cea973acc25645354db3246ff4857582fab77f6e65aa357ccf71b5ddf586b20: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:56:20.258308 containerd[1560]: time="2025-12-12T18:56:20.258285281Z" level=info msg="CreateContainer within sandbox \"6f7cbc9c5227bd42aa35174a327ab6aa48ab7dadcc7e1b5b85b15d2f76c30e08\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"1cea973acc25645354db3246ff4857582fab77f6e65aa357ccf71b5ddf586b20\"" Dec 12 18:56:20.258913 containerd[1560]: time="2025-12-12T18:56:20.258830818Z" level=info msg="StartContainer for \"1cea973acc25645354db3246ff4857582fab77f6e65aa357ccf71b5ddf586b20\"" Dec 12 18:56:20.259930 containerd[1560]: time="2025-12-12T18:56:20.259907998Z" level=info msg="connecting to shim 1cea973acc25645354db3246ff4857582fab77f6e65aa357ccf71b5ddf586b20" address="unix:///run/containerd/s/a359b2191b89b6e9985ec797c41d4e3170122e8ab64857bca8ff9495456b0ad6" protocol=ttrpc version=3 Dec 12 18:56:20.282593 systemd[1]: Started cri-containerd-1cea973acc25645354db3246ff4857582fab77f6e65aa357ccf71b5ddf586b20.scope - libcontainer container 1cea973acc25645354db3246ff4857582fab77f6e65aa357ccf71b5ddf586b20. 
Dec 12 18:56:20.311579 containerd[1560]: time="2025-12-12T18:56:20.311447636Z" level=info msg="StartContainer for \"1cea973acc25645354db3246ff4857582fab77f6e65aa357ccf71b5ddf586b20\" returns successfully" Dec 12 18:56:20.551014 kubelet[2723]: E1212 18:56:20.550916 2723 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:56:20.561979 kubelet[2723]: I1212 18:56:20.560422 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-zrdvv" podStartSLOduration=1.511384548 podStartE2EDuration="3.560395828s" podCreationTimestamp="2025-12-12 18:56:17 +0000 UTC" firstStartedPulling="2025-12-12 18:56:18.190933587 +0000 UTC m=+7.788035995" lastFinishedPulling="2025-12-12 18:56:20.239944867 +0000 UTC m=+9.837047275" observedRunningTime="2025-12-12 18:56:20.559431641 +0000 UTC m=+10.156534059" watchObservedRunningTime="2025-12-12 18:56:20.560395828 +0000 UTC m=+10.157498236" Dec 12 18:56:22.273630 kubelet[2723]: E1212 18:56:22.273599 2723 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:56:24.716623 update_engine[1534]: I20251212 18:56:24.716564 1534 update_attempter.cc:509] Updating boot flags... Dec 12 18:56:25.532300 sudo[1801]: pam_unix(sudo:session): session closed for user root Dec 12 18:56:25.584490 sshd[1800]: Connection closed by 139.178.68.195 port 44670 Dec 12 18:56:25.585616 sshd-session[1797]: pam_unix(sshd:session): session closed for user core Dec 12 18:56:25.592178 systemd[1]: sshd@6-172.237.134.203:22-139.178.68.195:44670.service: Deactivated successfully. Dec 12 18:56:25.596739 systemd[1]: session-7.scope: Deactivated successfully. Dec 12 18:56:25.597727 systemd[1]: session-7.scope: Consumed 4.568s CPU time, 235.9M memory peak. Dec 12 18:56:25.602714 systemd-logind[1533]: Session 7 logged out. Waiting for processes to exit. Dec 12 18:56:25.605695 systemd-logind[1533]: Removed session 7. Dec 12 18:56:25.833673 kubelet[2723]: E1212 18:56:25.833523 2723 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:56:26.563440 kubelet[2723]: E1212 18:56:26.563046 2723 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:56:29.846493 systemd[1]: Created slice kubepods-besteffort-podc21373b2_8f37_4213_8404_3c3ea8e145f5.slice - libcontainer container kubepods-besteffort-podc21373b2_8f37_4213_8404_3c3ea8e145f5.slice. 
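The pod_startup_latency_tracker entries above encode a simple relationship: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to further exclude the image-pull window (lastFinishedPulling minus firstStartedPulling). The tigera-operator entry checks out exactly: 3.560395828s - 2.04901128s = 1.511384548s, and for the static pods, whose pull timestamps are the 0001-01-01 sentinel, the two durations coincide. The short sketch below verifies the arithmetic; the timestamps are copied from the log, while the field interpretation is our reading of the output rather than documented kubelet behavior.

// Quick check of the pod_startup_latency_tracker arithmetic for the
// tigera-operator pod: E2E duration is observedRunningTime minus
// podCreationTimestamp, and the SLO duration excludes the image-pull
// window (lastFinishedPulling - firstStartedPulling).
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-12-12 18:56:17 +0000 UTC")
	firstPull := mustParse("2025-12-12 18:56:18.190933587 +0000 UTC")
	lastPull := mustParse("2025-12-12 18:56:20.239944867 +0000 UTC")
	running := mustParse("2025-12-12 18:56:20.560395828 +0000 UTC")

	e2e := running.Sub(created)          // 3.560395828s
	slo := e2e - lastPull.Sub(firstPull) // 1.511384548s
	fmt.Println("podStartE2EDuration:", e2e)
	fmt.Println("podStartSLOduration:", slo)
}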
Dec 12 18:56:29.911105 kubelet[2723]: I1212 18:56:29.911072 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c21373b2-8f37-4213-8404-3c3ea8e145f5-tigera-ca-bundle\") pod \"calico-typha-84d5d8d95f-h4c29\" (UID: \"c21373b2-8f37-4213-8404-3c3ea8e145f5\") " pod="calico-system/calico-typha-84d5d8d95f-h4c29" Dec 12 18:56:29.911105 kubelet[2723]: I1212 18:56:29.911108 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/c21373b2-8f37-4213-8404-3c3ea8e145f5-typha-certs\") pod \"calico-typha-84d5d8d95f-h4c29\" (UID: \"c21373b2-8f37-4213-8404-3c3ea8e145f5\") " pod="calico-system/calico-typha-84d5d8d95f-h4c29" Dec 12 18:56:29.911605 kubelet[2723]: I1212 18:56:29.911129 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tcr9\" (UniqueName: \"kubernetes.io/projected/c21373b2-8f37-4213-8404-3c3ea8e145f5-kube-api-access-2tcr9\") pod \"calico-typha-84d5d8d95f-h4c29\" (UID: \"c21373b2-8f37-4213-8404-3c3ea8e145f5\") " pod="calico-system/calico-typha-84d5d8d95f-h4c29" Dec 12 18:56:30.045889 systemd[1]: Created slice kubepods-besteffort-pod3edbb975_3692_4f18_aca3_ee13c37023e9.slice - libcontainer container kubepods-besteffort-pod3edbb975_3692_4f18_aca3_ee13c37023e9.slice. Dec 12 18:56:30.112706 kubelet[2723]: I1212 18:56:30.112526 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/3edbb975-3692-4f18-aca3-ee13c37023e9-flexvol-driver-host\") pod \"calico-node-kk4pg\" (UID: \"3edbb975-3692-4f18-aca3-ee13c37023e9\") " pod="calico-system/calico-node-kk4pg" Dec 12 18:56:30.112706 kubelet[2723]: I1212 18:56:30.112561 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/3edbb975-3692-4f18-aca3-ee13c37023e9-cni-log-dir\") pod \"calico-node-kk4pg\" (UID: \"3edbb975-3692-4f18-aca3-ee13c37023e9\") " pod="calico-system/calico-node-kk4pg" Dec 12 18:56:30.112706 kubelet[2723]: I1212 18:56:30.112576 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/3edbb975-3692-4f18-aca3-ee13c37023e9-var-run-calico\") pod \"calico-node-kk4pg\" (UID: \"3edbb975-3692-4f18-aca3-ee13c37023e9\") " pod="calico-system/calico-node-kk4pg" Dec 12 18:56:30.112706 kubelet[2723]: I1212 18:56:30.112592 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3edbb975-3692-4f18-aca3-ee13c37023e9-xtables-lock\") pod \"calico-node-kk4pg\" (UID: \"3edbb975-3692-4f18-aca3-ee13c37023e9\") " pod="calico-system/calico-node-kk4pg" Dec 12 18:56:30.112706 kubelet[2723]: I1212 18:56:30.112607 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/3edbb975-3692-4f18-aca3-ee13c37023e9-node-certs\") pod \"calico-node-kk4pg\" (UID: \"3edbb975-3692-4f18-aca3-ee13c37023e9\") " pod="calico-system/calico-node-kk4pg" Dec 12 18:56:30.112904 kubelet[2723]: I1212 18:56:30.112621 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/3edbb975-3692-4f18-aca3-ee13c37023e9-tigera-ca-bundle\") pod \"calico-node-kk4pg\" (UID: \"3edbb975-3692-4f18-aca3-ee13c37023e9\") " pod="calico-system/calico-node-kk4pg" Dec 12 18:56:30.112904 kubelet[2723]: I1212 18:56:30.112635 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plv97\" (UniqueName: \"kubernetes.io/projected/3edbb975-3692-4f18-aca3-ee13c37023e9-kube-api-access-plv97\") pod \"calico-node-kk4pg\" (UID: \"3edbb975-3692-4f18-aca3-ee13c37023e9\") " pod="calico-system/calico-node-kk4pg" Dec 12 18:56:30.112904 kubelet[2723]: I1212 18:56:30.112650 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/3edbb975-3692-4f18-aca3-ee13c37023e9-cni-bin-dir\") pod \"calico-node-kk4pg\" (UID: \"3edbb975-3692-4f18-aca3-ee13c37023e9\") " pod="calico-system/calico-node-kk4pg" Dec 12 18:56:30.112904 kubelet[2723]: I1212 18:56:30.112665 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3edbb975-3692-4f18-aca3-ee13c37023e9-lib-modules\") pod \"calico-node-kk4pg\" (UID: \"3edbb975-3692-4f18-aca3-ee13c37023e9\") " pod="calico-system/calico-node-kk4pg" Dec 12 18:56:30.112904 kubelet[2723]: I1212 18:56:30.112677 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/3edbb975-3692-4f18-aca3-ee13c37023e9-policysync\") pod \"calico-node-kk4pg\" (UID: \"3edbb975-3692-4f18-aca3-ee13c37023e9\") " pod="calico-system/calico-node-kk4pg" Dec 12 18:56:30.113009 kubelet[2723]: I1212 18:56:30.112690 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3edbb975-3692-4f18-aca3-ee13c37023e9-var-lib-calico\") pod \"calico-node-kk4pg\" (UID: \"3edbb975-3692-4f18-aca3-ee13c37023e9\") " pod="calico-system/calico-node-kk4pg" Dec 12 18:56:30.113009 kubelet[2723]: I1212 18:56:30.112703 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/3edbb975-3692-4f18-aca3-ee13c37023e9-cni-net-dir\") pod \"calico-node-kk4pg\" (UID: \"3edbb975-3692-4f18-aca3-ee13c37023e9\") " pod="calico-system/calico-node-kk4pg" Dec 12 18:56:30.154728 kubelet[2723]: E1212 18:56:30.154607 2723 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:56:30.156385 containerd[1560]: time="2025-12-12T18:56:30.155533953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-84d5d8d95f-h4c29,Uid:c21373b2-8f37-4213-8404-3c3ea8e145f5,Namespace:calico-system,Attempt:0,}" Dec 12 18:56:30.173494 containerd[1560]: time="2025-12-12T18:56:30.173082484Z" level=info msg="connecting to shim d0a16e70352d66bf77ea0ed9bb17649535ac5b67641aa7ee4a2e75e49575b662" address="unix:///run/containerd/s/5594f0779735799aaae7caacd8cee64070811b275397a67dfc7927c7264a5005" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:56:30.202813 systemd[1]: Started cri-containerd-d0a16e70352d66bf77ea0ed9bb17649535ac5b67641aa7ee4a2e75e49575b662.scope - libcontainer container 
d0a16e70352d66bf77ea0ed9bb17649535ac5b67641aa7ee4a2e75e49575b662. Dec 12 18:56:30.222230 kubelet[2723]: E1212 18:56:30.222192 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:56:30.222230 kubelet[2723]: W1212 18:56:30.222214 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:56:30.222230 kubelet[2723]: E1212 18:56:30.222233 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:56:30.225996 kubelet[2723]: E1212 18:56:30.223159 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:56:30.225996 kubelet[2723]: W1212 18:56:30.223172 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:56:30.225996 kubelet[2723]: E1212 18:56:30.223184 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:56:30.235984 kubelet[2723]: E1212 18:56:30.235936 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xcxxg" podUID="7adfcf36-f09b-4802-a329-cb264c08cc5c" Dec 12 18:56:30.242555 kubelet[2723]: E1212 18:56:30.242148 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:56:30.242555 kubelet[2723]: W1212 18:56:30.242553 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:56:30.242808 kubelet[2723]: E1212 18:56:30.242574 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:56:30.309655 kubelet[2723]: E1212 18:56:30.309621 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:56:30.309655 kubelet[2723]: W1212 18:56:30.309647 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:56:30.309655 kubelet[2723]: E1212 18:56:30.309666 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Dec 12 18:56:30.318446 kubelet[2723]: I1212 18:56:30.318365 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7adfcf36-f09b-4802-a329-cb264c08cc5c-varrun\") pod \"csi-node-driver-xcxxg\" (UID: \"7adfcf36-f09b-4802-a329-cb264c08cc5c\") " pod="calico-system/csi-node-driver-xcxxg" Dec 12 18:56:30.319496 kubelet[2723]: I1212 18:56:30.318971 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7adfcf36-f09b-4802-a329-cb264c08cc5c-kubelet-dir\") pod \"csi-node-driver-xcxxg\" (UID: \"7adfcf36-f09b-4802-a329-cb264c08cc5c\") " pod="calico-system/csi-node-driver-xcxxg" Dec 12 18:56:30.320091 kubelet[2723]: I1212 18:56:30.320017 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7adfcf36-f09b-4802-a329-cb264c08cc5c-registration-dir\") pod \"csi-node-driver-xcxxg\" (UID: \"7adfcf36-f09b-4802-a329-cb264c08cc5c\") " pod="calico-system/csi-node-driver-xcxxg" Dec 12 18:56:30.322975 kubelet[2723]: I1212 18:56:30.322912 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5k5c5\" (UniqueName: \"kubernetes.io/projected/7adfcf36-f09b-4802-a329-cb264c08cc5c-kube-api-access-5k5c5\") pod \"csi-node-driver-xcxxg\" (UID: \"7adfcf36-f09b-4802-a329-cb264c08cc5c\") " pod="calico-system/csi-node-driver-xcxxg" Dec 12 18:56:30.324952 kubelet[2723]: I1212 18:56:30.324828 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7adfcf36-f09b-4802-a329-cb264c08cc5c-socket-dir\") pod \"csi-node-driver-xcxxg\" (UID: \"7adfcf36-f09b-4802-a329-cb264c08cc5c\") " pod="calico-system/csi-node-driver-xcxxg"
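The repeated driver-call.go and plugins.go errors in this window are kubelet re-probing its FlexVolume plugin directory on every volume event, which is why the same three-line message recurs over and over here. Each discovered driver is executed with the argument init and must print a JSON status object on stdout; because Calico's uds driver is not yet installed under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/, the call yields empty output and the JSON unmarshal fails. The messages should stop once calico-node's init container populates the flexvol-driver-host path mounted above. Below is a minimal illustrative stand-in for the init handshake, not Calico's actual driver.

// Minimal sketch of what kubelet's FlexVolume probe expects from the
// driver binary when invoked with "init": a JSON status object on
// stdout. The empty output logged above is what fails to unmarshal.
// Illustrative stand-in only, not Calico's actual uds driver.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type capabilities struct {
	Attach bool `json:"attach"`
}

type driverStatus struct {
	Status       string        `json:"status"`
	Capabilities *capabilities `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: &capabilities{Attach: false},
		})
		fmt.Println(string(out))
		return
	}
	// Other FlexVolume calls (mount, unmount, ...) would be handled here.
	fmt.Println(`{"status":"Not supported"}`)
}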
Dec 12 18:56:30.332077 containerd[1560]: time="2025-12-12T18:56:30.331979594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-84d5d8d95f-h4c29,Uid:c21373b2-8f37-4213-8404-3c3ea8e145f5,Namespace:calico-system,Attempt:0,} returns sandbox id \"d0a16e70352d66bf77ea0ed9bb17649535ac5b67641aa7ee4a2e75e49575b662\"" Dec 12 18:56:30.333350 kubelet[2723]: E1212 18:56:30.333326 2723 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:56:30.334455 containerd[1560]: time="2025-12-12T18:56:30.334436717Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 12 18:56:30.351905 kubelet[2723]: E1212 18:56:30.351862 2723 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:56:30.353060 containerd[1560]: time="2025-12-12T18:56:30.352994690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kk4pg,Uid:3edbb975-3692-4f18-aca3-ee13c37023e9,Namespace:calico-system,Attempt:0,}" Dec 12 18:56:30.372963 containerd[1560]: time="2025-12-12T18:56:30.371444937Z" level=info msg="connecting to shim 09e5650c20a2633b12455f0a39da3239d4dc82f713f0da4b20c31b65d76cd803" address="unix:///run/containerd/s/17e9d423db30b5bf8ba0d3797a5c3338a1ea0dda4b133977ad98eb1b359d6277" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:56:30.400603 systemd[1]: Started cri-containerd-09e5650c20a2633b12455f0a39da3239d4dc82f713f0da4b20c31b65d76cd803.scope - libcontainer container 09e5650c20a2633b12455f0a39da3239d4dc82f713f0da4b20c31b65d76cd803. Dec 12 18:56:30.427510 kubelet[2723]: E1212 18:56:30.427405 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:56:30.427510 kubelet[2723]: W1212 18:56:30.427423 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:56:30.427510 kubelet[2723]: E1212 18:56:30.427472 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:56:30.430113 containerd[1560]: time="2025-12-12T18:56:30.429961766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kk4pg,Uid:3edbb975-3692-4f18-aca3-ee13c37023e9,Namespace:calico-system,Attempt:0,} returns sandbox id \"09e5650c20a2633b12455f0a39da3239d4dc82f713f0da4b20c31b65d76cd803\""
Error: unexpected end of JSON input" Dec 12 18:56:30.430774 kubelet[2723]: E1212 18:56:30.430752 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:56:30.430774 kubelet[2723]: W1212 18:56:30.430764 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:56:30.430774 kubelet[2723]: E1212 18:56:30.430772 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:56:30.431140 kubelet[2723]: E1212 18:56:30.431127 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:56:30.431140 kubelet[2723]: W1212 18:56:30.431138 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:56:30.431223 kubelet[2723]: E1212 18:56:30.431147 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:56:30.431599 kubelet[2723]: E1212 18:56:30.431578 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:56:30.431599 kubelet[2723]: W1212 18:56:30.431591 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:56:30.431649 kubelet[2723]: E1212 18:56:30.431627 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:56:30.431848 kubelet[2723]: E1212 18:56:30.431806 2723 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:56:30.432003 kubelet[2723]: E1212 18:56:30.431986 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:56:30.432003 kubelet[2723]: W1212 18:56:30.431998 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:56:30.432048 kubelet[2723]: E1212 18:56:30.432008 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:56:30.432545 kubelet[2723]: E1212 18:56:30.432527 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:56:30.432545 kubelet[2723]: W1212 18:56:30.432541 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:56:30.432875 kubelet[2723]: E1212 18:56:30.432550 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:56:30.432929 kubelet[2723]: E1212 18:56:30.432897 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:56:30.432929 kubelet[2723]: W1212 18:56:30.432908 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:56:30.432929 kubelet[2723]: E1212 18:56:30.432917 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:56:30.433385 kubelet[2723]: E1212 18:56:30.433367 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:56:30.433385 kubelet[2723]: W1212 18:56:30.433382 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:56:30.433580 kubelet[2723]: E1212 18:56:30.433391 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:56:30.433753 kubelet[2723]: E1212 18:56:30.433737 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:56:30.433753 kubelet[2723]: W1212 18:56:30.433750 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:56:30.433935 kubelet[2723]: E1212 18:56:30.433758 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:56:30.434236 kubelet[2723]: E1212 18:56:30.434224 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:56:30.434236 kubelet[2723]: W1212 18:56:30.434235 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:56:30.434309 kubelet[2723]: E1212 18:56:30.434244 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:56:30.434736 kubelet[2723]: E1212 18:56:30.434723 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:56:30.434736 kubelet[2723]: W1212 18:56:30.434734 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:56:30.434736 kubelet[2723]: E1212 18:56:30.434742 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:56:30.435068 kubelet[2723]: E1212 18:56:30.435037 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:56:30.435068 kubelet[2723]: W1212 18:56:30.435050 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:56:30.435068 kubelet[2723]: E1212 18:56:30.435059 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:56:30.435392 kubelet[2723]: E1212 18:56:30.435351 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:56:30.435392 kubelet[2723]: W1212 18:56:30.435385 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:56:30.435392 kubelet[2723]: E1212 18:56:30.435394 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:56:30.436090 kubelet[2723]: E1212 18:56:30.435833 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:56:30.436090 kubelet[2723]: W1212 18:56:30.435841 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:56:30.436090 kubelet[2723]: E1212 18:56:30.435850 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:56:30.436303 kubelet[2723]: E1212 18:56:30.436269 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:56:30.436303 kubelet[2723]: W1212 18:56:30.436302 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:56:30.436349 kubelet[2723]: E1212 18:56:30.436312 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:56:30.446563 kubelet[2723]: E1212 18:56:30.446534 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:56:30.446563 kubelet[2723]: W1212 18:56:30.446549 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:56:30.446563 kubelet[2723]: E1212 18:56:30.446560 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:56:31.159923 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2281009380.mount: Deactivated successfully. Dec 12 18:56:31.493965 kubelet[2723]: E1212 18:56:31.493924 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xcxxg" podUID="7adfcf36-f09b-4802-a329-cb264c08cc5c" Dec 12 18:56:31.759976 containerd[1560]: time="2025-12-12T18:56:31.759355081Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:56:31.759976 containerd[1560]: time="2025-12-12T18:56:31.759901200Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Dec 12 18:56:31.760523 containerd[1560]: time="2025-12-12T18:56:31.760500471Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:56:31.761837 containerd[1560]: time="2025-12-12T18:56:31.761819510Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:56:31.762313 containerd[1560]: time="2025-12-12T18:56:31.762286592Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 1.427807276s" Dec 12 18:56:31.762346 containerd[1560]: time="2025-12-12T18:56:31.762314178Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Dec 12 18:56:31.763958 containerd[1560]: time="2025-12-12T18:56:31.763759104Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 12 18:56:31.779743 containerd[1560]: time="2025-12-12T18:56:31.779673015Z" level=info msg="CreateContainer within sandbox \"d0a16e70352d66bf77ea0ed9bb17649535ac5b67641aa7ee4a2e75e49575b662\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 12 18:56:31.786486 containerd[1560]: time="2025-12-12T18:56:31.786359497Z" level=info msg="Container 3e322b69a4b7b677a693b8f137cce4c3e4ec4f065e518c0566ceba106fcfa1f6: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:56:31.793130 containerd[1560]: 
time="2025-12-12T18:56:31.793108503Z" level=info msg="CreateContainer within sandbox \"d0a16e70352d66bf77ea0ed9bb17649535ac5b67641aa7ee4a2e75e49575b662\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"3e322b69a4b7b677a693b8f137cce4c3e4ec4f065e518c0566ceba106fcfa1f6\"" Dec 12 18:56:31.796897 containerd[1560]: time="2025-12-12T18:56:31.796856983Z" level=info msg="StartContainer for \"3e322b69a4b7b677a693b8f137cce4c3e4ec4f065e518c0566ceba106fcfa1f6\"" Dec 12 18:56:31.800146 containerd[1560]: time="2025-12-12T18:56:31.799813770Z" level=info msg="connecting to shim 3e322b69a4b7b677a693b8f137cce4c3e4ec4f065e518c0566ceba106fcfa1f6" address="unix:///run/containerd/s/5594f0779735799aaae7caacd8cee64070811b275397a67dfc7927c7264a5005" protocol=ttrpc version=3 Dec 12 18:56:31.823721 systemd[1]: Started cri-containerd-3e322b69a4b7b677a693b8f137cce4c3e4ec4f065e518c0566ceba106fcfa1f6.scope - libcontainer container 3e322b69a4b7b677a693b8f137cce4c3e4ec4f065e518c0566ceba106fcfa1f6. Dec 12 18:56:31.882235 containerd[1560]: time="2025-12-12T18:56:31.882203751Z" level=info msg="StartContainer for \"3e322b69a4b7b677a693b8f137cce4c3e4ec4f065e518c0566ceba106fcfa1f6\" returns successfully" Dec 12 18:56:32.582429 kubelet[2723]: E1212 18:56:32.581556 2723 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:56:32.591829 containerd[1560]: time="2025-12-12T18:56:32.591799842Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:56:32.593142 containerd[1560]: time="2025-12-12T18:56:32.592498098Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Dec 12 18:56:32.593350 containerd[1560]: time="2025-12-12T18:56:32.593316809Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:56:32.596936 containerd[1560]: time="2025-12-12T18:56:32.596909480Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:56:32.597942 containerd[1560]: time="2025-12-12T18:56:32.597607205Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 833.825006ms" Dec 12 18:56:32.597942 containerd[1560]: time="2025-12-12T18:56:32.597634241Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Dec 12 18:56:32.601260 containerd[1560]: time="2025-12-12T18:56:32.601229472Z" level=info msg="CreateContainer within sandbox \"09e5650c20a2633b12455f0a39da3239d4dc82f713f0da4b20c31b65d76cd803\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 12 18:56:32.606819 containerd[1560]: time="2025-12-12T18:56:32.606800867Z" level=info 
msg="Container 347137b76912240482c772532939c13357017bb24dfcffa3aea06eca7f175554: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:56:32.630780 containerd[1560]: time="2025-12-12T18:56:32.630747490Z" level=info msg="CreateContainer within sandbox \"09e5650c20a2633b12455f0a39da3239d4dc82f713f0da4b20c31b65d76cd803\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"347137b76912240482c772532939c13357017bb24dfcffa3aea06eca7f175554\"" Dec 12 18:56:32.631605 containerd[1560]: time="2025-12-12T18:56:32.631583235Z" level=info msg="StartContainer for \"347137b76912240482c772532939c13357017bb24dfcffa3aea06eca7f175554\"" Dec 12 18:56:32.633275 containerd[1560]: time="2025-12-12T18:56:32.633088030Z" level=info msg="connecting to shim 347137b76912240482c772532939c13357017bb24dfcffa3aea06eca7f175554" address="unix:///run/containerd/s/17e9d423db30b5bf8ba0d3797a5c3338a1ea0dda4b133977ad98eb1b359d6277" protocol=ttrpc version=3 Dec 12 18:56:32.637962 kubelet[2723]: E1212 18:56:32.637916 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:56:32.637962 kubelet[2723]: W1212 18:56:32.637933 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:56:32.637962 kubelet[2723]: E1212 18:56:32.637950 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:56:32.638953 kubelet[2723]: E1212 18:56:32.638127 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:56:32.638953 kubelet[2723]: W1212 18:56:32.638134 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:56:32.638953 kubelet[2723]: E1212 18:56:32.638142 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:56:32.638953 kubelet[2723]: E1212 18:56:32.638291 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:56:32.638953 kubelet[2723]: W1212 18:56:32.638298 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:56:32.638953 kubelet[2723]: E1212 18:56:32.638306 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:56:32.639519 kubelet[2723]: E1212 18:56:32.639052 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:56:32.639519 kubelet[2723]: W1212 18:56:32.639061 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:56:32.639519 kubelet[2723]: E1212 18:56:32.639070 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:56:32.641260 kubelet[2723]: E1212 18:56:32.641022 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:56:32.641260 kubelet[2723]: W1212 18:56:32.641033 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:56:32.641260 kubelet[2723]: E1212 18:56:32.641043 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:56:32.641415 kubelet[2723]: E1212 18:56:32.641277 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:56:32.641415 kubelet[2723]: W1212 18:56:32.641285 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:56:32.641415 kubelet[2723]: E1212 18:56:32.641293 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:56:32.641717 kubelet[2723]: E1212 18:56:32.641481 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:56:32.641717 kubelet[2723]: W1212 18:56:32.641489 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:56:32.641717 kubelet[2723]: E1212 18:56:32.641497 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:56:32.642719 kubelet[2723]: E1212 18:56:32.642636 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:56:32.642719 kubelet[2723]: W1212 18:56:32.642647 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:56:32.642719 kubelet[2723]: E1212 18:56:32.642657 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:56:32.643260 kubelet[2723]: E1212 18:56:32.643220 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:56:32.643260 kubelet[2723]: W1212 18:56:32.643234 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:56:32.643260 kubelet[2723]: E1212 18:56:32.643245 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:56:32.643616 kubelet[2723]: E1212 18:56:32.643602 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:56:32.644062 kubelet[2723]: W1212 18:56:32.643883 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:56:32.644158 kubelet[2723]: E1212 18:56:32.644111 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:56:32.644788 kubelet[2723]: E1212 18:56:32.644672 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:56:32.645045 kubelet[2723]: W1212 18:56:32.644928 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:56:32.645045 kubelet[2723]: E1212 18:56:32.644944 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:56:32.645511 kubelet[2723]: E1212 18:56:32.645499 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:56:32.645705 kubelet[2723]: W1212 18:56:32.645566 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:56:32.645705 kubelet[2723]: E1212 18:56:32.645579 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:56:32.645803 kubelet[2723]: E1212 18:56:32.645790 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:56:32.645832 kubelet[2723]: W1212 18:56:32.645802 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:56:32.645832 kubelet[2723]: E1212 18:56:32.645812 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:56:32.646000 kubelet[2723]: E1212 18:56:32.645987 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:56:32.646000 kubelet[2723]: W1212 18:56:32.645997 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:56:32.646111 kubelet[2723]: E1212 18:56:32.646028 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:56:32.646642 kubelet[2723]: E1212 18:56:32.646629 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:56:32.646642 kubelet[2723]: W1212 18:56:32.646640 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:56:32.646700 kubelet[2723]: E1212 18:56:32.646649 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:56:32.646880 kubelet[2723]: E1212 18:56:32.646867 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:56:32.646880 kubelet[2723]: W1212 18:56:32.646879 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:56:32.646932 kubelet[2723]: E1212 18:56:32.646887 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:56:32.647105 kubelet[2723]: E1212 18:56:32.647093 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:56:32.647105 kubelet[2723]: W1212 18:56:32.647103 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:56:32.647158 kubelet[2723]: E1212 18:56:32.647112 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:56:32.647345 kubelet[2723]: E1212 18:56:32.647332 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:56:32.647345 kubelet[2723]: W1212 18:56:32.647344 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:56:32.647439 kubelet[2723]: E1212 18:56:32.647352 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:56:32.648344 kubelet[2723]: E1212 18:56:32.648324 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:56:32.648344 kubelet[2723]: W1212 18:56:32.648339 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:56:32.648403 kubelet[2723]: E1212 18:56:32.648348 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:56:32.648591 kubelet[2723]: E1212 18:56:32.648548 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:56:32.648591 kubelet[2723]: W1212 18:56:32.648557 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:56:32.648591 kubelet[2723]: E1212 18:56:32.648567 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:56:32.648796 kubelet[2723]: E1212 18:56:32.648781 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:56:32.648796 kubelet[2723]: W1212 18:56:32.648792 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:56:32.648844 kubelet[2723]: E1212 18:56:32.648800 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:56:32.649057 kubelet[2723]: E1212 18:56:32.649044 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:56:32.649057 kubelet[2723]: W1212 18:56:32.649055 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:56:32.649123 kubelet[2723]: E1212 18:56:32.649063 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:56:32.649748 kubelet[2723]: E1212 18:56:32.649721 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:56:32.649748 kubelet[2723]: W1212 18:56:32.649735 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:56:32.649748 kubelet[2723]: E1212 18:56:32.649743 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:56:32.650032 kubelet[2723]: E1212 18:56:32.649998 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:56:32.650032 kubelet[2723]: W1212 18:56:32.650009 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:56:32.650032 kubelet[2723]: E1212 18:56:32.650019 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:56:32.650626 kubelet[2723]: E1212 18:56:32.650543 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:56:32.650626 kubelet[2723]: W1212 18:56:32.650553 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:56:32.650626 kubelet[2723]: E1212 18:56:32.650562 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:56:32.650898 kubelet[2723]: E1212 18:56:32.650797 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:56:32.650898 kubelet[2723]: W1212 18:56:32.650807 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:56:32.650898 kubelet[2723]: E1212 18:56:32.650815 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:56:32.652517 kubelet[2723]: E1212 18:56:32.652476 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:56:32.652517 kubelet[2723]: W1212 18:56:32.652496 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:56:32.652517 kubelet[2723]: E1212 18:56:32.652505 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:56:32.653221 kubelet[2723]: E1212 18:56:32.653195 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:56:32.653379 kubelet[2723]: W1212 18:56:32.653274 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:56:32.653379 kubelet[2723]: E1212 18:56:32.653288 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:56:32.653644 kubelet[2723]: E1212 18:56:32.653563 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:56:32.653644 kubelet[2723]: W1212 18:56:32.653573 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:56:32.653644 kubelet[2723]: E1212 18:56:32.653582 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:56:32.653852 kubelet[2723]: E1212 18:56:32.653828 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:56:32.653921 kubelet[2723]: W1212 18:56:32.653909 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:56:32.653965 kubelet[2723]: E1212 18:56:32.653955 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:56:32.654244 kubelet[2723]: E1212 18:56:32.654214 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:56:32.654244 kubelet[2723]: W1212 18:56:32.654224 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:56:32.654244 kubelet[2723]: E1212 18:56:32.654232 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:56:32.654706 kubelet[2723]: E1212 18:56:32.654673 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:56:32.654706 kubelet[2723]: W1212 18:56:32.654684 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:56:32.654706 kubelet[2723]: E1212 18:56:32.654693 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:56:32.655063 kubelet[2723]: E1212 18:56:32.655028 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:56:32.655063 kubelet[2723]: W1212 18:56:32.655038 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:56:32.655063 kubelet[2723]: E1212 18:56:32.655047 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:56:32.666590 systemd[1]: Started cri-containerd-347137b76912240482c772532939c13357017bb24dfcffa3aea06eca7f175554.scope - libcontainer container 347137b76912240482c772532939c13357017bb24dfcffa3aea06eca7f175554. Dec 12 18:56:32.738567 containerd[1560]: time="2025-12-12T18:56:32.738489274Z" level=info msg="StartContainer for \"347137b76912240482c772532939c13357017bb24dfcffa3aea06eca7f175554\" returns successfully" Dec 12 18:56:32.750804 systemd[1]: cri-containerd-347137b76912240482c772532939c13357017bb24dfcffa3aea06eca7f175554.scope: Deactivated successfully. Dec 12 18:56:32.757094 containerd[1560]: time="2025-12-12T18:56:32.757064776Z" level=info msg="received container exit event container_id:\"347137b76912240482c772532939c13357017bb24dfcffa3aea06eca7f175554\" id:\"347137b76912240482c772532939c13357017bb24dfcffa3aea06eca7f175554\" pid:3421 exited_at:{seconds:1765565792 nanos:756590387}" Dec 12 18:56:32.780798 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-347137b76912240482c772532939c13357017bb24dfcffa3aea06eca7f175554-rootfs.mount: Deactivated successfully. Dec 12 18:56:33.494535 kubelet[2723]: E1212 18:56:33.494440 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xcxxg" podUID="7adfcf36-f09b-4802-a329-cb264c08cc5c" Dec 12 18:56:33.587223 kubelet[2723]: I1212 18:56:33.586381 2723 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 18:56:33.587223 kubelet[2723]: E1212 18:56:33.586773 2723 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:56:33.587223 kubelet[2723]: E1212 18:56:33.586825 2723 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:56:33.588619 containerd[1560]: time="2025-12-12T18:56:33.588435312Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 12 18:56:33.603403 kubelet[2723]: I1212 18:56:33.602694 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-84d5d8d95f-h4c29" podStartSLOduration=3.173736261 podStartE2EDuration="4.602681799s" podCreationTimestamp="2025-12-12 18:56:29 +0000 UTC" firstStartedPulling="2025-12-12 18:56:30.334166985 +0000 UTC m=+19.931269393" lastFinishedPulling="2025-12-12 18:56:31.763112523 +0000 UTC m=+21.360214931" observedRunningTime="2025-12-12 18:56:32.596105522 +0000 UTC m=+22.193207930" watchObservedRunningTime="2025-12-12 18:56:33.602681799 +0000 UTC m=+23.199784207" Dec 12 18:56:35.494069 kubelet[2723]: E1212 18:56:35.493933 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xcxxg" podUID="7adfcf36-f09b-4802-a329-cb264c08cc5c" Dec 12 18:56:35.664734 containerd[1560]: time="2025-12-12T18:56:35.664683869Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:56:35.665456 
containerd[1560]: time="2025-12-12T18:56:35.665410572Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Dec 12 18:56:35.668910 containerd[1560]: time="2025-12-12T18:56:35.666105419Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:56:35.670452 containerd[1560]: time="2025-12-12T18:56:35.670415148Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:56:35.671106 containerd[1560]: time="2025-12-12T18:56:35.671077790Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.082496729s" Dec 12 18:56:35.671148 containerd[1560]: time="2025-12-12T18:56:35.671106975Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Dec 12 18:56:35.675106 containerd[1560]: time="2025-12-12T18:56:35.675069851Z" level=info msg="CreateContainer within sandbox \"09e5650c20a2633b12455f0a39da3239d4dc82f713f0da4b20c31b65d76cd803\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 12 18:56:35.682913 containerd[1560]: time="2025-12-12T18:56:35.682107730Z" level=info msg="Container 705aa2d32377696cef0b127c790fbedfb9fc3fdca384f296819b2503e1160cc0: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:56:35.695640 containerd[1560]: time="2025-12-12T18:56:35.695612254Z" level=info msg="CreateContainer within sandbox \"09e5650c20a2633b12455f0a39da3239d4dc82f713f0da4b20c31b65d76cd803\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"705aa2d32377696cef0b127c790fbedfb9fc3fdca384f296819b2503e1160cc0\"" Dec 12 18:56:35.697242 containerd[1560]: time="2025-12-12T18:56:35.697162308Z" level=info msg="StartContainer for \"705aa2d32377696cef0b127c790fbedfb9fc3fdca384f296819b2503e1160cc0\"" Dec 12 18:56:35.698672 containerd[1560]: time="2025-12-12T18:56:35.698647440Z" level=info msg="connecting to shim 705aa2d32377696cef0b127c790fbedfb9fc3fdca384f296819b2503e1160cc0" address="unix:///run/containerd/s/17e9d423db30b5bf8ba0d3797a5c3338a1ea0dda4b133977ad98eb1b359d6277" protocol=ttrpc version=3 Dec 12 18:56:35.729648 systemd[1]: Started cri-containerd-705aa2d32377696cef0b127c790fbedfb9fc3fdca384f296819b2503e1160cc0.scope - libcontainer container 705aa2d32377696cef0b127c790fbedfb9fc3fdca384f296819b2503e1160cc0. 
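The long runs of driver-call.go and plugins.go errors above are the kubelet's FlexVolume probe loop: for every vendor~driver directory under /opt/libexec/kubernetes/kubelet-plugins/volume/exec it executes the driver binary with the single argument init and unmarshals the JSON the driver prints on stdout; since nodeagent~uds/uds does not exist here, each call yields empty output and the decode fails with "unexpected end of JSON input". A minimal sketch of a driver stub that would satisfy the probe, following the FlexVolume call convention (the capabilities value is an assumption, not taken from this log):

// flexvol-stub.go: minimal FlexVolume driver that answers the kubelet's
// `init` probe. It would be installed as the executable the kubelet looks
// for, e.g. /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type driverStatus struct {
	Status       string          `json:"status"` // "Success", "Failure", or "Not supported"
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	out := json.NewEncoder(os.Stdout)
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// The kubelet unmarshals this reply; an empty reply is exactly what
		// produces "unexpected end of JSON input" in driver-call.go above.
		out.Encode(driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}})
		return
	}
	out.Encode(driverStatus{Status: "Not supported", Message: fmt.Sprintf("call %v not implemented", os.Args[1:])})
	os.Exit(1)
}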
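The pod_startup_latency_tracker.go entry above for calico-typha-84d5d8d95f-h4c29 is internally consistent: podStartSLOduration appears to be the end-to-end startup time minus the time spent pulling images. A quick check against the logged values (variable names below are illustrative, not kubelet API):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the pod_startup_latency_tracker.go entry above.
	firstStartedPulling, _ := time.Parse(time.RFC3339Nano, "2025-12-12T18:56:30.334166985Z")
	lastFinishedPulling, _ := time.Parse(time.RFC3339Nano, "2025-12-12T18:56:31.763112523Z")
	e2e := 4602681799 * time.Nanosecond // podStartE2EDuration="4.602681799s"

	pull := lastFinishedPulling.Sub(firstStartedPulling)
	fmt.Println(pull)       // 1.428945538s spent pulling
	fmt.Println(e2e - pull) // 3.173736261s == podStartSLOduration in the log
}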
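The PullImage / "Pulled image ... in 2.082496729s" pairs correspond to containerd resolving the tag, fetching the layers, and unpacking a snapshot. Roughly the same flow can be driven with containerd's Go client (1.x import paths shown; containerd 2.x moved them under a /v2 module), using the image reference and namespace from the log:

package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io") // namespace seen in the log

	// Resolve, fetch, and unpack in one call; the repo tag / repo digest
	// lines above are emitted by the same flow inside containerd's CRI plugin.
	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/cni:v3.30.4", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	size, _ := img.Size(ctx)
	log.Printf("pulled %s (%d bytes)", img.Name(), size)
}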
Dec 12 18:56:35.818564 containerd[1560]: time="2025-12-12T18:56:35.817996751Z" level=info msg="StartContainer for \"705aa2d32377696cef0b127c790fbedfb9fc3fdca384f296819b2503e1160cc0\" returns successfully" Dec 12 18:56:36.356908 containerd[1560]: time="2025-12-12T18:56:36.356864856Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 12 18:56:36.360248 systemd[1]: cri-containerd-705aa2d32377696cef0b127c790fbedfb9fc3fdca384f296819b2503e1160cc0.scope: Deactivated successfully. Dec 12 18:56:36.361298 systemd[1]: cri-containerd-705aa2d32377696cef0b127c790fbedfb9fc3fdca384f296819b2503e1160cc0.scope: Consumed 579ms CPU time, 196.4M memory peak, 171.3M written to disk. Dec 12 18:56:36.361710 containerd[1560]: time="2025-12-12T18:56:36.361368157Z" level=info msg="received container exit event container_id:\"705aa2d32377696cef0b127c790fbedfb9fc3fdca384f296819b2503e1160cc0\" id:\"705aa2d32377696cef0b127c790fbedfb9fc3fdca384f296819b2503e1160cc0\" pid:3483 exited_at:{seconds:1765565796 nanos:361117763}" Dec 12 18:56:36.386942 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-705aa2d32377696cef0b127c790fbedfb9fc3fdca384f296819b2503e1160cc0-rootfs.mount: Deactivated successfully. Dec 12 18:56:36.448247 kubelet[2723]: I1212 18:56:36.445502 2723 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Dec 12 18:56:36.474695 kubelet[2723]: I1212 18:56:36.474228 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfl7m\" (UniqueName: \"kubernetes.io/projected/4791cb7e-5882-4565-8051-3b19c6f35a2b-kube-api-access-bfl7m\") pod \"coredns-66bc5c9577-6nlkm\" (UID: \"4791cb7e-5882-4565-8051-3b19c6f35a2b\") " pod="kube-system/coredns-66bc5c9577-6nlkm" Dec 12 18:56:36.474695 kubelet[2723]: I1212 18:56:36.474491 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4791cb7e-5882-4565-8051-3b19c6f35a2b-config-volume\") pod \"coredns-66bc5c9577-6nlkm\" (UID: \"4791cb7e-5882-4565-8051-3b19c6f35a2b\") " pod="kube-system/coredns-66bc5c9577-6nlkm" Dec 12 18:56:36.485646 systemd[1]: Created slice kubepods-burstable-pod4791cb7e_5882_4565_8051_3b19c6f35a2b.slice - libcontainer container kubepods-burstable-pod4791cb7e_5882_4565_8051_3b19c6f35a2b.slice. 
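The "failed to reload cni configuration" error above is containerd's CRI plugin rescanning /etc/cni/net.d after the WRITE event on calico-kubeconfig: the directory still holds no loadable network config, so the CNI plugin stays uninitialized and csi-node-driver-xcxxg keeps failing with NetworkReady=false until install-cni finishes writing its conflist (Calico conventionally names it 10-calico.conflist; that name does not appear in this log). A sketch of the equivalent directory scan with the CNI project's libcni:

package main

import (
	"fmt"
	"log"

	"github.com/containernetworking/cni/libcni"
)

func main() {
	// Same check containerd's CRI plugin performs: list candidate CNI config
	// files in the conf dir, then try to load them.
	files, err := libcni.ConfFiles("/etc/cni/net.d", []string{".conf", ".conflist", ".json"})
	if err != nil {
		log.Fatal(err)
	}
	if len(files) == 0 {
		// This is the state the log shows while install-cni is still running.
		log.Fatal("no network config found in /etc/cni/net.d")
	}
	for _, f := range files {
		confList, err := libcni.ConfListFromFile(f) // handles *.conflist files
		if err != nil {
			continue
		}
		fmt.Printf("loaded %q with %d plugin(s)\n", confList.Name, len(confList.Plugins))
	}
}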
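The "StartContainer ... returns successfully" / "received container exit event ... exited_at" pairs in this section are the normal lifecycle of a short-lived init container such as flexvol-driver or install-cni. A rough equivalent against containerd's Go client, purely illustrative since these tasks are actually created through CRI (container ID and socket copied from the log):

package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	container, err := client.LoadContainer(ctx, "705aa2d32377696cef0b127c790fbedfb9fc3fdca384f296819b2503e1160cc0")
	if err != nil {
		log.Fatal(err)
	}
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	statusC, err := task.Wait(ctx) // subscribe before Start so the exit event is not missed
	if err != nil {
		log.Fatal(err)
	}
	if err := task.Start(ctx); err != nil { // "StartContainer" in the log
		log.Fatal(err)
	}
	status := <-statusC // "received container exit event ... exited_at"
	code, exitedAt, _ := status.Result()
	log.Printf("exited with %d at %s", code, exitedAt)
}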
Dec 12 18:56:36.491347 kubelet[2723]: E1212 18:56:36.491320 2723 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"whisker-ca-bundle\" is forbidden: User \"system:node:172-237-134-203\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node '172-237-134-203' and this object" logger="UnhandledError" reflector="object-\"calico-system\"/\"whisker-ca-bundle\"" type="*v1.ConfigMap" Dec 12 18:56:36.494019 kubelet[2723]: E1212 18:56:36.492832 2723 reflector.go:205] "Failed to watch" err="failed to list *v1.Secret: secrets \"whisker-backend-key-pair\" is forbidden: User \"system:node:172-237-134-203\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node '172-237-134-203' and this object" logger="UnhandledError" reflector="object-\"calico-system\"/\"whisker-backend-key-pair\"" type="*v1.Secret" Dec 12 18:56:36.503230 systemd[1]: Created slice kubepods-besteffort-pod4a476f3c_7ec6_4ae1_8d1f_4e0cf5a9a78e.slice - libcontainer container kubepods-besteffort-pod4a476f3c_7ec6_4ae1_8d1f_4e0cf5a9a78e.slice. Dec 12 18:56:36.512264 systemd[1]: Created slice kubepods-besteffort-podf4996cbf_b45a_424a_8397_b3ebce94b347.slice - libcontainer container kubepods-besteffort-podf4996cbf_b45a_424a_8397_b3ebce94b347.slice. Dec 12 18:56:36.526824 systemd[1]: Created slice kubepods-besteffort-podb9c97883_cc24_4c44_982c_86a4cdeab0b3.slice - libcontainer container kubepods-besteffort-podb9c97883_cc24_4c44_982c_86a4cdeab0b3.slice. Dec 12 18:56:36.541202 systemd[1]: Created slice kubepods-burstable-pod831d14fd_b949_458a_b2cb_c437fcbbb619.slice - libcontainer container kubepods-burstable-pod831d14fd_b949_458a_b2cb_c437fcbbb619.slice. Dec 12 18:56:36.548763 systemd[1]: Created slice kubepods-besteffort-podc6117e7e_1835_4bc6_967b_fc9429542c7a.slice - libcontainer container kubepods-besteffort-podc6117e7e_1835_4bc6_967b_fc9429542c7a.slice. Dec 12 18:56:36.558353 systemd[1]: Created slice kubepods-besteffort-pod7a1fbc12_082d_4cf2_b63a_aaa492c3ca96.slice - libcontainer container kubepods-besteffort-pod7a1fbc12_082d_4cf2_b63a_aaa492c3ca96.slice. 
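The two reflector.go failures above are the node authorizer and NodeRestriction at work: a kubelet may only read ConfigMaps and Secrets that are referenced by pods already bound to its node, so the watches on whisker-ca-bundle and whisker-backend-key-pair are rejected until the whisker pod is actually scheduled to 172-237-134-203. Client-side, that class of error is distinguishable with apimachinery's helpers; a hedged sketch reusing the names from the log:

package main

import (
	"context"
	"log"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	_, err = cs.CoreV1().ConfigMaps("calico-system").Get(context.TODO(), "whisker-ca-bundle", metav1.GetOptions{})
	if apierrors.IsForbidden(err) {
		// Matches the log: no relationship yet between this node and the object.
		log.Printf("forbidden, retry after the pod lands on this node: %v", err)
	}
}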
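The recurring dns.go "Nameserver limits exceeded" entries throughout this section mean the node's resolv.conf lists more nameservers than the kubelet will copy into pod resolv.conf files; the limit is three (the glibc resolver maximum), which is why exactly 172.232.0.13, 172.232.0.22 and 172.232.0.9 survive as the applied nameserver line. A simplified sketch of that truncation (the kubelet's real parsing lives in its DNS configurer and is more thorough):

package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

const maxDNSNameservers = 3 // glibc resolver limit the kubelet enforces

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	var nameservers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			nameservers = append(nameservers, fields[1])
		}
	}
	if len(nameservers) > maxDNSNameservers {
		// This is the condition behind the dns.go warning in the log.
		fmt.Printf("Nameserver limits exceeded, applying first %d: %s\n",
			maxDNSNameservers, strings.Join(nameservers[:maxDNSNameservers], " "))
	}
}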
Dec 12 18:56:36.575612 kubelet[2723]: I1212 18:56:36.575573 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48wsw\" (UniqueName: \"kubernetes.io/projected/831d14fd-b949-458a-b2cb-c437fcbbb619-kube-api-access-48wsw\") pod \"coredns-66bc5c9577-2gjxl\" (UID: \"831d14fd-b949-458a-b2cb-c437fcbbb619\") " pod="kube-system/coredns-66bc5c9577-2gjxl" Dec 12 18:56:36.576639 kubelet[2723]: I1212 18:56:36.575619 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mklgz\" (UniqueName: \"kubernetes.io/projected/4a476f3c-7ec6-4ae1-8d1f-4e0cf5a9a78e-kube-api-access-mklgz\") pod \"whisker-7fdc746644-rsg6h\" (UID: \"4a476f3c-7ec6-4ae1-8d1f-4e0cf5a9a78e\") " pod="calico-system/whisker-7fdc746644-rsg6h" Dec 12 18:56:36.576639 kubelet[2723]: I1212 18:56:36.575662 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a1fbc12-082d-4cf2-b63a-aaa492c3ca96-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-2wmh6\" (UID: \"7a1fbc12-082d-4cf2-b63a-aaa492c3ca96\") " pod="calico-system/goldmane-7c778bb748-2wmh6" Dec 12 18:56:36.576639 kubelet[2723]: I1212 18:56:36.575685 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbsm2\" (UniqueName: \"kubernetes.io/projected/f4996cbf-b45a-424a-8397-b3ebce94b347-kube-api-access-hbsm2\") pod \"calico-apiserver-568b9b9d99-4srkk\" (UID: \"f4996cbf-b45a-424a-8397-b3ebce94b347\") " pod="calico-apiserver/calico-apiserver-568b9b9d99-4srkk" Dec 12 18:56:36.576639 kubelet[2723]: I1212 18:56:36.575731 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lbdb\" (UniqueName: \"kubernetes.io/projected/b9c97883-cc24-4c44-982c-86a4cdeab0b3-kube-api-access-7lbdb\") pod \"calico-kube-controllers-684d7d59f5-x5wzd\" (UID: \"b9c97883-cc24-4c44-982c-86a4cdeab0b3\") " pod="calico-system/calico-kube-controllers-684d7d59f5-x5wzd" Dec 12 18:56:36.576639 kubelet[2723]: I1212 18:56:36.575752 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/7a1fbc12-082d-4cf2-b63a-aaa492c3ca96-goldmane-key-pair\") pod \"goldmane-7c778bb748-2wmh6\" (UID: \"7a1fbc12-082d-4cf2-b63a-aaa492c3ca96\") " pod="calico-system/goldmane-7c778bb748-2wmh6" Dec 12 18:56:36.576784 kubelet[2723]: I1212 18:56:36.575778 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnzdc\" (UniqueName: \"kubernetes.io/projected/7a1fbc12-082d-4cf2-b63a-aaa492c3ca96-kube-api-access-bnzdc\") pod \"goldmane-7c778bb748-2wmh6\" (UID: \"7a1fbc12-082d-4cf2-b63a-aaa492c3ca96\") " pod="calico-system/goldmane-7c778bb748-2wmh6" Dec 12 18:56:36.576784 kubelet[2723]: I1212 18:56:36.575819 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zjfh\" (UniqueName: \"kubernetes.io/projected/c6117e7e-1835-4bc6-967b-fc9429542c7a-kube-api-access-8zjfh\") pod \"calico-apiserver-568b9b9d99-flgkd\" (UID: \"c6117e7e-1835-4bc6-967b-fc9429542c7a\") " pod="calico-apiserver/calico-apiserver-568b9b9d99-flgkd" Dec 12 18:56:36.576784 kubelet[2723]: I1212 18:56:36.575841 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/configmap/831d14fd-b949-458a-b2cb-c437fcbbb619-config-volume\") pod \"coredns-66bc5c9577-2gjxl\" (UID: \"831d14fd-b949-458a-b2cb-c437fcbbb619\") " pod="kube-system/coredns-66bc5c9577-2gjxl" Dec 12 18:56:36.576784 kubelet[2723]: I1212 18:56:36.575867 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b9c97883-cc24-4c44-982c-86a4cdeab0b3-tigera-ca-bundle\") pod \"calico-kube-controllers-684d7d59f5-x5wzd\" (UID: \"b9c97883-cc24-4c44-982c-86a4cdeab0b3\") " pod="calico-system/calico-kube-controllers-684d7d59f5-x5wzd" Dec 12 18:56:36.576784 kubelet[2723]: I1212 18:56:36.575904 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a1fbc12-082d-4cf2-b63a-aaa492c3ca96-config\") pod \"goldmane-7c778bb748-2wmh6\" (UID: \"7a1fbc12-082d-4cf2-b63a-aaa492c3ca96\") " pod="calico-system/goldmane-7c778bb748-2wmh6" Dec 12 18:56:36.576894 kubelet[2723]: I1212 18:56:36.575924 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f4996cbf-b45a-424a-8397-b3ebce94b347-calico-apiserver-certs\") pod \"calico-apiserver-568b9b9d99-4srkk\" (UID: \"f4996cbf-b45a-424a-8397-b3ebce94b347\") " pod="calico-apiserver/calico-apiserver-568b9b9d99-4srkk" Dec 12 18:56:36.576894 kubelet[2723]: I1212 18:56:36.575970 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c6117e7e-1835-4bc6-967b-fc9429542c7a-calico-apiserver-certs\") pod \"calico-apiserver-568b9b9d99-flgkd\" (UID: \"c6117e7e-1835-4bc6-967b-fc9429542c7a\") " pod="calico-apiserver/calico-apiserver-568b9b9d99-flgkd" Dec 12 18:56:36.576894 kubelet[2723]: I1212 18:56:36.575993 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4a476f3c-7ec6-4ae1-8d1f-4e0cf5a9a78e-whisker-backend-key-pair\") pod \"whisker-7fdc746644-rsg6h\" (UID: \"4a476f3c-7ec6-4ae1-8d1f-4e0cf5a9a78e\") " pod="calico-system/whisker-7fdc746644-rsg6h" Dec 12 18:56:36.576894 kubelet[2723]: I1212 18:56:36.576052 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a476f3c-7ec6-4ae1-8d1f-4e0cf5a9a78e-whisker-ca-bundle\") pod \"whisker-7fdc746644-rsg6h\" (UID: \"4a476f3c-7ec6-4ae1-8d1f-4e0cf5a9a78e\") " pod="calico-system/whisker-7fdc746644-rsg6h" Dec 12 18:56:36.599899 kubelet[2723]: E1212 18:56:36.599760 2723 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:56:36.602603 containerd[1560]: time="2025-12-12T18:56:36.602339508Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 12 18:56:36.796556 kubelet[2723]: E1212 18:56:36.796412 2723 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:56:36.797177 containerd[1560]: time="2025-12-12T18:56:36.797137550Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-66bc5c9577-6nlkm,Uid:4791cb7e-5882-4565-8051-3b19c6f35a2b,Namespace:kube-system,Attempt:0,}" Dec 12 18:56:36.821807 containerd[1560]: time="2025-12-12T18:56:36.821621870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-568b9b9d99-4srkk,Uid:f4996cbf-b45a-424a-8397-b3ebce94b347,Namespace:calico-apiserver,Attempt:0,}" Dec 12 18:56:36.842704 containerd[1560]: time="2025-12-12T18:56:36.842394648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-684d7d59f5-x5wzd,Uid:b9c97883-cc24-4c44-982c-86a4cdeab0b3,Namespace:calico-system,Attempt:0,}" Dec 12 18:56:36.847013 kubelet[2723]: E1212 18:56:36.846791 2723 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:56:36.848292 containerd[1560]: time="2025-12-12T18:56:36.848270760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-2gjxl,Uid:831d14fd-b949-458a-b2cb-c437fcbbb619,Namespace:kube-system,Attempt:0,}" Dec 12 18:56:36.856655 containerd[1560]: time="2025-12-12T18:56:36.856227437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-568b9b9d99-flgkd,Uid:c6117e7e-1835-4bc6-967b-fc9429542c7a,Namespace:calico-apiserver,Attempt:0,}" Dec 12 18:56:36.863370 containerd[1560]: time="2025-12-12T18:56:36.863350208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-2wmh6,Uid:7a1fbc12-082d-4cf2-b63a-aaa492c3ca96,Namespace:calico-system,Attempt:0,}" Dec 12 18:56:36.946243 containerd[1560]: time="2025-12-12T18:56:36.946200369Z" level=error msg="Failed to destroy network for sandbox \"534350deafc0cce63ace42ae3747ad5cf2ee15dc8fa2ec148801aa65981462e5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:56:36.946808 containerd[1560]: time="2025-12-12T18:56:36.946521225Z" level=error msg="Failed to destroy network for sandbox \"1798af225dc7bf6a9579b6ff00abad73bc77d57fab0a583ff18727e81e87adc5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:56:36.950936 containerd[1560]: time="2025-12-12T18:56:36.950906825Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-568b9b9d99-4srkk,Uid:f4996cbf-b45a-424a-8397-b3ebce94b347,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"534350deafc0cce63ace42ae3747ad5cf2ee15dc8fa2ec148801aa65981462e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:56:36.951887 kubelet[2723]: E1212 18:56:36.951852 2723 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"534350deafc0cce63ace42ae3747ad5cf2ee15dc8fa2ec148801aa65981462e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:56:36.952013 kubelet[2723]: E1212 18:56:36.951995 2723 kuberuntime_sandbox.go:71] "Failed to create sandbox 
for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"534350deafc0cce63ace42ae3747ad5cf2ee15dc8fa2ec148801aa65981462e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-568b9b9d99-4srkk" Dec 12 18:56:36.952432 kubelet[2723]: E1212 18:56:36.952169 2723 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"534350deafc0cce63ace42ae3747ad5cf2ee15dc8fa2ec148801aa65981462e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-568b9b9d99-4srkk" Dec 12 18:56:36.952432 kubelet[2723]: E1212 18:56:36.952229 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-568b9b9d99-4srkk_calico-apiserver(f4996cbf-b45a-424a-8397-b3ebce94b347)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-568b9b9d99-4srkk_calico-apiserver(f4996cbf-b45a-424a-8397-b3ebce94b347)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"534350deafc0cce63ace42ae3747ad5cf2ee15dc8fa2ec148801aa65981462e5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-568b9b9d99-4srkk" podUID="f4996cbf-b45a-424a-8397-b3ebce94b347" Dec 12 18:56:36.953072 containerd[1560]: time="2025-12-12T18:56:36.953048341Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-6nlkm,Uid:4791cb7e-5882-4565-8051-3b19c6f35a2b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1798af225dc7bf6a9579b6ff00abad73bc77d57fab0a583ff18727e81e87adc5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:56:36.953263 kubelet[2723]: E1212 18:56:36.953243 2723 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1798af225dc7bf6a9579b6ff00abad73bc77d57fab0a583ff18727e81e87adc5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:56:36.953772 kubelet[2723]: E1212 18:56:36.953677 2723 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1798af225dc7bf6a9579b6ff00abad73bc77d57fab0a583ff18727e81e87adc5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-6nlkm" Dec 12 18:56:36.953772 kubelet[2723]: E1212 18:56:36.953696 2723 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1798af225dc7bf6a9579b6ff00abad73bc77d57fab0a583ff18727e81e87adc5\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-6nlkm" Dec 12 18:56:36.953772 kubelet[2723]: E1212 18:56:36.953734 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-6nlkm_kube-system(4791cb7e-5882-4565-8051-3b19c6f35a2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-6nlkm_kube-system(4791cb7e-5882-4565-8051-3b19c6f35a2b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1798af225dc7bf6a9579b6ff00abad73bc77d57fab0a583ff18727e81e87adc5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-6nlkm" podUID="4791cb7e-5882-4565-8051-3b19c6f35a2b" Dec 12 18:56:37.018354 containerd[1560]: time="2025-12-12T18:56:37.018289423Z" level=error msg="Failed to destroy network for sandbox \"fed3505f71c35324312df298698483b4ff5dfc112368866b39d5c3bb12bc4397\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:56:37.018534 containerd[1560]: time="2025-12-12T18:56:37.018488757Z" level=error msg="Failed to destroy network for sandbox \"243c8c27b7c697b6080950dac806e6d11ce08de3f7b9f4c962b463f42dc4de78\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:56:37.019350 containerd[1560]: time="2025-12-12T18:56:37.019230882Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-2gjxl,Uid:831d14fd-b949-458a-b2cb-c437fcbbb619,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fed3505f71c35324312df298698483b4ff5dfc112368866b39d5c3bb12bc4397\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:56:37.020061 containerd[1560]: time="2025-12-12T18:56:37.019704712Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-684d7d59f5-x5wzd,Uid:b9c97883-cc24-4c44-982c-86a4cdeab0b3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"243c8c27b7c697b6080950dac806e6d11ce08de3f7b9f4c962b463f42dc4de78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:56:37.020122 kubelet[2723]: E1212 18:56:37.019581 2723 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fed3505f71c35324312df298698483b4ff5dfc112368866b39d5c3bb12bc4397\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:56:37.020122 kubelet[2723]: E1212 18:56:37.019631 2723 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"fed3505f71c35324312df298698483b4ff5dfc112368866b39d5c3bb12bc4397\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-2gjxl" Dec 12 18:56:37.020122 kubelet[2723]: E1212 18:56:37.019654 2723 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fed3505f71c35324312df298698483b4ff5dfc112368866b39d5c3bb12bc4397\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-2gjxl" Dec 12 18:56:37.020214 kubelet[2723]: E1212 18:56:37.019704 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-2gjxl_kube-system(831d14fd-b949-458a-b2cb-c437fcbbb619)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-2gjxl_kube-system(831d14fd-b949-458a-b2cb-c437fcbbb619)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fed3505f71c35324312df298698483b4ff5dfc112368866b39d5c3bb12bc4397\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-2gjxl" podUID="831d14fd-b949-458a-b2cb-c437fcbbb619" Dec 12 18:56:37.021608 containerd[1560]: time="2025-12-12T18:56:37.020557116Z" level=error msg="Failed to destroy network for sandbox \"fc1a56a7de144d29a59594e3c2e47abd16e5e1ef434db3b9d7ff469fb4637a5d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:56:37.021608 containerd[1560]: time="2025-12-12T18:56:37.021218087Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-568b9b9d99-flgkd,Uid:c6117e7e-1835-4bc6-967b-fc9429542c7a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc1a56a7de144d29a59594e3c2e47abd16e5e1ef434db3b9d7ff469fb4637a5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:56:37.021856 kubelet[2723]: E1212 18:56:37.020749 2723 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"243c8c27b7c697b6080950dac806e6d11ce08de3f7b9f4c962b463f42dc4de78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:56:37.021856 kubelet[2723]: E1212 18:56:37.020774 2723 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"243c8c27b7c697b6080950dac806e6d11ce08de3f7b9f4c962b463f42dc4de78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-684d7d59f5-x5wzd" Dec 12 18:56:37.021856 
kubelet[2723]: E1212 18:56:37.020787 2723 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"243c8c27b7c697b6080950dac806e6d11ce08de3f7b9f4c962b463f42dc4de78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-684d7d59f5-x5wzd" Dec 12 18:56:37.021981 kubelet[2723]: E1212 18:56:37.020816 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-684d7d59f5-x5wzd_calico-system(b9c97883-cc24-4c44-982c-86a4cdeab0b3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-684d7d59f5-x5wzd_calico-system(b9c97883-cc24-4c44-982c-86a4cdeab0b3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"243c8c27b7c697b6080950dac806e6d11ce08de3f7b9f4c962b463f42dc4de78\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-684d7d59f5-x5wzd" podUID="b9c97883-cc24-4c44-982c-86a4cdeab0b3" Dec 12 18:56:37.021981 kubelet[2723]: E1212 18:56:37.021357 2723 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc1a56a7de144d29a59594e3c2e47abd16e5e1ef434db3b9d7ff469fb4637a5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:56:37.021981 kubelet[2723]: E1212 18:56:37.021378 2723 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc1a56a7de144d29a59594e3c2e47abd16e5e1ef434db3b9d7ff469fb4637a5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-568b9b9d99-flgkd" Dec 12 18:56:37.022183 kubelet[2723]: E1212 18:56:37.021390 2723 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc1a56a7de144d29a59594e3c2e47abd16e5e1ef434db3b9d7ff469fb4637a5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-568b9b9d99-flgkd" Dec 12 18:56:37.022183 kubelet[2723]: E1212 18:56:37.021416 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-568b9b9d99-flgkd_calico-apiserver(c6117e7e-1835-4bc6-967b-fc9429542c7a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-568b9b9d99-flgkd_calico-apiserver(c6117e7e-1835-4bc6-967b-fc9429542c7a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fc1a56a7de144d29a59594e3c2e47abd16e5e1ef434db3b9d7ff469fb4637a5d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-568b9b9d99-flgkd" podUID="c6117e7e-1835-4bc6-967b-fc9429542c7a" Dec 12 18:56:37.031745 containerd[1560]: time="2025-12-12T18:56:37.031712606Z" level=error msg="Failed to destroy network for sandbox \"6b7d9c239e9ac8794b4b5fe07cac075f109c6a92cb683b2292bcf70dddab086f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:56:37.032434 containerd[1560]: time="2025-12-12T18:56:37.032406143Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-2wmh6,Uid:7a1fbc12-082d-4cf2-b63a-aaa492c3ca96,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b7d9c239e9ac8794b4b5fe07cac075f109c6a92cb683b2292bcf70dddab086f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:56:37.032632 kubelet[2723]: E1212 18:56:37.032598 2723 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b7d9c239e9ac8794b4b5fe07cac075f109c6a92cb683b2292bcf70dddab086f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:56:37.032700 kubelet[2723]: E1212 18:56:37.032648 2723 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b7d9c239e9ac8794b4b5fe07cac075f109c6a92cb683b2292bcf70dddab086f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-2wmh6" Dec 12 18:56:37.032700 kubelet[2723]: E1212 18:56:37.032666 2723 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b7d9c239e9ac8794b4b5fe07cac075f109c6a92cb683b2292bcf70dddab086f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-2wmh6" Dec 12 18:56:37.032772 kubelet[2723]: E1212 18:56:37.032717 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-2wmh6_calico-system(7a1fbc12-082d-4cf2-b63a-aaa492c3ca96)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-2wmh6_calico-system(7a1fbc12-082d-4cf2-b63a-aaa492c3ca96)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6b7d9c239e9ac8794b4b5fe07cac075f109c6a92cb683b2292bcf70dddab086f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-2wmh6" podUID="7a1fbc12-082d-4cf2-b63a-aaa492c3ca96" Dec 12 18:56:37.502417 systemd[1]: Created slice kubepods-besteffort-pod7adfcf36_f09b_4802_a329_cb264c08cc5c.slice - libcontainer container kubepods-besteffort-pod7adfcf36_f09b_4802_a329_cb264c08cc5c.slice. 
Dec 12 18:56:37.509089 containerd[1560]: time="2025-12-12T18:56:37.509032069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xcxxg,Uid:7adfcf36-f09b-4802-a329-cb264c08cc5c,Namespace:calico-system,Attempt:0,}" Dec 12 18:56:37.596928 containerd[1560]: time="2025-12-12T18:56:37.596885847Z" level=error msg="Failed to destroy network for sandbox \"7672bd96e0b2b23f932b526fc0a9b599c3fc3b89f1bd31677d4a0727fbc5bb18\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:56:37.598274 containerd[1560]: time="2025-12-12T18:56:37.598234714Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xcxxg,Uid:7adfcf36-f09b-4802-a329-cb264c08cc5c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7672bd96e0b2b23f932b526fc0a9b599c3fc3b89f1bd31677d4a0727fbc5bb18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:56:37.598729 kubelet[2723]: E1212 18:56:37.598435 2723 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7672bd96e0b2b23f932b526fc0a9b599c3fc3b89f1bd31677d4a0727fbc5bb18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:56:37.598729 kubelet[2723]: E1212 18:56:37.598528 2723 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7672bd96e0b2b23f932b526fc0a9b599c3fc3b89f1bd31677d4a0727fbc5bb18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xcxxg" Dec 12 18:56:37.598729 kubelet[2723]: E1212 18:56:37.598545 2723 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7672bd96e0b2b23f932b526fc0a9b599c3fc3b89f1bd31677d4a0727fbc5bb18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xcxxg" Dec 12 18:56:37.599191 kubelet[2723]: E1212 18:56:37.598603 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xcxxg_calico-system(7adfcf36-f09b-4802-a329-cb264c08cc5c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xcxxg_calico-system(7adfcf36-f09b-4802-a329-cb264c08cc5c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7672bd96e0b2b23f932b526fc0a9b599c3fc3b89f1bd31677d4a0727fbc5bb18\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xcxxg" podUID="7adfcf36-f09b-4802-a329-cb264c08cc5c" Dec 12 18:56:37.711263 containerd[1560]: time="2025-12-12T18:56:37.711226459Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-7fdc746644-rsg6h,Uid:4a476f3c-7ec6-4ae1-8d1f-4e0cf5a9a78e,Namespace:calico-system,Attempt:0,}" Dec 12 18:56:37.780522 containerd[1560]: time="2025-12-12T18:56:37.779947352Z" level=error msg="Failed to destroy network for sandbox \"970d561855e916690ad73302e7edd548244eee1196c64cb52740f4ad786549e0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:56:37.783312 systemd[1]: run-netns-cni\x2d37dee752\x2d4501\x2d974e\x2db3c6\x2d2b5bdac94dc6.mount: Deactivated successfully. Dec 12 18:56:37.783707 containerd[1560]: time="2025-12-12T18:56:37.783663308Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7fdc746644-rsg6h,Uid:4a476f3c-7ec6-4ae1-8d1f-4e0cf5a9a78e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"970d561855e916690ad73302e7edd548244eee1196c64cb52740f4ad786549e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:56:37.784435 kubelet[2723]: E1212 18:56:37.784318 2723 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"970d561855e916690ad73302e7edd548244eee1196c64cb52740f4ad786549e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:56:37.784636 kubelet[2723]: E1212 18:56:37.784563 2723 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"970d561855e916690ad73302e7edd548244eee1196c64cb52740f4ad786549e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7fdc746644-rsg6h" Dec 12 18:56:37.784636 kubelet[2723]: E1212 18:56:37.784593 2723 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"970d561855e916690ad73302e7edd548244eee1196c64cb52740f4ad786549e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7fdc746644-rsg6h" Dec 12 18:56:37.785529 kubelet[2723]: E1212 18:56:37.784852 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7fdc746644-rsg6h_calico-system(4a476f3c-7ec6-4ae1-8d1f-4e0cf5a9a78e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7fdc746644-rsg6h_calico-system(4a476f3c-7ec6-4ae1-8d1f-4e0cf5a9a78e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"970d561855e916690ad73302e7edd548244eee1196c64cb52740f4ad786549e0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7fdc746644-rsg6h" podUID="4a476f3c-7ec6-4ae1-8d1f-4e0cf5a9a78e" Dec 12 18:56:40.664252 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2072801388.mount: Deactivated successfully. Dec 12 18:56:40.689042 containerd[1560]: time="2025-12-12T18:56:40.689012749Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:56:40.690033 containerd[1560]: time="2025-12-12T18:56:40.689911434Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Dec 12 18:56:40.690703 containerd[1560]: time="2025-12-12T18:56:40.690677138Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:56:40.692159 containerd[1560]: time="2025-12-12T18:56:40.692134877Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:56:40.692728 containerd[1560]: time="2025-12-12T18:56:40.692707523Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 4.090336139s" Dec 12 18:56:40.692801 containerd[1560]: time="2025-12-12T18:56:40.692787465Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Dec 12 18:56:40.712150 containerd[1560]: time="2025-12-12T18:56:40.712121872Z" level=info msg="CreateContainer within sandbox \"09e5650c20a2633b12455f0a39da3239d4dc82f713f0da4b20c31b65d76cd803\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 12 18:56:40.717786 containerd[1560]: time="2025-12-12T18:56:40.717767548Z" level=info msg="Container b8c1f1546a7daaed3b2bf9476cefbdcad6668bc1be866c05305d85000143ead2: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:56:40.725996 containerd[1560]: time="2025-12-12T18:56:40.725972828Z" level=info msg="CreateContainer within sandbox \"09e5650c20a2633b12455f0a39da3239d4dc82f713f0da4b20c31b65d76cd803\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b8c1f1546a7daaed3b2bf9476cefbdcad6668bc1be866c05305d85000143ead2\"" Dec 12 18:56:40.727483 containerd[1560]: time="2025-12-12T18:56:40.727432277Z" level=info msg="StartContainer for \"b8c1f1546a7daaed3b2bf9476cefbdcad6668bc1be866c05305d85000143ead2\"" Dec 12 18:56:40.729568 containerd[1560]: time="2025-12-12T18:56:40.729441618Z" level=info msg="connecting to shim b8c1f1546a7daaed3b2bf9476cefbdcad6668bc1be866c05305d85000143ead2" address="unix:///run/containerd/s/17e9d423db30b5bf8ba0d3797a5c3338a1ea0dda4b133977ad98eb1b359d6277" protocol=ttrpc version=3 Dec 12 18:56:40.775591 systemd[1]: Started cri-containerd-b8c1f1546a7daaed3b2bf9476cefbdcad6668bc1be866c05305d85000143ead2.scope - libcontainer container b8c1f1546a7daaed3b2bf9476cefbdcad6668bc1be866c05305d85000143ead2. Dec 12 18:56:40.869450 containerd[1560]: time="2025-12-12T18:56:40.869363699Z" level=info msg="StartContainer for \"b8c1f1546a7daaed3b2bf9476cefbdcad6668bc1be866c05305d85000143ead2\" returns successfully" Dec 12 18:56:40.958506 kernel: wireguard: WireGuard 1.0.0 loaded. 
See www.wireguard.com for information. Dec 12 18:56:40.958629 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Dec 12 18:56:41.206506 kubelet[2723]: I1212 18:56:41.205598 2723 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4a476f3c-7ec6-4ae1-8d1f-4e0cf5a9a78e-whisker-backend-key-pair\") pod \"4a476f3c-7ec6-4ae1-8d1f-4e0cf5a9a78e\" (UID: \"4a476f3c-7ec6-4ae1-8d1f-4e0cf5a9a78e\") " Dec 12 18:56:41.206506 kubelet[2723]: I1212 18:56:41.205638 2723 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mklgz\" (UniqueName: \"kubernetes.io/projected/4a476f3c-7ec6-4ae1-8d1f-4e0cf5a9a78e-kube-api-access-mklgz\") pod \"4a476f3c-7ec6-4ae1-8d1f-4e0cf5a9a78e\" (UID: \"4a476f3c-7ec6-4ae1-8d1f-4e0cf5a9a78e\") " Dec 12 18:56:41.206506 kubelet[2723]: I1212 18:56:41.205669 2723 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a476f3c-7ec6-4ae1-8d1f-4e0cf5a9a78e-whisker-ca-bundle\") pod \"4a476f3c-7ec6-4ae1-8d1f-4e0cf5a9a78e\" (UID: \"4a476f3c-7ec6-4ae1-8d1f-4e0cf5a9a78e\") " Dec 12 18:56:41.206506 kubelet[2723]: I1212 18:56:41.206082 2723 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a476f3c-7ec6-4ae1-8d1f-4e0cf5a9a78e-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "4a476f3c-7ec6-4ae1-8d1f-4e0cf5a9a78e" (UID: "4a476f3c-7ec6-4ae1-8d1f-4e0cf5a9a78e"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 18:56:41.212979 kubelet[2723]: I1212 18:56:41.212759 2723 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a476f3c-7ec6-4ae1-8d1f-4e0cf5a9a78e-kube-api-access-mklgz" (OuterVolumeSpecName: "kube-api-access-mklgz") pod "4a476f3c-7ec6-4ae1-8d1f-4e0cf5a9a78e" (UID: "4a476f3c-7ec6-4ae1-8d1f-4e0cf5a9a78e"). InnerVolumeSpecName "kube-api-access-mklgz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 18:56:41.213607 kubelet[2723]: I1212 18:56:41.213586 2723 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a476f3c-7ec6-4ae1-8d1f-4e0cf5a9a78e-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "4a476f3c-7ec6-4ae1-8d1f-4e0cf5a9a78e" (UID: "4a476f3c-7ec6-4ae1-8d1f-4e0cf5a9a78e"). InnerVolumeSpecName "whisker-backend-key-pair".
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 18:56:41.306806 kubelet[2723]: I1212 18:56:41.306772 2723 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a476f3c-7ec6-4ae1-8d1f-4e0cf5a9a78e-whisker-ca-bundle\") on node \"172-237-134-203\" DevicePath \"\"" Dec 12 18:56:41.307002 kubelet[2723]: I1212 18:56:41.306969 2723 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4a476f3c-7ec6-4ae1-8d1f-4e0cf5a9a78e-whisker-backend-key-pair\") on node \"172-237-134-203\" DevicePath \"\"" Dec 12 18:56:41.307002 kubelet[2723]: I1212 18:56:41.306985 2723 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mklgz\" (UniqueName: \"kubernetes.io/projected/4a476f3c-7ec6-4ae1-8d1f-4e0cf5a9a78e-kube-api-access-mklgz\") on node \"172-237-134-203\" DevicePath \"\"" Dec 12 18:56:41.619596 kubelet[2723]: E1212 18:56:41.619195 2723 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:56:41.623537 systemd[1]: Removed slice kubepods-besteffort-pod4a476f3c_7ec6_4ae1_8d1f_4e0cf5a9a78e.slice - libcontainer container kubepods-besteffort-pod4a476f3c_7ec6_4ae1_8d1f_4e0cf5a9a78e.slice. Dec 12 18:56:41.636038 kubelet[2723]: I1212 18:56:41.635992 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-kk4pg" podStartSLOduration=1.376822609 podStartE2EDuration="11.635956729s" podCreationTimestamp="2025-12-12 18:56:30 +0000 UTC" firstStartedPulling="2025-12-12 18:56:30.434318344 +0000 UTC m=+20.031420752" lastFinishedPulling="2025-12-12 18:56:40.693452464 +0000 UTC m=+30.290554872" observedRunningTime="2025-12-12 18:56:41.633692502 +0000 UTC m=+31.230794910" watchObservedRunningTime="2025-12-12 18:56:41.635956729 +0000 UTC m=+31.233059137" Dec 12 18:56:41.668602 systemd[1]: var-lib-kubelet-pods-4a476f3c\x2d7ec6\x2d4ae1\x2d8d1f\x2d4e0cf5a9a78e-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Dec 12 18:56:41.668713 systemd[1]: var-lib-kubelet-pods-4a476f3c\x2d7ec6\x2d4ae1\x2d8d1f\x2d4e0cf5a9a78e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmklgz.mount: Deactivated successfully. Dec 12 18:56:41.684318 systemd[1]: Created slice kubepods-besteffort-poda91e52ae_48ed_4331_916f_65e4537bb807.slice - libcontainer container kubepods-besteffort-poda91e52ae_48ed_4331_916f_65e4537bb807.slice. 
Dec 12 18:56:41.810573 kubelet[2723]: I1212 18:56:41.810417 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a91e52ae-48ed-4331-916f-65e4537bb807-whisker-backend-key-pair\") pod \"whisker-86b45df6f8-cmnpq\" (UID: \"a91e52ae-48ed-4331-916f-65e4537bb807\") " pod="calico-system/whisker-86b45df6f8-cmnpq" Dec 12 18:56:41.810573 kubelet[2723]: I1212 18:56:41.810479 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a91e52ae-48ed-4331-916f-65e4537bb807-whisker-ca-bundle\") pod \"whisker-86b45df6f8-cmnpq\" (UID: \"a91e52ae-48ed-4331-916f-65e4537bb807\") " pod="calico-system/whisker-86b45df6f8-cmnpq" Dec 12 18:56:41.810573 kubelet[2723]: I1212 18:56:41.810535 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgbmd\" (UniqueName: \"kubernetes.io/projected/a91e52ae-48ed-4331-916f-65e4537bb807-kube-api-access-cgbmd\") pod \"whisker-86b45df6f8-cmnpq\" (UID: \"a91e52ae-48ed-4331-916f-65e4537bb807\") " pod="calico-system/whisker-86b45df6f8-cmnpq" Dec 12 18:56:41.989511 containerd[1560]: time="2025-12-12T18:56:41.989453647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-86b45df6f8-cmnpq,Uid:a91e52ae-48ed-4331-916f-65e4537bb807,Namespace:calico-system,Attempt:0,}" Dec 12 18:56:42.108201 systemd-networkd[1448]: calia27235d4984: Link UP Dec 12 18:56:42.109992 systemd-networkd[1448]: calia27235d4984: Gained carrier Dec 12 18:56:42.122849 containerd[1560]: 2025-12-12 18:56:42.012 [INFO][3815] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 18:56:42.122849 containerd[1560]: 2025-12-12 18:56:42.047 [INFO][3815] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--237--134--203-k8s-whisker--86b45df6f8--cmnpq-eth0 whisker-86b45df6f8- calico-system a91e52ae-48ed-4331-916f-65e4537bb807 936 0 2025-12-12 18:56:41 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:86b45df6f8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s 172-237-134-203 whisker-86b45df6f8-cmnpq eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calia27235d4984 [] [] <nil>}} ContainerID="9452498ee823751ff9a51e63f07cb4631cc6598150a8e4e60896c388676134a1" Namespace="calico-system" Pod="whisker-86b45df6f8-cmnpq" WorkloadEndpoint="172--237--134--203-k8s-whisker--86b45df6f8--cmnpq-" Dec 12 18:56:42.122849 containerd[1560]: 2025-12-12 18:56:42.048 [INFO][3815] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9452498ee823751ff9a51e63f07cb4631cc6598150a8e4e60896c388676134a1" Namespace="calico-system" Pod="whisker-86b45df6f8-cmnpq" WorkloadEndpoint="172--237--134--203-k8s-whisker--86b45df6f8--cmnpq-eth0" Dec 12 18:56:42.122849 containerd[1560]: 2025-12-12 18:56:42.069 [INFO][3825] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9452498ee823751ff9a51e63f07cb4631cc6598150a8e4e60896c388676134a1" HandleID="k8s-pod-network.9452498ee823751ff9a51e63f07cb4631cc6598150a8e4e60896c388676134a1" Workload="172--237--134--203-k8s-whisker--86b45df6f8--cmnpq-eth0" Dec 12 18:56:42.123031 containerd[1560]: 2025-12-12 18:56:42.069 [INFO][3825] ipam/ipam_plugin.go 275: Auto assigning IP
ContainerID="9452498ee823751ff9a51e63f07cb4631cc6598150a8e4e60896c388676134a1" HandleID="k8s-pod-network.9452498ee823751ff9a51e63f07cb4631cc6598150a8e4e60896c388676134a1" Workload="172--237--134--203-k8s-whisker--86b45df6f8--cmnpq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d55a0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-237-134-203", "pod":"whisker-86b45df6f8-cmnpq", "timestamp":"2025-12-12 18:56:42.068999873 +0000 UTC"}, Hostname:"172-237-134-203", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:56:42.123031 containerd[1560]: 2025-12-12 18:56:42.069 [INFO][3825] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:56:42.123031 containerd[1560]: 2025-12-12 18:56:42.069 [INFO][3825] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 18:56:42.123031 containerd[1560]: 2025-12-12 18:56:42.069 [INFO][3825] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-237-134-203' Dec 12 18:56:42.123031 containerd[1560]: 2025-12-12 18:56:42.074 [INFO][3825] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9452498ee823751ff9a51e63f07cb4631cc6598150a8e4e60896c388676134a1" host="172-237-134-203" Dec 12 18:56:42.123031 containerd[1560]: 2025-12-12 18:56:42.078 [INFO][3825] ipam/ipam.go 394: Looking up existing affinities for host host="172-237-134-203" Dec 12 18:56:42.123031 containerd[1560]: 2025-12-12 18:56:42.082 [INFO][3825] ipam/ipam.go 511: Trying affinity for 192.168.73.192/26 host="172-237-134-203" Dec 12 18:56:42.123031 containerd[1560]: 2025-12-12 18:56:42.083 [INFO][3825] ipam/ipam.go 158: Attempting to load block cidr=192.168.73.192/26 host="172-237-134-203" Dec 12 18:56:42.123031 containerd[1560]: 2025-12-12 18:56:42.085 [INFO][3825] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.73.192/26 host="172-237-134-203" Dec 12 18:56:42.123230 containerd[1560]: 2025-12-12 18:56:42.085 [INFO][3825] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.73.192/26 handle="k8s-pod-network.9452498ee823751ff9a51e63f07cb4631cc6598150a8e4e60896c388676134a1" host="172-237-134-203" Dec 12 18:56:42.123230 containerd[1560]: 2025-12-12 18:56:42.087 [INFO][3825] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9452498ee823751ff9a51e63f07cb4631cc6598150a8e4e60896c388676134a1 Dec 12 18:56:42.123230 containerd[1560]: 2025-12-12 18:56:42.091 [INFO][3825] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.73.192/26 handle="k8s-pod-network.9452498ee823751ff9a51e63f07cb4631cc6598150a8e4e60896c388676134a1" host="172-237-134-203" Dec 12 18:56:42.123230 containerd[1560]: 2025-12-12 18:56:42.095 [INFO][3825] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.73.193/26] block=192.168.73.192/26 handle="k8s-pod-network.9452498ee823751ff9a51e63f07cb4631cc6598150a8e4e60896c388676134a1" host="172-237-134-203" Dec 12 18:56:42.123230 containerd[1560]: 2025-12-12 18:56:42.095 [INFO][3825] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.73.193/26] handle="k8s-pod-network.9452498ee823751ff9a51e63f07cb4631cc6598150a8e4e60896c388676134a1" host="172-237-134-203" Dec 12 18:56:42.123230 containerd[1560]: 2025-12-12 18:56:42.095 [INFO][3825] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 18:56:42.123230 containerd[1560]: 2025-12-12 18:56:42.095 [INFO][3825] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.73.193/26] IPv6=[] ContainerID="9452498ee823751ff9a51e63f07cb4631cc6598150a8e4e60896c388676134a1" HandleID="k8s-pod-network.9452498ee823751ff9a51e63f07cb4631cc6598150a8e4e60896c388676134a1" Workload="172--237--134--203-k8s-whisker--86b45df6f8--cmnpq-eth0" Dec 12 18:56:42.123367 containerd[1560]: 2025-12-12 18:56:42.099 [INFO][3815] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9452498ee823751ff9a51e63f07cb4631cc6598150a8e4e60896c388676134a1" Namespace="calico-system" Pod="whisker-86b45df6f8-cmnpq" WorkloadEndpoint="172--237--134--203-k8s-whisker--86b45df6f8--cmnpq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--134--203-k8s-whisker--86b45df6f8--cmnpq-eth0", GenerateName:"whisker-86b45df6f8-", Namespace:"calico-system", SelfLink:"", UID:"a91e52ae-48ed-4331-916f-65e4537bb807", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 56, 41, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"86b45df6f8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-134-203", ContainerID:"", Pod:"whisker-86b45df6f8-cmnpq", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.73.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia27235d4984", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:56:42.123367 containerd[1560]: 2025-12-12 18:56:42.099 [INFO][3815] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.73.193/32] ContainerID="9452498ee823751ff9a51e63f07cb4631cc6598150a8e4e60896c388676134a1" Namespace="calico-system" Pod="whisker-86b45df6f8-cmnpq" WorkloadEndpoint="172--237--134--203-k8s-whisker--86b45df6f8--cmnpq-eth0" Dec 12 18:56:42.123439 containerd[1560]: 2025-12-12 18:56:42.099 [INFO][3815] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia27235d4984 ContainerID="9452498ee823751ff9a51e63f07cb4631cc6598150a8e4e60896c388676134a1" Namespace="calico-system" Pod="whisker-86b45df6f8-cmnpq" WorkloadEndpoint="172--237--134--203-k8s-whisker--86b45df6f8--cmnpq-eth0" Dec 12 18:56:42.123439 containerd[1560]: 2025-12-12 18:56:42.109 [INFO][3815] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9452498ee823751ff9a51e63f07cb4631cc6598150a8e4e60896c388676134a1" Namespace="calico-system" Pod="whisker-86b45df6f8-cmnpq" WorkloadEndpoint="172--237--134--203-k8s-whisker--86b45df6f8--cmnpq-eth0" Dec 12 18:56:42.125281 containerd[1560]: 2025-12-12 18:56:42.109 [INFO][3815] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9452498ee823751ff9a51e63f07cb4631cc6598150a8e4e60896c388676134a1" Namespace="calico-system" Pod="whisker-86b45df6f8-cmnpq"
WorkloadEndpoint="172--237--134--203-k8s-whisker--86b45df6f8--cmnpq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--134--203-k8s-whisker--86b45df6f8--cmnpq-eth0", GenerateName:"whisker-86b45df6f8-", Namespace:"calico-system", SelfLink:"", UID:"a91e52ae-48ed-4331-916f-65e4537bb807", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 56, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"86b45df6f8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-134-203", ContainerID:"9452498ee823751ff9a51e63f07cb4631cc6598150a8e4e60896c388676134a1", Pod:"whisker-86b45df6f8-cmnpq", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.73.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia27235d4984", MAC:"2a:1a:9d:34:8d:a7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:56:42.125342 containerd[1560]: 2025-12-12 18:56:42.116 [INFO][3815] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9452498ee823751ff9a51e63f07cb4631cc6598150a8e4e60896c388676134a1" Namespace="calico-system" Pod="whisker-86b45df6f8-cmnpq" WorkloadEndpoint="172--237--134--203-k8s-whisker--86b45df6f8--cmnpq-eth0" Dec 12 18:56:42.166629 containerd[1560]: time="2025-12-12T18:56:42.166586036Z" level=info msg="connecting to shim 9452498ee823751ff9a51e63f07cb4631cc6598150a8e4e60896c388676134a1" address="unix:///run/containerd/s/39901ddf4a890a686fedf9470a4bc25c41b99399b6842185675c5511efa757f0" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:56:42.200621 systemd[1]: Started cri-containerd-9452498ee823751ff9a51e63f07cb4631cc6598150a8e4e60896c388676134a1.scope - libcontainer container 9452498ee823751ff9a51e63f07cb4631cc6598150a8e4e60896c388676134a1. 
Dec 12 18:56:42.248748 containerd[1560]: time="2025-12-12T18:56:42.248636954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-86b45df6f8-cmnpq,Uid:a91e52ae-48ed-4331-916f-65e4537bb807,Namespace:calico-system,Attempt:0,} returns sandbox id \"9452498ee823751ff9a51e63f07cb4631cc6598150a8e4e60896c388676134a1\"" Dec 12 18:56:42.250750 containerd[1560]: time="2025-12-12T18:56:42.250664287Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 18:56:42.374841 containerd[1560]: time="2025-12-12T18:56:42.374792466Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:56:42.375784 containerd[1560]: time="2025-12-12T18:56:42.375751980Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 18:56:42.375833 containerd[1560]: time="2025-12-12T18:56:42.375825160Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 12 18:56:42.376016 kubelet[2723]: E1212 18:56:42.375979 2723 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:56:42.377179 kubelet[2723]: E1212 18:56:42.376025 2723 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:56:42.377179 kubelet[2723]: E1212 18:56:42.376094 2723 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-86b45df6f8-cmnpq_calico-system(a91e52ae-48ed-4331-916f-65e4537bb807): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 18:56:42.379616 containerd[1560]: time="2025-12-12T18:56:42.379581653Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 18:56:42.499062 kubelet[2723]: I1212 18:56:42.498802 2723 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a476f3c-7ec6-4ae1-8d1f-4e0cf5a9a78e" path="/var/lib/kubelet/pods/4a476f3c-7ec6-4ae1-8d1f-4e0cf5a9a78e/volumes" Dec 12 18:56:42.526990 containerd[1560]: time="2025-12-12T18:56:42.526832733Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:56:42.528083 containerd[1560]: time="2025-12-12T18:56:42.527695774Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 18:56:42.528083 containerd[1560]: 
time="2025-12-12T18:56:42.527769224Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 12 18:56:42.528416 kubelet[2723]: E1212 18:56:42.528345 2723 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:56:42.528778 kubelet[2723]: E1212 18:56:42.528726 2723 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:56:42.529528 kubelet[2723]: E1212 18:56:42.529190 2723 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-86b45df6f8-cmnpq_calico-system(a91e52ae-48ed-4331-916f-65e4537bb807): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 18:56:42.529528 kubelet[2723]: E1212 18:56:42.529421 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-86b45df6f8-cmnpq" podUID="a91e52ae-48ed-4331-916f-65e4537bb807" Dec 12 18:56:42.621007 kubelet[2723]: E1212 18:56:42.620974 2723 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:56:42.623494 kubelet[2723]: E1212 18:56:42.622397 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-86b45df6f8-cmnpq" podUID="a91e52ae-48ed-4331-916f-65e4537bb807" Dec 12 18:56:43.624960 kubelet[2723]: E1212 18:56:43.624919 2723 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:56:43.629963 kubelet[2723]: E1212 18:56:43.628182 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-86b45df6f8-cmnpq" podUID="a91e52ae-48ed-4331-916f-65e4537bb807" Dec 12 18:56:43.809016 systemd-networkd[1448]: calia27235d4984: Gained IPv6LL Dec 12 18:56:47.495117 kubelet[2723]: E1212 18:56:47.495074 2723 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:56:47.496159 containerd[1560]: time="2025-12-12T18:56:47.495905864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-2gjxl,Uid:831d14fd-b949-458a-b2cb-c437fcbbb619,Namespace:kube-system,Attempt:0,}" Dec 12 18:56:47.497619 containerd[1560]: time="2025-12-12T18:56:47.497320651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-2wmh6,Uid:7a1fbc12-082d-4cf2-b63a-aaa492c3ca96,Namespace:calico-system,Attempt:0,}" Dec 12 18:56:47.626820 systemd-networkd[1448]: cali7f183a82d46: Link UP Dec 12 18:56:47.627350 systemd-networkd[1448]: cali7f183a82d46: Gained carrier Dec 12 18:56:47.640383 containerd[1560]: 2025-12-12 18:56:47.542 [INFO][4119] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 18:56:47.640383 containerd[1560]: 2025-12-12 18:56:47.558 [INFO][4119] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--237--134--203-k8s-goldmane--7c778bb748--2wmh6-eth0 goldmane-7c778bb748- calico-system 7a1fbc12-082d-4cf2-b63a-aaa492c3ca96 870 0 2025-12-12 18:56:28 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s 172-237-134-203 goldmane-7c778bb748-2wmh6 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali7f183a82d46 [] [] }} ContainerID="53e3b5ec78bbf81072172068d4dd200df98276170bba68da86c034b0e055f74a" Namespace="calico-system" Pod="goldmane-7c778bb748-2wmh6" WorkloadEndpoint="172--237--134--203-k8s-goldmane--7c778bb748--2wmh6-" Dec 12 18:56:47.640383 
containerd[1560]: 2025-12-12 18:56:47.558 [INFO][4119] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="53e3b5ec78bbf81072172068d4dd200df98276170bba68da86c034b0e055f74a" Namespace="calico-system" Pod="goldmane-7c778bb748-2wmh6" WorkloadEndpoint="172--237--134--203-k8s-goldmane--7c778bb748--2wmh6-eth0" Dec 12 18:56:47.640383 containerd[1560]: 2025-12-12 18:56:47.588 [INFO][4139] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="53e3b5ec78bbf81072172068d4dd200df98276170bba68da86c034b0e055f74a" HandleID="k8s-pod-network.53e3b5ec78bbf81072172068d4dd200df98276170bba68da86c034b0e055f74a" Workload="172--237--134--203-k8s-goldmane--7c778bb748--2wmh6-eth0" Dec 12 18:56:47.640592 containerd[1560]: 2025-12-12 18:56:47.588 [INFO][4139] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="53e3b5ec78bbf81072172068d4dd200df98276170bba68da86c034b0e055f74a" HandleID="k8s-pod-network.53e3b5ec78bbf81072172068d4dd200df98276170bba68da86c034b0e055f74a" Workload="172--237--134--203-k8s-goldmane--7c778bb748--2wmh6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5680), Attrs:map[string]string{"namespace":"calico-system", "node":"172-237-134-203", "pod":"goldmane-7c778bb748-2wmh6", "timestamp":"2025-12-12 18:56:47.588216555 +0000 UTC"}, Hostname:"172-237-134-203", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:56:47.640592 containerd[1560]: 2025-12-12 18:56:47.588 [INFO][4139] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:56:47.640592 containerd[1560]: 2025-12-12 18:56:47.588 [INFO][4139] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
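The bracketed CNI records share one shape: timestamp, [LEVEL][request id], source file and line, message. A small sketch for pulling those fields apart, useful for correlating request [4139] across the IPAM lines that follow; the regex is a best-effort match for this excerpt's format, not an official parser:

```go
package main

import (
	"fmt"
	"regexp"
)

// ts [LEVEL][id] file line: message
var re = regexp.MustCompile(`^(\S+ \S+) \[(\w+)\]\[(\d+)\] (\S+) (\d+): (.*)$`)

func main() {
	line := "2025-12-12 18:56:47.588 [INFO][4139] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0"
	if m := re.FindStringSubmatch(line); m != nil {
		fmt.Printf("ts=%s level=%s id=%s src=%s:%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6])
	}
}
```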
Dec 12 18:56:47.640592 containerd[1560]: 2025-12-12 18:56:47.588 [INFO][4139] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-237-134-203' Dec 12 18:56:47.640592 containerd[1560]: 2025-12-12 18:56:47.594 [INFO][4139] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.53e3b5ec78bbf81072172068d4dd200df98276170bba68da86c034b0e055f74a" host="172-237-134-203" Dec 12 18:56:47.640592 containerd[1560]: 2025-12-12 18:56:47.600 [INFO][4139] ipam/ipam.go 394: Looking up existing affinities for host host="172-237-134-203" Dec 12 18:56:47.640592 containerd[1560]: 2025-12-12 18:56:47.603 [INFO][4139] ipam/ipam.go 511: Trying affinity for 192.168.73.192/26 host="172-237-134-203" Dec 12 18:56:47.640592 containerd[1560]: 2025-12-12 18:56:47.605 [INFO][4139] ipam/ipam.go 158: Attempting to load block cidr=192.168.73.192/26 host="172-237-134-203" Dec 12 18:56:47.640592 containerd[1560]: 2025-12-12 18:56:47.607 [INFO][4139] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.73.192/26 host="172-237-134-203" Dec 12 18:56:47.640778 containerd[1560]: 2025-12-12 18:56:47.607 [INFO][4139] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.73.192/26 handle="k8s-pod-network.53e3b5ec78bbf81072172068d4dd200df98276170bba68da86c034b0e055f74a" host="172-237-134-203" Dec 12 18:56:47.640778 containerd[1560]: 2025-12-12 18:56:47.609 [INFO][4139] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.53e3b5ec78bbf81072172068d4dd200df98276170bba68da86c034b0e055f74a Dec 12 18:56:47.640778 containerd[1560]: 2025-12-12 18:56:47.612 [INFO][4139] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.73.192/26 handle="k8s-pod-network.53e3b5ec78bbf81072172068d4dd200df98276170bba68da86c034b0e055f74a" host="172-237-134-203" Dec 12 18:56:47.640778 containerd[1560]: 2025-12-12 18:56:47.617 [INFO][4139] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.73.194/26] block=192.168.73.192/26 handle="k8s-pod-network.53e3b5ec78bbf81072172068d4dd200df98276170bba68da86c034b0e055f74a" host="172-237-134-203" Dec 12 18:56:47.640778 containerd[1560]: 2025-12-12 18:56:47.617 [INFO][4139] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.73.194/26] handle="k8s-pod-network.53e3b5ec78bbf81072172068d4dd200df98276170bba68da86c034b0e055f74a" host="172-237-134-203" Dec 12 18:56:47.640778 containerd[1560]: 2025-12-12 18:56:47.617 [INFO][4139] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
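The lines above are one complete IPAM round trip: confirm the host's affinity for 192.168.73.192/26, load the block, claim the next free address (192.168.73.194, since .193 went to the whisker pod earlier), write the block back, release the lock. A minimal sketch of the next-free-address step — not Calico's actual implementation:

```go
package main

import (
	"fmt"
	"net/netip"
)

// nextFree walks the affine block and returns the first address
// that has not already been handed out.
func nextFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !used[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.73.192/26")
	used := map[netip.Addr]bool{
		netip.MustParseAddr("192.168.73.192"): true, // assumed taken before this excerpt
		netip.MustParseAddr("192.168.73.193"): true, // whisker-86b45df6f8-cmnpq, assigned above
	}
	ip, ok := nextFree(block, used)
	fmt.Println(ip, ok) // 192.168.73.194 true — the address claimed for goldmane
}
```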
Dec 12 18:56:47.640778 containerd[1560]: 2025-12-12 18:56:47.617 [INFO][4139] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.73.194/26] IPv6=[] ContainerID="53e3b5ec78bbf81072172068d4dd200df98276170bba68da86c034b0e055f74a" HandleID="k8s-pod-network.53e3b5ec78bbf81072172068d4dd200df98276170bba68da86c034b0e055f74a" Workload="172--237--134--203-k8s-goldmane--7c778bb748--2wmh6-eth0" Dec 12 18:56:47.641111 containerd[1560]: 2025-12-12 18:56:47.621 [INFO][4119] cni-plugin/k8s.go 418: Populated endpoint ContainerID="53e3b5ec78bbf81072172068d4dd200df98276170bba68da86c034b0e055f74a" Namespace="calico-system" Pod="goldmane-7c778bb748-2wmh6" WorkloadEndpoint="172--237--134--203-k8s-goldmane--7c778bb748--2wmh6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--134--203-k8s-goldmane--7c778bb748--2wmh6-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"7a1fbc12-082d-4cf2-b63a-aaa492c3ca96", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 56, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-134-203", ContainerID:"", Pod:"goldmane-7c778bb748-2wmh6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.73.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7f183a82d46", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:56:47.641111 containerd[1560]: 2025-12-12 18:56:47.621 [INFO][4119] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.73.194/32] ContainerID="53e3b5ec78bbf81072172068d4dd200df98276170bba68da86c034b0e055f74a" Namespace="calico-system" Pod="goldmane-7c778bb748-2wmh6" WorkloadEndpoint="172--237--134--203-k8s-goldmane--7c778bb748--2wmh6-eth0" Dec 12 18:56:47.641292 containerd[1560]: 2025-12-12 18:56:47.621 [INFO][4119] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7f183a82d46 ContainerID="53e3b5ec78bbf81072172068d4dd200df98276170bba68da86c034b0e055f74a" Namespace="calico-system" Pod="goldmane-7c778bb748-2wmh6" WorkloadEndpoint="172--237--134--203-k8s-goldmane--7c778bb748--2wmh6-eth0" Dec 12 18:56:47.641292 containerd[1560]: 2025-12-12 18:56:47.626 [INFO][4119] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="53e3b5ec78bbf81072172068d4dd200df98276170bba68da86c034b0e055f74a" Namespace="calico-system" Pod="goldmane-7c778bb748-2wmh6" WorkloadEndpoint="172--237--134--203-k8s-goldmane--7c778bb748--2wmh6-eth0" Dec 12 18:56:47.641395 containerd[1560]: 2025-12-12 18:56:47.626 [INFO][4119] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="53e3b5ec78bbf81072172068d4dd200df98276170bba68da86c034b0e055f74a" Namespace="calico-system" Pod="goldmane-7c778bb748-2wmh6" 
WorkloadEndpoint="172--237--134--203-k8s-goldmane--7c778bb748--2wmh6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--134--203-k8s-goldmane--7c778bb748--2wmh6-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"7a1fbc12-082d-4cf2-b63a-aaa492c3ca96", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 56, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-134-203", ContainerID:"53e3b5ec78bbf81072172068d4dd200df98276170bba68da86c034b0e055f74a", Pod:"goldmane-7c778bb748-2wmh6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.73.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7f183a82d46", MAC:"36:02:fa:a8:ae:00", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:56:47.641590 containerd[1560]: 2025-12-12 18:56:47.637 [INFO][4119] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="53e3b5ec78bbf81072172068d4dd200df98276170bba68da86c034b0e055f74a" Namespace="calico-system" Pod="goldmane-7c778bb748-2wmh6" WorkloadEndpoint="172--237--134--203-k8s-goldmane--7c778bb748--2wmh6-eth0" Dec 12 18:56:47.667306 containerd[1560]: time="2025-12-12T18:56:47.667251839Z" level=info msg="connecting to shim 53e3b5ec78bbf81072172068d4dd200df98276170bba68da86c034b0e055f74a" address="unix:///run/containerd/s/4bfcdc1aac5b0adb28ecdb4db70472aba9fd6f3af44d7e6c29b31c0e4cd1b066" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:56:47.698584 systemd[1]: Started cri-containerd-53e3b5ec78bbf81072172068d4dd200df98276170bba68da86c034b0e055f74a.scope - libcontainer container 53e3b5ec78bbf81072172068d4dd200df98276170bba68da86c034b0e055f74a. 
Dec 12 18:56:47.745425 systemd-networkd[1448]: calic008ebb0e58: Link UP Dec 12 18:56:47.747603 systemd-networkd[1448]: calic008ebb0e58: Gained carrier Dec 12 18:56:47.765127 containerd[1560]: 2025-12-12 18:56:47.540 [INFO][4114] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 18:56:47.765127 containerd[1560]: 2025-12-12 18:56:47.557 [INFO][4114] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--237--134--203-k8s-coredns--66bc5c9577--2gjxl-eth0 coredns-66bc5c9577- kube-system 831d14fd-b949-458a-b2cb-c437fcbbb619 869 0 2025-12-12 18:56:17 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-237-134-203 coredns-66bc5c9577-2gjxl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic008ebb0e58 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="e1ad97bda00a13a2146aee6877ba771cf3eb180201f6dfc689fb34639dbc95d7" Namespace="kube-system" Pod="coredns-66bc5c9577-2gjxl" WorkloadEndpoint="172--237--134--203-k8s-coredns--66bc5c9577--2gjxl-" Dec 12 18:56:47.765127 containerd[1560]: 2025-12-12 18:56:47.557 [INFO][4114] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e1ad97bda00a13a2146aee6877ba771cf3eb180201f6dfc689fb34639dbc95d7" Namespace="kube-system" Pod="coredns-66bc5c9577-2gjxl" WorkloadEndpoint="172--237--134--203-k8s-coredns--66bc5c9577--2gjxl-eth0" Dec 12 18:56:47.765127 containerd[1560]: 2025-12-12 18:56:47.591 [INFO][4137] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e1ad97bda00a13a2146aee6877ba771cf3eb180201f6dfc689fb34639dbc95d7" HandleID="k8s-pod-network.e1ad97bda00a13a2146aee6877ba771cf3eb180201f6dfc689fb34639dbc95d7" Workload="172--237--134--203-k8s-coredns--66bc5c9577--2gjxl-eth0" Dec 12 18:56:47.765358 containerd[1560]: 2025-12-12 18:56:47.591 [INFO][4137] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e1ad97bda00a13a2146aee6877ba771cf3eb180201f6dfc689fb34639dbc95d7" HandleID="k8s-pod-network.e1ad97bda00a13a2146aee6877ba771cf3eb180201f6dfc689fb34639dbc95d7" Workload="172--237--134--203-k8s-coredns--66bc5c9577--2gjxl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f010), Attrs:map[string]string{"namespace":"kube-system", "node":"172-237-134-203", "pod":"coredns-66bc5c9577-2gjxl", "timestamp":"2025-12-12 18:56:47.59155853 +0000 UTC"}, Hostname:"172-237-134-203", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:56:47.765358 containerd[1560]: 2025-12-12 18:56:47.591 [INFO][4137] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:56:47.765358 containerd[1560]: 2025-12-12 18:56:47.617 [INFO][4137] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
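Note the timestamps: request [4137] logs "About to acquire" at 47.591 but "Acquired" only at 47.617, immediately after [4139] released the lock, so the two concurrent CNI ADDs were serialized for roughly 26 ms. A toy illustration of that serialization with an in-process mutex (the real host-wide lock covers all IPAM requests on the node):

```go
package main

import (
	"fmt"
	"sync"
)

var ipamLock sync.Mutex // stands in for the host-wide IPAM lock

func assign(pod string, wg *sync.WaitGroup) {
	defer wg.Done()
	ipamLock.Lock()         // "Acquired host-wide IPAM lock."
	defer ipamLock.Unlock() // "Released host-wide IPAM lock."
	fmt.Println("assigning address for", pod)
}

func main() {
	var wg sync.WaitGroup
	wg.Add(2)
	go assign("goldmane-7c778bb748-2wmh6", &wg) // request [4139]
	go assign("coredns-66bc5c9577-2gjxl", &wg)  // request [4137]
	wg.Wait()
}
```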
Dec 12 18:56:47.765358 containerd[1560]: 2025-12-12 18:56:47.617 [INFO][4137] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-237-134-203' Dec 12 18:56:47.765358 containerd[1560]: 2025-12-12 18:56:47.696 [INFO][4137] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e1ad97bda00a13a2146aee6877ba771cf3eb180201f6dfc689fb34639dbc95d7" host="172-237-134-203" Dec 12 18:56:47.765358 containerd[1560]: 2025-12-12 18:56:47.704 [INFO][4137] ipam/ipam.go 394: Looking up existing affinities for host host="172-237-134-203" Dec 12 18:56:47.765358 containerd[1560]: 2025-12-12 18:56:47.708 [INFO][4137] ipam/ipam.go 511: Trying affinity for 192.168.73.192/26 host="172-237-134-203" Dec 12 18:56:47.765358 containerd[1560]: 2025-12-12 18:56:47.713 [INFO][4137] ipam/ipam.go 158: Attempting to load block cidr=192.168.73.192/26 host="172-237-134-203" Dec 12 18:56:47.765358 containerd[1560]: 2025-12-12 18:56:47.717 [INFO][4137] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.73.192/26 host="172-237-134-203" Dec 12 18:56:47.765358 containerd[1560]: 2025-12-12 18:56:47.719 [INFO][4137] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.73.192/26 handle="k8s-pod-network.e1ad97bda00a13a2146aee6877ba771cf3eb180201f6dfc689fb34639dbc95d7" host="172-237-134-203" Dec 12 18:56:47.765639 containerd[1560]: 2025-12-12 18:56:47.721 [INFO][4137] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e1ad97bda00a13a2146aee6877ba771cf3eb180201f6dfc689fb34639dbc95d7 Dec 12 18:56:47.765639 containerd[1560]: 2025-12-12 18:56:47.728 [INFO][4137] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.73.192/26 handle="k8s-pod-network.e1ad97bda00a13a2146aee6877ba771cf3eb180201f6dfc689fb34639dbc95d7" host="172-237-134-203" Dec 12 18:56:47.765639 containerd[1560]: 2025-12-12 18:56:47.734 [INFO][4137] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.73.195/26] block=192.168.73.192/26 handle="k8s-pod-network.e1ad97bda00a13a2146aee6877ba771cf3eb180201f6dfc689fb34639dbc95d7" host="172-237-134-203" Dec 12 18:56:47.765639 containerd[1560]: 2025-12-12 18:56:47.734 [INFO][4137] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.73.195/26] handle="k8s-pod-network.e1ad97bda00a13a2146aee6877ba771cf3eb180201f6dfc689fb34639dbc95d7" host="172-237-134-203" Dec 12 18:56:47.765639 containerd[1560]: 2025-12-12 18:56:47.734 [INFO][4137] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
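The Go struct dumps in the surrounding endpoint records print port numbers as hex literals (Port:0x35 and so on). Decoding the coredns WorkloadEndpointPort values for reference:

```go
package main

import "fmt"

func main() {
	ports := map[string]int{
		"dns":             0x35,   // 53/UDP
		"dns-tcp":         0x35,   // 53/TCP
		"metrics":         0x23c1, // 9153
		"liveness-probe":  0x1f90, // 8080
		"readiness-probe": 0x1ff5, // 8181
	}
	for name, p := range ports {
		fmt.Printf("%s => %d\n", name, p)
	}
}
```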
Dec 12 18:56:47.765639 containerd[1560]: 2025-12-12 18:56:47.735 [INFO][4137] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.73.195/26] IPv6=[] ContainerID="e1ad97bda00a13a2146aee6877ba771cf3eb180201f6dfc689fb34639dbc95d7" HandleID="k8s-pod-network.e1ad97bda00a13a2146aee6877ba771cf3eb180201f6dfc689fb34639dbc95d7" Workload="172--237--134--203-k8s-coredns--66bc5c9577--2gjxl-eth0" Dec 12 18:56:47.765792 containerd[1560]: 2025-12-12 18:56:47.740 [INFO][4114] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e1ad97bda00a13a2146aee6877ba771cf3eb180201f6dfc689fb34639dbc95d7" Namespace="kube-system" Pod="coredns-66bc5c9577-2gjxl" WorkloadEndpoint="172--237--134--203-k8s-coredns--66bc5c9577--2gjxl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--134--203-k8s-coredns--66bc5c9577--2gjxl-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"831d14fd-b949-458a-b2cb-c437fcbbb619", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 56, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-134-203", ContainerID:"", Pod:"coredns-66bc5c9577-2gjxl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.73.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic008ebb0e58", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:56:47.765792 containerd[1560]: 2025-12-12 18:56:47.741 [INFO][4114] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.73.195/32] ContainerID="e1ad97bda00a13a2146aee6877ba771cf3eb180201f6dfc689fb34639dbc95d7" Namespace="kube-system" Pod="coredns-66bc5c9577-2gjxl" WorkloadEndpoint="172--237--134--203-k8s-coredns--66bc5c9577--2gjxl-eth0" Dec 12 18:56:47.765792 containerd[1560]: 2025-12-12 18:56:47.741 [INFO][4114] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic008ebb0e58 ContainerID="e1ad97bda00a13a2146aee6877ba771cf3eb180201f6dfc689fb34639dbc95d7" Namespace="kube-system" Pod="coredns-66bc5c9577-2gjxl" WorkloadEndpoint="172--237--134--203-k8s-coredns--66bc5c9577--2gjxl-eth0" Dec 12 
18:56:47.765792 containerd[1560]: 2025-12-12 18:56:47.748 [INFO][4114] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e1ad97bda00a13a2146aee6877ba771cf3eb180201f6dfc689fb34639dbc95d7" Namespace="kube-system" Pod="coredns-66bc5c9577-2gjxl" WorkloadEndpoint="172--237--134--203-k8s-coredns--66bc5c9577--2gjxl-eth0" Dec 12 18:56:47.765792 containerd[1560]: 2025-12-12 18:56:47.749 [INFO][4114] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e1ad97bda00a13a2146aee6877ba771cf3eb180201f6dfc689fb34639dbc95d7" Namespace="kube-system" Pod="coredns-66bc5c9577-2gjxl" WorkloadEndpoint="172--237--134--203-k8s-coredns--66bc5c9577--2gjxl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--134--203-k8s-coredns--66bc5c9577--2gjxl-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"831d14fd-b949-458a-b2cb-c437fcbbb619", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 56, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-134-203", ContainerID:"e1ad97bda00a13a2146aee6877ba771cf3eb180201f6dfc689fb34639dbc95d7", Pod:"coredns-66bc5c9577-2gjxl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.73.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic008ebb0e58", MAC:"0e:41:fd:ea:a4:82", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:56:47.765792 containerd[1560]: 2025-12-12 18:56:47.758 [INFO][4114] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e1ad97bda00a13a2146aee6877ba771cf3eb180201f6dfc689fb34639dbc95d7" Namespace="kube-system" Pod="coredns-66bc5c9577-2gjxl" WorkloadEndpoint="172--237--134--203-k8s-coredns--66bc5c9577--2gjxl-eth0" Dec 12 18:56:47.798532 containerd[1560]: time="2025-12-12T18:56:47.798434160Z" level=info msg="connecting to shim e1ad97bda00a13a2146aee6877ba771cf3eb180201f6dfc689fb34639dbc95d7" address="unix:///run/containerd/s/09ef51c837955f37e35ffd2ea58463a9b8926cb39a552b8ed0c91ce98d05ab03" namespace=k8s.io protocol=ttrpc version=3 Dec 12 
18:56:47.830756 systemd[1]: Started cri-containerd-e1ad97bda00a13a2146aee6877ba771cf3eb180201f6dfc689fb34639dbc95d7.scope - libcontainer container e1ad97bda00a13a2146aee6877ba771cf3eb180201f6dfc689fb34639dbc95d7. Dec 12 18:56:47.860387 containerd[1560]: time="2025-12-12T18:56:47.860331230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-2wmh6,Uid:7a1fbc12-082d-4cf2-b63a-aaa492c3ca96,Namespace:calico-system,Attempt:0,} returns sandbox id \"53e3b5ec78bbf81072172068d4dd200df98276170bba68da86c034b0e055f74a\"" Dec 12 18:56:47.863413 containerd[1560]: time="2025-12-12T18:56:47.863378260Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 18:56:47.903574 containerd[1560]: time="2025-12-12T18:56:47.903433110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-2gjxl,Uid:831d14fd-b949-458a-b2cb-c437fcbbb619,Namespace:kube-system,Attempt:0,} returns sandbox id \"e1ad97bda00a13a2146aee6877ba771cf3eb180201f6dfc689fb34639dbc95d7\"" Dec 12 18:56:47.904331 kubelet[2723]: E1212 18:56:47.904194 2723 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:56:47.907880 containerd[1560]: time="2025-12-12T18:56:47.907763992Z" level=info msg="CreateContainer within sandbox \"e1ad97bda00a13a2146aee6877ba771cf3eb180201f6dfc689fb34639dbc95d7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 18:56:47.917322 containerd[1560]: time="2025-12-12T18:56:47.917278125Z" level=info msg="Container 69100ac3512c23b9d9e4b1de05de94e6357afaa22880b86f6d3c230487a885d2: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:56:47.921228 containerd[1560]: time="2025-12-12T18:56:47.921182556Z" level=info msg="CreateContainer within sandbox \"e1ad97bda00a13a2146aee6877ba771cf3eb180201f6dfc689fb34639dbc95d7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"69100ac3512c23b9d9e4b1de05de94e6357afaa22880b86f6d3c230487a885d2\"" Dec 12 18:56:47.921808 containerd[1560]: time="2025-12-12T18:56:47.921775396Z" level=info msg="StartContainer for \"69100ac3512c23b9d9e4b1de05de94e6357afaa22880b86f6d3c230487a885d2\"" Dec 12 18:56:47.922868 containerd[1560]: time="2025-12-12T18:56:47.922813869Z" level=info msg="connecting to shim 69100ac3512c23b9d9e4b1de05de94e6357afaa22880b86f6d3c230487a885d2" address="unix:///run/containerd/s/09ef51c837955f37e35ffd2ea58463a9b8926cb39a552b8ed0c91ce98d05ab03" protocol=ttrpc version=3 Dec 12 18:56:47.940608 systemd[1]: Started cri-containerd-69100ac3512c23b9d9e4b1de05de94e6357afaa22880b86f6d3c230487a885d2.scope - libcontainer container 69100ac3512c23b9d9e4b1de05de94e6357afaa22880b86f6d3c230487a885d2. 
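The goldmane pull just issued will fail exactly like the whisker and whisker-backend pulls above: resolving a tag is a manifest request against the registry's v2 API, and ghcr.io answers 404 Not Found when the tag does not exist. A hedged sketch of that check — the anonymous token endpoint shape is the conventional GHCR one and may differ in detail:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// tagExists asks the registry's v2 API whether repo:tag resolves. A 404
// on the manifest endpoint is what containerd surfaces above as
// "failed to resolve reference ... not found".
func tagExists(repo, tag string) (bool, error) {
	// GHCR hands out anonymous bearer tokens for public pulls.
	tr, err := http.Get("https://ghcr.io/token?scope=repository:" + repo + ":pull")
	if err != nil {
		return false, err
	}
	defer tr.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(tr.Body).Decode(&tok); err != nil {
		return false, err
	}

	req, err := http.NewRequest(http.MethodHead,
		"https://ghcr.io/v2/"+repo+"/manifests/"+tag, nil)
	if err != nil {
		return false, err
	}
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return false, err
	}
	resp.Body.Close()
	return resp.StatusCode == http.StatusOK, nil // 404 => tag not found
}

func main() {
	ok, err := tagExists("flatcar/calico/goldmane", "v3.30.4")
	fmt.Println("exists:", ok, "err:", err)
}
```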
Dec 12 18:56:47.985531 containerd[1560]: time="2025-12-12T18:56:47.985486430Z" level=info msg="StartContainer for \"69100ac3512c23b9d9e4b1de05de94e6357afaa22880b86f6d3c230487a885d2\" returns successfully" Dec 12 18:56:48.007330 containerd[1560]: time="2025-12-12T18:56:48.007200734Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:56:48.008617 containerd[1560]: time="2025-12-12T18:56:48.008546819Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 18:56:48.008681 containerd[1560]: time="2025-12-12T18:56:48.008589164Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 12 18:56:48.009219 kubelet[2723]: E1212 18:56:48.009182 2723 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:56:48.009275 kubelet[2723]: E1212 18:56:48.009225 2723 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:56:48.009310 kubelet[2723]: E1212 18:56:48.009283 2723 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-2wmh6_calico-system(7a1fbc12-082d-4cf2-b63a-aaa492c3ca96): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 18:56:48.009337 kubelet[2723]: E1212 18:56:48.009313 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-2wmh6" podUID="7a1fbc12-082d-4cf2-b63a-aaa492c3ca96" Dec 12 18:56:48.496039 kubelet[2723]: E1212 18:56:48.495909 2723 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:56:48.497543 containerd[1560]: time="2025-12-12T18:56:48.497046708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-6nlkm,Uid:4791cb7e-5882-4565-8051-3b19c6f35a2b,Namespace:kube-system,Attempt:0,}" Dec 12 18:56:48.601607 systemd-networkd[1448]: cali309d29e9195: Link UP Dec 12 18:56:48.602803 systemd-networkd[1448]: cali309d29e9195: Gained carrier Dec 12 18:56:48.622634 containerd[1560]: 2025-12-12 18:56:48.527 [INFO][4314] cni-plugin/utils.go 100: File /var/lib/calico/mtu 
does not exist Dec 12 18:56:48.622634 containerd[1560]: 2025-12-12 18:56:48.539 [INFO][4314] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--237--134--203-k8s-coredns--66bc5c9577--6nlkm-eth0 coredns-66bc5c9577- kube-system 4791cb7e-5882-4565-8051-3b19c6f35a2b 861 0 2025-12-12 18:56:17 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-237-134-203 coredns-66bc5c9577-6nlkm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali309d29e9195 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="680d4c88117bc44f892b0326fb75781f2a95a89d64d4f113d9de3d4281f91d3b" Namespace="kube-system" Pod="coredns-66bc5c9577-6nlkm" WorkloadEndpoint="172--237--134--203-k8s-coredns--66bc5c9577--6nlkm-" Dec 12 18:56:48.622634 containerd[1560]: 2025-12-12 18:56:48.539 [INFO][4314] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="680d4c88117bc44f892b0326fb75781f2a95a89d64d4f113d9de3d4281f91d3b" Namespace="kube-system" Pod="coredns-66bc5c9577-6nlkm" WorkloadEndpoint="172--237--134--203-k8s-coredns--66bc5c9577--6nlkm-eth0" Dec 12 18:56:48.622634 containerd[1560]: 2025-12-12 18:56:48.567 [INFO][4325] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="680d4c88117bc44f892b0326fb75781f2a95a89d64d4f113d9de3d4281f91d3b" HandleID="k8s-pod-network.680d4c88117bc44f892b0326fb75781f2a95a89d64d4f113d9de3d4281f91d3b" Workload="172--237--134--203-k8s-coredns--66bc5c9577--6nlkm-eth0" Dec 12 18:56:48.622634 containerd[1560]: 2025-12-12 18:56:48.567 [INFO][4325] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="680d4c88117bc44f892b0326fb75781f2a95a89d64d4f113d9de3d4281f91d3b" HandleID="k8s-pod-network.680d4c88117bc44f892b0326fb75781f2a95a89d64d4f113d9de3d4281f91d3b" Workload="172--237--134--203-k8s-coredns--66bc5c9577--6nlkm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f090), Attrs:map[string]string{"namespace":"kube-system", "node":"172-237-134-203", "pod":"coredns-66bc5c9577-6nlkm", "timestamp":"2025-12-12 18:56:48.567758842 +0000 UTC"}, Hostname:"172-237-134-203", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:56:48.622634 containerd[1560]: 2025-12-12 18:56:48.567 [INFO][4325] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:56:48.622634 containerd[1560]: 2025-12-12 18:56:48.568 [INFO][4325] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 18:56:48.622634 containerd[1560]: 2025-12-12 18:56:48.568 [INFO][4325] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-237-134-203' Dec 12 18:56:48.622634 containerd[1560]: 2025-12-12 18:56:48.573 [INFO][4325] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.680d4c88117bc44f892b0326fb75781f2a95a89d64d4f113d9de3d4281f91d3b" host="172-237-134-203" Dec 12 18:56:48.622634 containerd[1560]: 2025-12-12 18:56:48.577 [INFO][4325] ipam/ipam.go 394: Looking up existing affinities for host host="172-237-134-203" Dec 12 18:56:48.622634 containerd[1560]: 2025-12-12 18:56:48.580 [INFO][4325] ipam/ipam.go 511: Trying affinity for 192.168.73.192/26 host="172-237-134-203" Dec 12 18:56:48.622634 containerd[1560]: 2025-12-12 18:56:48.582 [INFO][4325] ipam/ipam.go 158: Attempting to load block cidr=192.168.73.192/26 host="172-237-134-203" Dec 12 18:56:48.622634 containerd[1560]: 2025-12-12 18:56:48.584 [INFO][4325] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.73.192/26 host="172-237-134-203" Dec 12 18:56:48.622634 containerd[1560]: 2025-12-12 18:56:48.584 [INFO][4325] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.73.192/26 handle="k8s-pod-network.680d4c88117bc44f892b0326fb75781f2a95a89d64d4f113d9de3d4281f91d3b" host="172-237-134-203" Dec 12 18:56:48.622634 containerd[1560]: 2025-12-12 18:56:48.585 [INFO][4325] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.680d4c88117bc44f892b0326fb75781f2a95a89d64d4f113d9de3d4281f91d3b Dec 12 18:56:48.622634 containerd[1560]: 2025-12-12 18:56:48.588 [INFO][4325] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.73.192/26 handle="k8s-pod-network.680d4c88117bc44f892b0326fb75781f2a95a89d64d4f113d9de3d4281f91d3b" host="172-237-134-203" Dec 12 18:56:48.622634 containerd[1560]: 2025-12-12 18:56:48.594 [INFO][4325] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.73.196/26] block=192.168.73.192/26 handle="k8s-pod-network.680d4c88117bc44f892b0326fb75781f2a95a89d64d4f113d9de3d4281f91d3b" host="172-237-134-203" Dec 12 18:56:48.622634 containerd[1560]: 2025-12-12 18:56:48.594 [INFO][4325] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.73.196/26] handle="k8s-pod-network.680d4c88117bc44f892b0326fb75781f2a95a89d64d4f113d9de3d4281f91d3b" host="172-237-134-203" Dec 12 18:56:48.622634 containerd[1560]: 2025-12-12 18:56:48.594 [INFO][4325] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
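The recurring "Nameserver limits exceeded" errors come from kubelet clamping a pod's resolv.conf to three nameservers and reporting the survivors (172.232.0.13, 172.232.0.22, 172.232.0.9). A minimal sketch of that clamping, assuming the classic three-resolver limit; the fourth address below is hypothetical, added only to trigger the clamp:

```go
package main

import (
	"fmt"
	"strings"
)

const maxNameservers = 3 // classic resolver limit kubelet enforces

// clamp keeps at most maxNameservers entries and reports whether any
// were dropped — the condition the kubelet error above describes.
func clamp(ns []string) (kept []string, omitted bool) {
	if len(ns) <= maxNameservers {
		return ns, false
	}
	return ns[:maxNameservers], true
}

func main() {
	kept, omitted := clamp([]string{
		"172.232.0.13", "172.232.0.22", "172.232.0.9",
		"172.232.0.17", // hypothetical extra resolver
	})
	fmt.Println("applied:", strings.Join(kept, " "), "omitted:", omitted)
}
```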
Dec 12 18:56:48.622634 containerd[1560]: 2025-12-12 18:56:48.594 [INFO][4325] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.73.196/26] IPv6=[] ContainerID="680d4c88117bc44f892b0326fb75781f2a95a89d64d4f113d9de3d4281f91d3b" HandleID="k8s-pod-network.680d4c88117bc44f892b0326fb75781f2a95a89d64d4f113d9de3d4281f91d3b" Workload="172--237--134--203-k8s-coredns--66bc5c9577--6nlkm-eth0" Dec 12 18:56:48.623675 containerd[1560]: 2025-12-12 18:56:48.597 [INFO][4314] cni-plugin/k8s.go 418: Populated endpoint ContainerID="680d4c88117bc44f892b0326fb75781f2a95a89d64d4f113d9de3d4281f91d3b" Namespace="kube-system" Pod="coredns-66bc5c9577-6nlkm" WorkloadEndpoint="172--237--134--203-k8s-coredns--66bc5c9577--6nlkm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--134--203-k8s-coredns--66bc5c9577--6nlkm-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"4791cb7e-5882-4565-8051-3b19c6f35a2b", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 56, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-134-203", ContainerID:"", Pod:"coredns-66bc5c9577-6nlkm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.73.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali309d29e9195", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:56:48.623675 containerd[1560]: 2025-12-12 18:56:48.598 [INFO][4314] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.73.196/32] ContainerID="680d4c88117bc44f892b0326fb75781f2a95a89d64d4f113d9de3d4281f91d3b" Namespace="kube-system" Pod="coredns-66bc5c9577-6nlkm" WorkloadEndpoint="172--237--134--203-k8s-coredns--66bc5c9577--6nlkm-eth0" Dec 12 18:56:48.623675 containerd[1560]: 2025-12-12 18:56:48.598 [INFO][4314] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali309d29e9195 ContainerID="680d4c88117bc44f892b0326fb75781f2a95a89d64d4f113d9de3d4281f91d3b" Namespace="kube-system" Pod="coredns-66bc5c9577-6nlkm" WorkloadEndpoint="172--237--134--203-k8s-coredns--66bc5c9577--6nlkm-eth0" Dec 12 
18:56:48.623675 containerd[1560]: 2025-12-12 18:56:48.604 [INFO][4314] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="680d4c88117bc44f892b0326fb75781f2a95a89d64d4f113d9de3d4281f91d3b" Namespace="kube-system" Pod="coredns-66bc5c9577-6nlkm" WorkloadEndpoint="172--237--134--203-k8s-coredns--66bc5c9577--6nlkm-eth0" Dec 12 18:56:48.623675 containerd[1560]: 2025-12-12 18:56:48.606 [INFO][4314] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="680d4c88117bc44f892b0326fb75781f2a95a89d64d4f113d9de3d4281f91d3b" Namespace="kube-system" Pod="coredns-66bc5c9577-6nlkm" WorkloadEndpoint="172--237--134--203-k8s-coredns--66bc5c9577--6nlkm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--134--203-k8s-coredns--66bc5c9577--6nlkm-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"4791cb7e-5882-4565-8051-3b19c6f35a2b", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 56, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-134-203", ContainerID:"680d4c88117bc44f892b0326fb75781f2a95a89d64d4f113d9de3d4281f91d3b", Pod:"coredns-66bc5c9577-6nlkm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.73.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali309d29e9195", MAC:"ce:75:04:8b:ae:d7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:56:48.623675 containerd[1560]: 2025-12-12 18:56:48.617 [INFO][4314] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="680d4c88117bc44f892b0326fb75781f2a95a89d64d4f113d9de3d4281f91d3b" Namespace="kube-system" Pod="coredns-66bc5c9577-6nlkm" WorkloadEndpoint="172--237--134--203-k8s-coredns--66bc5c9577--6nlkm-eth0" Dec 12 18:56:48.642280 kubelet[2723]: E1212 18:56:48.641992 2723 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:56:48.646832 kubelet[2723]: E1212 18:56:48.646763 2723 
pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-2wmh6" podUID="7a1fbc12-082d-4cf2-b63a-aaa492c3ca96" Dec 12 18:56:48.651266 containerd[1560]: time="2025-12-12T18:56:48.651181594Z" level=info msg="connecting to shim 680d4c88117bc44f892b0326fb75781f2a95a89d64d4f113d9de3d4281f91d3b" address="unix:///run/containerd/s/a688f46c0da4e88f558d14d7bde2a8b41ae3aa3965b8424d1c21f6d65e40c783" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:56:48.685711 kubelet[2723]: I1212 18:56:48.685632 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-2gjxl" podStartSLOduration=31.685617081 podStartE2EDuration="31.685617081s" podCreationTimestamp="2025-12-12 18:56:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:56:48.662453286 +0000 UTC m=+38.259555694" watchObservedRunningTime="2025-12-12 18:56:48.685617081 +0000 UTC m=+38.282719489" Dec 12 18:56:48.708895 systemd[1]: Started cri-containerd-680d4c88117bc44f892b0326fb75781f2a95a89d64d4f113d9de3d4281f91d3b.scope - libcontainer container 680d4c88117bc44f892b0326fb75781f2a95a89d64d4f113d9de3d4281f91d3b. Dec 12 18:56:48.768505 containerd[1560]: time="2025-12-12T18:56:48.768396368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-6nlkm,Uid:4791cb7e-5882-4565-8051-3b19c6f35a2b,Namespace:kube-system,Attempt:0,} returns sandbox id \"680d4c88117bc44f892b0326fb75781f2a95a89d64d4f113d9de3d4281f91d3b\"" Dec 12 18:56:48.770049 kubelet[2723]: E1212 18:56:48.770025 2723 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:56:48.775005 containerd[1560]: time="2025-12-12T18:56:48.774965941Z" level=info msg="CreateContainer within sandbox \"680d4c88117bc44f892b0326fb75781f2a95a89d64d4f113d9de3d4281f91d3b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 18:56:48.787435 containerd[1560]: time="2025-12-12T18:56:48.786631298Z" level=info msg="Container 588b04e06085057be71f22d29cd35306491bea302f4f69464462f47c0b50174a: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:56:48.790829 containerd[1560]: time="2025-12-12T18:56:48.790791525Z" level=info msg="CreateContainer within sandbox \"680d4c88117bc44f892b0326fb75781f2a95a89d64d4f113d9de3d4281f91d3b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"588b04e06085057be71f22d29cd35306491bea302f4f69464462f47c0b50174a\"" Dec 12 18:56:48.791713 containerd[1560]: time="2025-12-12T18:56:48.791692999Z" level=info msg="StartContainer for \"588b04e06085057be71f22d29cd35306491bea302f4f69464462f47c0b50174a\"" Dec 12 18:56:48.794330 containerd[1560]: time="2025-12-12T18:56:48.794266083Z" level=info msg="connecting to shim 588b04e06085057be71f22d29cd35306491bea302f4f69464462f47c0b50174a" address="unix:///run/containerd/s/a688f46c0da4e88f558d14d7bde2a8b41ae3aa3965b8424d1c21f6d65e40c783" protocol=ttrpc version=3 Dec 12 18:56:48.817605 
systemd[1]: Started cri-containerd-588b04e06085057be71f22d29cd35306491bea302f4f69464462f47c0b50174a.scope - libcontainer container 588b04e06085057be71f22d29cd35306491bea302f4f69464462f47c0b50174a. Dec 12 18:56:48.857179 containerd[1560]: time="2025-12-12T18:56:48.857131139Z" level=info msg="StartContainer for \"588b04e06085057be71f22d29cd35306491bea302f4f69464462f47c0b50174a\" returns successfully" Dec 12 18:56:49.312753 systemd-networkd[1448]: cali7f183a82d46: Gained IPv6LL Dec 12 18:56:49.647675 kubelet[2723]: E1212 18:56:49.647570 2723 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:56:49.648727 kubelet[2723]: E1212 18:56:49.648105 2723 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:56:49.649083 kubelet[2723]: E1212 18:56:49.649028 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-2wmh6" podUID="7a1fbc12-082d-4cf2-b63a-aaa492c3ca96" Dec 12 18:56:49.657286 kubelet[2723]: I1212 18:56:49.657169 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-6nlkm" podStartSLOduration=32.657157992 podStartE2EDuration="32.657157992s" podCreationTimestamp="2025-12-12 18:56:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:56:49.65642386 +0000 UTC m=+39.253526268" watchObservedRunningTime="2025-12-12 18:56:49.657157992 +0000 UTC m=+39.254260420" Dec 12 18:56:49.696648 systemd-networkd[1448]: calic008ebb0e58: Gained IPv6LL Dec 12 18:56:50.080724 systemd-networkd[1448]: cali309d29e9195: Gained IPv6LL Dec 12 18:56:50.497778 containerd[1560]: time="2025-12-12T18:56:50.497627712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-684d7d59f5-x5wzd,Uid:b9c97883-cc24-4c44-982c-86a4cdeab0b3,Namespace:calico-system,Attempt:0,}" Dec 12 18:56:50.505815 containerd[1560]: time="2025-12-12T18:56:50.505378651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-568b9b9d99-flgkd,Uid:c6117e7e-1835-4bc6-967b-fc9429542c7a,Namespace:calico-apiserver,Attempt:0,}" Dec 12 18:56:50.621411 systemd-networkd[1448]: cali4ffac26903c: Link UP Dec 12 18:56:50.622543 systemd-networkd[1448]: cali4ffac26903c: Gained carrier Dec 12 18:56:50.634106 containerd[1560]: 2025-12-12 18:56:50.535 [INFO][4463] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 18:56:50.634106 containerd[1560]: 2025-12-12 18:56:50.545 [INFO][4463] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--237--134--203-k8s-calico--kube--controllers--684d7d59f5--x5wzd-eth0 calico-kube-controllers-684d7d59f5- calico-system b9c97883-cc24-4c44-982c-86a4cdeab0b3 872 0 2025-12-12 18:56:30 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers 
k8s-app:calico-kube-controllers pod-template-hash:684d7d59f5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 172-237-134-203 calico-kube-controllers-684d7d59f5-x5wzd eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali4ffac26903c [] [] }} ContainerID="7ddbdaa55d26dff200fe74efc9d0cb353268a03e1425cd23ba47a971d05bc262" Namespace="calico-system" Pod="calico-kube-controllers-684d7d59f5-x5wzd" WorkloadEndpoint="172--237--134--203-k8s-calico--kube--controllers--684d7d59f5--x5wzd-" Dec 12 18:56:50.634106 containerd[1560]: 2025-12-12 18:56:50.545 [INFO][4463] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7ddbdaa55d26dff200fe74efc9d0cb353268a03e1425cd23ba47a971d05bc262" Namespace="calico-system" Pod="calico-kube-controllers-684d7d59f5-x5wzd" WorkloadEndpoint="172--237--134--203-k8s-calico--kube--controllers--684d7d59f5--x5wzd-eth0" Dec 12 18:56:50.634106 containerd[1560]: 2025-12-12 18:56:50.587 [INFO][4490] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7ddbdaa55d26dff200fe74efc9d0cb353268a03e1425cd23ba47a971d05bc262" HandleID="k8s-pod-network.7ddbdaa55d26dff200fe74efc9d0cb353268a03e1425cd23ba47a971d05bc262" Workload="172--237--134--203-k8s-calico--kube--controllers--684d7d59f5--x5wzd-eth0" Dec 12 18:56:50.634106 containerd[1560]: 2025-12-12 18:56:50.587 [INFO][4490] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7ddbdaa55d26dff200fe74efc9d0cb353268a03e1425cd23ba47a971d05bc262" HandleID="k8s-pod-network.7ddbdaa55d26dff200fe74efc9d0cb353268a03e1425cd23ba47a971d05bc262" Workload="172--237--134--203-k8s-calico--kube--controllers--684d7d59f5--x5wzd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002defe0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-237-134-203", "pod":"calico-kube-controllers-684d7d59f5-x5wzd", "timestamp":"2025-12-12 18:56:50.587331176 +0000 UTC"}, Hostname:"172-237-134-203", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:56:50.634106 containerd[1560]: 2025-12-12 18:56:50.587 [INFO][4490] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:56:50.634106 containerd[1560]: 2025-12-12 18:56:50.587 [INFO][4490] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 18:56:50.634106 containerd[1560]: 2025-12-12 18:56:50.587 [INFO][4490] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-237-134-203' Dec 12 18:56:50.634106 containerd[1560]: 2025-12-12 18:56:50.594 [INFO][4490] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7ddbdaa55d26dff200fe74efc9d0cb353268a03e1425cd23ba47a971d05bc262" host="172-237-134-203" Dec 12 18:56:50.634106 containerd[1560]: 2025-12-12 18:56:50.597 [INFO][4490] ipam/ipam.go 394: Looking up existing affinities for host host="172-237-134-203" Dec 12 18:56:50.634106 containerd[1560]: 2025-12-12 18:56:50.602 [INFO][4490] ipam/ipam.go 511: Trying affinity for 192.168.73.192/26 host="172-237-134-203" Dec 12 18:56:50.634106 containerd[1560]: 2025-12-12 18:56:50.603 [INFO][4490] ipam/ipam.go 158: Attempting to load block cidr=192.168.73.192/26 host="172-237-134-203" Dec 12 18:56:50.634106 containerd[1560]: 2025-12-12 18:56:50.605 [INFO][4490] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.73.192/26 host="172-237-134-203" Dec 12 18:56:50.634106 containerd[1560]: 2025-12-12 18:56:50.605 [INFO][4490] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.73.192/26 handle="k8s-pod-network.7ddbdaa55d26dff200fe74efc9d0cb353268a03e1425cd23ba47a971d05bc262" host="172-237-134-203" Dec 12 18:56:50.634106 containerd[1560]: 2025-12-12 18:56:50.606 [INFO][4490] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7ddbdaa55d26dff200fe74efc9d0cb353268a03e1425cd23ba47a971d05bc262 Dec 12 18:56:50.634106 containerd[1560]: 2025-12-12 18:56:50.609 [INFO][4490] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.73.192/26 handle="k8s-pod-network.7ddbdaa55d26dff200fe74efc9d0cb353268a03e1425cd23ba47a971d05bc262" host="172-237-134-203" Dec 12 18:56:50.634106 containerd[1560]: 2025-12-12 18:56:50.613 [INFO][4490] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.73.197/26] block=192.168.73.192/26 handle="k8s-pod-network.7ddbdaa55d26dff200fe74efc9d0cb353268a03e1425cd23ba47a971d05bc262" host="172-237-134-203" Dec 12 18:56:50.634106 containerd[1560]: 2025-12-12 18:56:50.613 [INFO][4490] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.73.197/26] handle="k8s-pod-network.7ddbdaa55d26dff200fe74efc9d0cb353268a03e1425cd23ba47a971d05bc262" host="172-237-134-203" Dec 12 18:56:50.634106 containerd[1560]: 2025-12-12 18:56:50.613 [INFO][4490] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 18:56:50.634106 containerd[1560]: 2025-12-12 18:56:50.613 [INFO][4490] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.73.197/26] IPv6=[] ContainerID="7ddbdaa55d26dff200fe74efc9d0cb353268a03e1425cd23ba47a971d05bc262" HandleID="k8s-pod-network.7ddbdaa55d26dff200fe74efc9d0cb353268a03e1425cd23ba47a971d05bc262" Workload="172--237--134--203-k8s-calico--kube--controllers--684d7d59f5--x5wzd-eth0" Dec 12 18:56:50.634827 containerd[1560]: 2025-12-12 18:56:50.617 [INFO][4463] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7ddbdaa55d26dff200fe74efc9d0cb353268a03e1425cd23ba47a971d05bc262" Namespace="calico-system" Pod="calico-kube-controllers-684d7d59f5-x5wzd" WorkloadEndpoint="172--237--134--203-k8s-calico--kube--controllers--684d7d59f5--x5wzd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--134--203-k8s-calico--kube--controllers--684d7d59f5--x5wzd-eth0", GenerateName:"calico-kube-controllers-684d7d59f5-", Namespace:"calico-system", SelfLink:"", UID:"b9c97883-cc24-4c44-982c-86a4cdeab0b3", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 56, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"684d7d59f5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-134-203", ContainerID:"", Pod:"calico-kube-controllers-684d7d59f5-x5wzd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.73.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4ffac26903c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:56:50.634827 containerd[1560]: 2025-12-12 18:56:50.617 [INFO][4463] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.73.197/32] ContainerID="7ddbdaa55d26dff200fe74efc9d0cb353268a03e1425cd23ba47a971d05bc262" Namespace="calico-system" Pod="calico-kube-controllers-684d7d59f5-x5wzd" WorkloadEndpoint="172--237--134--203-k8s-calico--kube--controllers--684d7d59f5--x5wzd-eth0" Dec 12 18:56:50.634827 containerd[1560]: 2025-12-12 18:56:50.617 [INFO][4463] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4ffac26903c ContainerID="7ddbdaa55d26dff200fe74efc9d0cb353268a03e1425cd23ba47a971d05bc262" Namespace="calico-system" Pod="calico-kube-controllers-684d7d59f5-x5wzd" WorkloadEndpoint="172--237--134--203-k8s-calico--kube--controllers--684d7d59f5--x5wzd-eth0" Dec 12 18:56:50.634827 containerd[1560]: 2025-12-12 18:56:50.623 [INFO][4463] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7ddbdaa55d26dff200fe74efc9d0cb353268a03e1425cd23ba47a971d05bc262" Namespace="calico-system" Pod="calico-kube-controllers-684d7d59f5-x5wzd" WorkloadEndpoint="172--237--134--203-k8s-calico--kube--controllers--684d7d59f5--x5wzd-eth0" Dec 12 18:56:50.634827 containerd[1560]: 2025-12-12 
18:56:50.623 [INFO][4463] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7ddbdaa55d26dff200fe74efc9d0cb353268a03e1425cd23ba47a971d05bc262" Namespace="calico-system" Pod="calico-kube-controllers-684d7d59f5-x5wzd" WorkloadEndpoint="172--237--134--203-k8s-calico--kube--controllers--684d7d59f5--x5wzd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--134--203-k8s-calico--kube--controllers--684d7d59f5--x5wzd-eth0", GenerateName:"calico-kube-controllers-684d7d59f5-", Namespace:"calico-system", SelfLink:"", UID:"b9c97883-cc24-4c44-982c-86a4cdeab0b3", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 56, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"684d7d59f5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-134-203", ContainerID:"7ddbdaa55d26dff200fe74efc9d0cb353268a03e1425cd23ba47a971d05bc262", Pod:"calico-kube-controllers-684d7d59f5-x5wzd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.73.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4ffac26903c", MAC:"56:1d:22:ae:2f:0a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:56:50.634827 containerd[1560]: 2025-12-12 18:56:50.631 [INFO][4463] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7ddbdaa55d26dff200fe74efc9d0cb353268a03e1425cd23ba47a971d05bc262" Namespace="calico-system" Pod="calico-kube-controllers-684d7d59f5-x5wzd" WorkloadEndpoint="172--237--134--203-k8s-calico--kube--controllers--684d7d59f5--x5wzd-eth0" Dec 12 18:56:50.654276 kubelet[2723]: E1212 18:56:50.654168 2723 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:56:50.655442 kubelet[2723]: E1212 18:56:50.654933 2723 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:56:50.662226 containerd[1560]: time="2025-12-12T18:56:50.661906132Z" level=info msg="connecting to shim 7ddbdaa55d26dff200fe74efc9d0cb353268a03e1425cd23ba47a971d05bc262" address="unix:///run/containerd/s/4dc89c8708523a2d06db54ef69fab6376bcee8e57574ea62fccb39d8aa0c2407" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:56:50.701629 systemd[1]: Started cri-containerd-7ddbdaa55d26dff200fe74efc9d0cb353268a03e1425cd23ba47a971d05bc262.scope - libcontainer container 7ddbdaa55d26dff200fe74efc9d0cb353268a03e1425cd23ba47a971d05bc262. 
Dec 12 18:56:50.733867 systemd-networkd[1448]: calib4a621f162c: Link UP Dec 12 18:56:50.735867 systemd-networkd[1448]: calib4a621f162c: Gained carrier Dec 12 18:56:50.748306 containerd[1560]: 2025-12-12 18:56:50.543 [INFO][4473] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 18:56:50.748306 containerd[1560]: 2025-12-12 18:56:50.555 [INFO][4473] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--237--134--203-k8s-calico--apiserver--568b9b9d99--flgkd-eth0 calico-apiserver-568b9b9d99- calico-apiserver c6117e7e-1835-4bc6-967b-fc9429542c7a 867 0 2025-12-12 18:56:26 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:568b9b9d99 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-237-134-203 calico-apiserver-568b9b9d99-flgkd eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib4a621f162c [] [] }} ContainerID="74d21df8f9ca8d02d0f27af8693db721839ef2f88db8315ba429bded396574e4" Namespace="calico-apiserver" Pod="calico-apiserver-568b9b9d99-flgkd" WorkloadEndpoint="172--237--134--203-k8s-calico--apiserver--568b9b9d99--flgkd-" Dec 12 18:56:50.748306 containerd[1560]: 2025-12-12 18:56:50.555 [INFO][4473] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="74d21df8f9ca8d02d0f27af8693db721839ef2f88db8315ba429bded396574e4" Namespace="calico-apiserver" Pod="calico-apiserver-568b9b9d99-flgkd" WorkloadEndpoint="172--237--134--203-k8s-calico--apiserver--568b9b9d99--flgkd-eth0" Dec 12 18:56:50.748306 containerd[1560]: 2025-12-12 18:56:50.589 [INFO][4492] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="74d21df8f9ca8d02d0f27af8693db721839ef2f88db8315ba429bded396574e4" HandleID="k8s-pod-network.74d21df8f9ca8d02d0f27af8693db721839ef2f88db8315ba429bded396574e4" Workload="172--237--134--203-k8s-calico--apiserver--568b9b9d99--flgkd-eth0" Dec 12 18:56:50.748306 containerd[1560]: 2025-12-12 18:56:50.589 [INFO][4492] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="74d21df8f9ca8d02d0f27af8693db721839ef2f88db8315ba429bded396574e4" HandleID="k8s-pod-network.74d21df8f9ca8d02d0f27af8693db721839ef2f88db8315ba429bded396574e4" Workload="172--237--134--203-k8s-calico--apiserver--568b9b9d99--flgkd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cb5b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-237-134-203", "pod":"calico-apiserver-568b9b9d99-flgkd", "timestamp":"2025-12-12 18:56:50.589283068 +0000 UTC"}, Hostname:"172-237-134-203", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:56:50.748306 containerd[1560]: 2025-12-12 18:56:50.589 [INFO][4492] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:56:50.748306 containerd[1560]: 2025-12-12 18:56:50.614 [INFO][4492] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 18:56:50.748306 containerd[1560]: 2025-12-12 18:56:50.614 [INFO][4492] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-237-134-203' Dec 12 18:56:50.748306 containerd[1560]: 2025-12-12 18:56:50.697 [INFO][4492] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.74d21df8f9ca8d02d0f27af8693db721839ef2f88db8315ba429bded396574e4" host="172-237-134-203" Dec 12 18:56:50.748306 containerd[1560]: 2025-12-12 18:56:50.703 [INFO][4492] ipam/ipam.go 394: Looking up existing affinities for host host="172-237-134-203" Dec 12 18:56:50.748306 containerd[1560]: 2025-12-12 18:56:50.707 [INFO][4492] ipam/ipam.go 511: Trying affinity for 192.168.73.192/26 host="172-237-134-203" Dec 12 18:56:50.748306 containerd[1560]: 2025-12-12 18:56:50.709 [INFO][4492] ipam/ipam.go 158: Attempting to load block cidr=192.168.73.192/26 host="172-237-134-203" Dec 12 18:56:50.748306 containerd[1560]: 2025-12-12 18:56:50.713 [INFO][4492] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.73.192/26 host="172-237-134-203" Dec 12 18:56:50.748306 containerd[1560]: 2025-12-12 18:56:50.713 [INFO][4492] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.73.192/26 handle="k8s-pod-network.74d21df8f9ca8d02d0f27af8693db721839ef2f88db8315ba429bded396574e4" host="172-237-134-203" Dec 12 18:56:50.748306 containerd[1560]: 2025-12-12 18:56:50.714 [INFO][4492] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.74d21df8f9ca8d02d0f27af8693db721839ef2f88db8315ba429bded396574e4 Dec 12 18:56:50.748306 containerd[1560]: 2025-12-12 18:56:50.720 [INFO][4492] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.73.192/26 handle="k8s-pod-network.74d21df8f9ca8d02d0f27af8693db721839ef2f88db8315ba429bded396574e4" host="172-237-134-203" Dec 12 18:56:50.748306 containerd[1560]: 2025-12-12 18:56:50.726 [INFO][4492] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.73.198/26] block=192.168.73.192/26 handle="k8s-pod-network.74d21df8f9ca8d02d0f27af8693db721839ef2f88db8315ba429bded396574e4" host="172-237-134-203" Dec 12 18:56:50.748306 containerd[1560]: 2025-12-12 18:56:50.726 [INFO][4492] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.73.198/26] handle="k8s-pod-network.74d21df8f9ca8d02d0f27af8693db721839ef2f88db8315ba429bded396574e4" host="172-237-134-203" Dec 12 18:56:50.748306 containerd[1560]: 2025-12-12 18:56:50.726 [INFO][4492] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 18:56:50.748306 containerd[1560]: 2025-12-12 18:56:50.726 [INFO][4492] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.73.198/26] IPv6=[] ContainerID="74d21df8f9ca8d02d0f27af8693db721839ef2f88db8315ba429bded396574e4" HandleID="k8s-pod-network.74d21df8f9ca8d02d0f27af8693db721839ef2f88db8315ba429bded396574e4" Workload="172--237--134--203-k8s-calico--apiserver--568b9b9d99--flgkd-eth0" Dec 12 18:56:50.749345 containerd[1560]: 2025-12-12 18:56:50.730 [INFO][4473] cni-plugin/k8s.go 418: Populated endpoint ContainerID="74d21df8f9ca8d02d0f27af8693db721839ef2f88db8315ba429bded396574e4" Namespace="calico-apiserver" Pod="calico-apiserver-568b9b9d99-flgkd" WorkloadEndpoint="172--237--134--203-k8s-calico--apiserver--568b9b9d99--flgkd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--134--203-k8s-calico--apiserver--568b9b9d99--flgkd-eth0", GenerateName:"calico-apiserver-568b9b9d99-", Namespace:"calico-apiserver", SelfLink:"", UID:"c6117e7e-1835-4bc6-967b-fc9429542c7a", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 56, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"568b9b9d99", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-134-203", ContainerID:"", Pod:"calico-apiserver-568b9b9d99-flgkd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.73.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib4a621f162c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:56:50.749345 containerd[1560]: 2025-12-12 18:56:50.730 [INFO][4473] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.73.198/32] ContainerID="74d21df8f9ca8d02d0f27af8693db721839ef2f88db8315ba429bded396574e4" Namespace="calico-apiserver" Pod="calico-apiserver-568b9b9d99-flgkd" WorkloadEndpoint="172--237--134--203-k8s-calico--apiserver--568b9b9d99--flgkd-eth0" Dec 12 18:56:50.749345 containerd[1560]: 2025-12-12 18:56:50.730 [INFO][4473] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib4a621f162c ContainerID="74d21df8f9ca8d02d0f27af8693db721839ef2f88db8315ba429bded396574e4" Namespace="calico-apiserver" Pod="calico-apiserver-568b9b9d99-flgkd" WorkloadEndpoint="172--237--134--203-k8s-calico--apiserver--568b9b9d99--flgkd-eth0" Dec 12 18:56:50.749345 containerd[1560]: 2025-12-12 18:56:50.736 [INFO][4473] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="74d21df8f9ca8d02d0f27af8693db721839ef2f88db8315ba429bded396574e4" Namespace="calico-apiserver" Pod="calico-apiserver-568b9b9d99-flgkd" WorkloadEndpoint="172--237--134--203-k8s-calico--apiserver--568b9b9d99--flgkd-eth0" Dec 12 18:56:50.749345 containerd[1560]: 2025-12-12 18:56:50.736 [INFO][4473] cni-plugin/k8s.go 446: Added Mac, interface name, and 
active container ID to endpoint ContainerID="74d21df8f9ca8d02d0f27af8693db721839ef2f88db8315ba429bded396574e4" Namespace="calico-apiserver" Pod="calico-apiserver-568b9b9d99-flgkd" WorkloadEndpoint="172--237--134--203-k8s-calico--apiserver--568b9b9d99--flgkd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--134--203-k8s-calico--apiserver--568b9b9d99--flgkd-eth0", GenerateName:"calico-apiserver-568b9b9d99-", Namespace:"calico-apiserver", SelfLink:"", UID:"c6117e7e-1835-4bc6-967b-fc9429542c7a", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 56, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"568b9b9d99", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-134-203", ContainerID:"74d21df8f9ca8d02d0f27af8693db721839ef2f88db8315ba429bded396574e4", Pod:"calico-apiserver-568b9b9d99-flgkd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.73.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib4a621f162c", MAC:"fa:8e:6c:98:42:c6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:56:50.749345 containerd[1560]: 2025-12-12 18:56:50.745 [INFO][4473] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="74d21df8f9ca8d02d0f27af8693db721839ef2f88db8315ba429bded396574e4" Namespace="calico-apiserver" Pod="calico-apiserver-568b9b9d99-flgkd" WorkloadEndpoint="172--237--134--203-k8s-calico--apiserver--568b9b9d99--flgkd-eth0" Dec 12 18:56:50.776486 containerd[1560]: time="2025-12-12T18:56:50.776433365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-684d7d59f5-x5wzd,Uid:b9c97883-cc24-4c44-982c-86a4cdeab0b3,Namespace:calico-system,Attempt:0,} returns sandbox id \"7ddbdaa55d26dff200fe74efc9d0cb353268a03e1425cd23ba47a971d05bc262\"" Dec 12 18:56:50.779965 containerd[1560]: time="2025-12-12T18:56:50.779375674Z" level=info msg="connecting to shim 74d21df8f9ca8d02d0f27af8693db721839ef2f88db8315ba429bded396574e4" address="unix:///run/containerd/s/c89903909390f7992d7b818e5f2c5e537df03c76c67d05ec9d2ed81f4ca09e79" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:56:50.780476 containerd[1560]: time="2025-12-12T18:56:50.779600558Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 18:56:50.803708 systemd[1]: Started cri-containerd-74d21df8f9ca8d02d0f27af8693db721839ef2f88db8315ba429bded396574e4.scope - libcontainer container 74d21df8f9ca8d02d0f27af8693db721839ef2f88db8315ba429bded396574e4. 
Dec 12 18:56:50.856627 containerd[1560]: time="2025-12-12T18:56:50.856596097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-568b9b9d99-flgkd,Uid:c6117e7e-1835-4bc6-967b-fc9429542c7a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"74d21df8f9ca8d02d0f27af8693db721839ef2f88db8315ba429bded396574e4\"" Dec 12 18:56:50.919815 containerd[1560]: time="2025-12-12T18:56:50.919751967Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:56:50.920605 containerd[1560]: time="2025-12-12T18:56:50.920567325Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 18:56:50.920740 containerd[1560]: time="2025-12-12T18:56:50.920644223Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 12 18:56:50.920837 kubelet[2723]: E1212 18:56:50.920797 2723 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:56:50.920913 kubelet[2723]: E1212 18:56:50.920842 2723 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:56:50.921011 kubelet[2723]: E1212 18:56:50.920988 2723 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-684d7d59f5-x5wzd_calico-system(b9c97883-cc24-4c44-982c-86a4cdeab0b3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 18:56:50.921089 kubelet[2723]: E1212 18:56:50.921025 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-684d7d59f5-x5wzd" podUID="b9c97883-cc24-4c44-982c-86a4cdeab0b3" Dec 12 18:56:50.921772 containerd[1560]: time="2025-12-12T18:56:50.921603647Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:56:51.077502 containerd[1560]: time="2025-12-12T18:56:51.077372665Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:56:51.078475 containerd[1560]: time="2025-12-12T18:56:51.078429987Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:56:51.078572 containerd[1560]: time="2025-12-12T18:56:51.078528567Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 18:56:51.078770 kubelet[2723]: E1212 18:56:51.078734 2723 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:56:51.078841 kubelet[2723]: E1212 18:56:51.078780 2723 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:56:51.079131 kubelet[2723]: E1212 18:56:51.078854 2723 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-568b9b9d99-flgkd_calico-apiserver(c6117e7e-1835-4bc6-967b-fc9429542c7a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:56:51.079131 kubelet[2723]: E1212 18:56:51.078886 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-568b9b9d99-flgkd" podUID="c6117e7e-1835-4bc6-967b-fc9429542c7a" Dec 12 18:56:51.496477 containerd[1560]: time="2025-12-12T18:56:51.496401414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-568b9b9d99-4srkk,Uid:f4996cbf-b45a-424a-8397-b3ebce94b347,Namespace:calico-apiserver,Attempt:0,}" Dec 12 18:56:51.661487 kubelet[2723]: E1212 18:56:51.661404 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-568b9b9d99-flgkd" podUID="c6117e7e-1835-4bc6-967b-fc9429542c7a" Dec 12 18:56:51.666062 kubelet[2723]: E1212 18:56:51.666018 2723 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:56:51.668976 
kubelet[2723]: E1212 18:56:51.668754 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-684d7d59f5-x5wzd" podUID="b9c97883-cc24-4c44-982c-86a4cdeab0b3" Dec 12 18:56:51.692096 systemd-networkd[1448]: calid0e6567756b: Link UP Dec 12 18:56:51.693515 systemd-networkd[1448]: calid0e6567756b: Gained carrier Dec 12 18:56:51.719569 containerd[1560]: 2025-12-12 18:56:51.525 [INFO][4631] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 18:56:51.719569 containerd[1560]: 2025-12-12 18:56:51.543 [INFO][4631] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--237--134--203-k8s-calico--apiserver--568b9b9d99--4srkk-eth0 calico-apiserver-568b9b9d99- calico-apiserver f4996cbf-b45a-424a-8397-b3ebce94b347 868 0 2025-12-12 18:56:26 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:568b9b9d99 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-237-134-203 calico-apiserver-568b9b9d99-4srkk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid0e6567756b [] [] }} ContainerID="df520c8c5a3d2fbf45019833f5125b72f88a17a37e30f9df7af681d25ff2ba9c" Namespace="calico-apiserver" Pod="calico-apiserver-568b9b9d99-4srkk" WorkloadEndpoint="172--237--134--203-k8s-calico--apiserver--568b9b9d99--4srkk-" Dec 12 18:56:51.719569 containerd[1560]: 2025-12-12 18:56:51.543 [INFO][4631] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="df520c8c5a3d2fbf45019833f5125b72f88a17a37e30f9df7af681d25ff2ba9c" Namespace="calico-apiserver" Pod="calico-apiserver-568b9b9d99-4srkk" WorkloadEndpoint="172--237--134--203-k8s-calico--apiserver--568b9b9d99--4srkk-eth0" Dec 12 18:56:51.719569 containerd[1560]: 2025-12-12 18:56:51.610 [INFO][4644] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="df520c8c5a3d2fbf45019833f5125b72f88a17a37e30f9df7af681d25ff2ba9c" HandleID="k8s-pod-network.df520c8c5a3d2fbf45019833f5125b72f88a17a37e30f9df7af681d25ff2ba9c" Workload="172--237--134--203-k8s-calico--apiserver--568b9b9d99--4srkk-eth0" Dec 12 18:56:51.719569 containerd[1560]: 2025-12-12 18:56:51.610 [INFO][4644] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="df520c8c5a3d2fbf45019833f5125b72f88a17a37e30f9df7af681d25ff2ba9c" HandleID="k8s-pod-network.df520c8c5a3d2fbf45019833f5125b72f88a17a37e30f9df7af681d25ff2ba9c" Workload="172--237--134--203-k8s-calico--apiserver--568b9b9d99--4srkk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f5d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-237-134-203", "pod":"calico-apiserver-568b9b9d99-4srkk", "timestamp":"2025-12-12 18:56:51.610307443 +0000 UTC"}, Hostname:"172-237-134-203", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:56:51.719569 containerd[1560]: 2025-12-12 18:56:51.610 [INFO][4644] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:56:51.719569 containerd[1560]: 2025-12-12 18:56:51.610 [INFO][4644] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 18:56:51.719569 containerd[1560]: 2025-12-12 18:56:51.610 [INFO][4644] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-237-134-203' Dec 12 18:56:51.719569 containerd[1560]: 2025-12-12 18:56:51.624 [INFO][4644] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.df520c8c5a3d2fbf45019833f5125b72f88a17a37e30f9df7af681d25ff2ba9c" host="172-237-134-203" Dec 12 18:56:51.719569 containerd[1560]: 2025-12-12 18:56:51.631 [INFO][4644] ipam/ipam.go 394: Looking up existing affinities for host host="172-237-134-203" Dec 12 18:56:51.719569 containerd[1560]: 2025-12-12 18:56:51.635 [INFO][4644] ipam/ipam.go 511: Trying affinity for 192.168.73.192/26 host="172-237-134-203" Dec 12 18:56:51.719569 containerd[1560]: 2025-12-12 18:56:51.637 [INFO][4644] ipam/ipam.go 158: Attempting to load block cidr=192.168.73.192/26 host="172-237-134-203" Dec 12 18:56:51.719569 containerd[1560]: 2025-12-12 18:56:51.646 [INFO][4644] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.73.192/26 host="172-237-134-203" Dec 12 18:56:51.719569 containerd[1560]: 2025-12-12 18:56:51.647 [INFO][4644] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.73.192/26 handle="k8s-pod-network.df520c8c5a3d2fbf45019833f5125b72f88a17a37e30f9df7af681d25ff2ba9c" host="172-237-134-203" Dec 12 18:56:51.719569 containerd[1560]: 2025-12-12 18:56:51.653 [INFO][4644] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.df520c8c5a3d2fbf45019833f5125b72f88a17a37e30f9df7af681d25ff2ba9c Dec 12 18:56:51.719569 containerd[1560]: 2025-12-12 18:56:51.660 [INFO][4644] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.73.192/26 handle="k8s-pod-network.df520c8c5a3d2fbf45019833f5125b72f88a17a37e30f9df7af681d25ff2ba9c" host="172-237-134-203" Dec 12 18:56:51.719569 containerd[1560]: 2025-12-12 18:56:51.675 [INFO][4644] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.73.199/26] block=192.168.73.192/26 handle="k8s-pod-network.df520c8c5a3d2fbf45019833f5125b72f88a17a37e30f9df7af681d25ff2ba9c" host="172-237-134-203" Dec 12 18:56:51.719569 containerd[1560]: 2025-12-12 18:56:51.676 [INFO][4644] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.73.199/26] handle="k8s-pod-network.df520c8c5a3d2fbf45019833f5125b72f88a17a37e30f9df7af681d25ff2ba9c" host="172-237-134-203" Dec 12 18:56:51.719569 containerd[1560]: 2025-12-12 18:56:51.676 [INFO][4644] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 18:56:51.719569 containerd[1560]: 2025-12-12 18:56:51.676 [INFO][4644] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.73.199/26] IPv6=[] ContainerID="df520c8c5a3d2fbf45019833f5125b72f88a17a37e30f9df7af681d25ff2ba9c" HandleID="k8s-pod-network.df520c8c5a3d2fbf45019833f5125b72f88a17a37e30f9df7af681d25ff2ba9c" Workload="172--237--134--203-k8s-calico--apiserver--568b9b9d99--4srkk-eth0" Dec 12 18:56:51.720725 containerd[1560]: 2025-12-12 18:56:51.681 [INFO][4631] cni-plugin/k8s.go 418: Populated endpoint ContainerID="df520c8c5a3d2fbf45019833f5125b72f88a17a37e30f9df7af681d25ff2ba9c" Namespace="calico-apiserver" Pod="calico-apiserver-568b9b9d99-4srkk" WorkloadEndpoint="172--237--134--203-k8s-calico--apiserver--568b9b9d99--4srkk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--134--203-k8s-calico--apiserver--568b9b9d99--4srkk-eth0", GenerateName:"calico-apiserver-568b9b9d99-", Namespace:"calico-apiserver", SelfLink:"", UID:"f4996cbf-b45a-424a-8397-b3ebce94b347", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 56, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"568b9b9d99", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-134-203", ContainerID:"", Pod:"calico-apiserver-568b9b9d99-4srkk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.73.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid0e6567756b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:56:51.720725 containerd[1560]: 2025-12-12 18:56:51.682 [INFO][4631] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.73.199/32] ContainerID="df520c8c5a3d2fbf45019833f5125b72f88a17a37e30f9df7af681d25ff2ba9c" Namespace="calico-apiserver" Pod="calico-apiserver-568b9b9d99-4srkk" WorkloadEndpoint="172--237--134--203-k8s-calico--apiserver--568b9b9d99--4srkk-eth0" Dec 12 18:56:51.720725 containerd[1560]: 2025-12-12 18:56:51.682 [INFO][4631] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid0e6567756b ContainerID="df520c8c5a3d2fbf45019833f5125b72f88a17a37e30f9df7af681d25ff2ba9c" Namespace="calico-apiserver" Pod="calico-apiserver-568b9b9d99-4srkk" WorkloadEndpoint="172--237--134--203-k8s-calico--apiserver--568b9b9d99--4srkk-eth0" Dec 12 18:56:51.720725 containerd[1560]: 2025-12-12 18:56:51.691 [INFO][4631] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="df520c8c5a3d2fbf45019833f5125b72f88a17a37e30f9df7af681d25ff2ba9c" Namespace="calico-apiserver" Pod="calico-apiserver-568b9b9d99-4srkk" WorkloadEndpoint="172--237--134--203-k8s-calico--apiserver--568b9b9d99--4srkk-eth0" Dec 12 18:56:51.720725 containerd[1560]: 2025-12-12 18:56:51.693 [INFO][4631] cni-plugin/k8s.go 446: Added Mac, interface name, and 
active container ID to endpoint ContainerID="df520c8c5a3d2fbf45019833f5125b72f88a17a37e30f9df7af681d25ff2ba9c" Namespace="calico-apiserver" Pod="calico-apiserver-568b9b9d99-4srkk" WorkloadEndpoint="172--237--134--203-k8s-calico--apiserver--568b9b9d99--4srkk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--134--203-k8s-calico--apiserver--568b9b9d99--4srkk-eth0", GenerateName:"calico-apiserver-568b9b9d99-", Namespace:"calico-apiserver", SelfLink:"", UID:"f4996cbf-b45a-424a-8397-b3ebce94b347", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 56, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"568b9b9d99", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-134-203", ContainerID:"df520c8c5a3d2fbf45019833f5125b72f88a17a37e30f9df7af681d25ff2ba9c", Pod:"calico-apiserver-568b9b9d99-4srkk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.73.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid0e6567756b", MAC:"6e:53:74:b4:a3:73", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:56:51.720725 containerd[1560]: 2025-12-12 18:56:51.715 [INFO][4631] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="df520c8c5a3d2fbf45019833f5125b72f88a17a37e30f9df7af681d25ff2ba9c" Namespace="calico-apiserver" Pod="calico-apiserver-568b9b9d99-4srkk" WorkloadEndpoint="172--237--134--203-k8s-calico--apiserver--568b9b9d99--4srkk-eth0" Dec 12 18:56:51.743483 containerd[1560]: time="2025-12-12T18:56:51.743420267Z" level=info msg="connecting to shim df520c8c5a3d2fbf45019833f5125b72f88a17a37e30f9df7af681d25ff2ba9c" address="unix:///run/containerd/s/2e41e9096d059bfc290a7b0680e1d5f183802f8b7f15487d41ce0741bee6f0d2" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:56:51.779606 systemd[1]: Started cri-containerd-df520c8c5a3d2fbf45019833f5125b72f88a17a37e30f9df7af681d25ff2ba9c.scope - libcontainer container df520c8c5a3d2fbf45019833f5125b72f88a17a37e30f9df7af681d25ff2ba9c. 
Dec 12 18:56:51.838030 containerd[1560]: time="2025-12-12T18:56:51.837916450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-568b9b9d99-4srkk,Uid:f4996cbf-b45a-424a-8397-b3ebce94b347,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"df520c8c5a3d2fbf45019833f5125b72f88a17a37e30f9df7af681d25ff2ba9c\"" Dec 12 18:56:51.840410 containerd[1560]: time="2025-12-12T18:56:51.840160026Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:56:51.992010 containerd[1560]: time="2025-12-12T18:56:51.991949579Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:56:51.993488 containerd[1560]: time="2025-12-12T18:56:51.993390811Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:56:51.993781 containerd[1560]: time="2025-12-12T18:56:51.993447577Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 18:56:51.994140 kubelet[2723]: E1212 18:56:51.993900 2723 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:56:51.994140 kubelet[2723]: E1212 18:56:51.993947 2723 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:56:51.994140 kubelet[2723]: E1212 18:56:51.994028 2723 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-568b9b9d99-4srkk_calico-apiserver(f4996cbf-b45a-424a-8397-b3ebce94b347): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:56:51.994140 kubelet[2723]: E1212 18:56:51.994061 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-568b9b9d99-4srkk" podUID="f4996cbf-b45a-424a-8397-b3ebce94b347" Dec 12 18:56:52.256679 systemd-networkd[1448]: cali4ffac26903c: Gained IPv6LL Dec 12 18:56:52.384634 systemd-networkd[1448]: calib4a621f162c: Gained IPv6LL Dec 12 18:56:52.670931 kubelet[2723]: E1212 18:56:52.670887 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-568b9b9d99-flgkd" podUID="c6117e7e-1835-4bc6-967b-fc9429542c7a" Dec 12 18:56:52.671390 kubelet[2723]: E1212 18:56:52.671159 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-568b9b9d99-4srkk" podUID="f4996cbf-b45a-424a-8397-b3ebce94b347" Dec 12 18:56:52.672668 kubelet[2723]: E1212 18:56:52.672140 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-684d7d59f5-x5wzd" podUID="b9c97883-cc24-4c44-982c-86a4cdeab0b3" Dec 12 18:56:53.024658 systemd-networkd[1448]: calid0e6567756b: Gained IPv6LL Dec 12 18:56:53.496487 containerd[1560]: time="2025-12-12T18:56:53.496420176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xcxxg,Uid:7adfcf36-f09b-4802-a329-cb264c08cc5c,Namespace:calico-system,Attempt:0,}" Dec 12 18:56:53.601242 systemd-networkd[1448]: calib2331fd1df4: Link UP Dec 12 18:56:53.602500 systemd-networkd[1448]: calib2331fd1df4: Gained carrier Dec 12 18:56:53.618236 containerd[1560]: 2025-12-12 18:56:53.528 [INFO][4748] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 18:56:53.618236 containerd[1560]: 2025-12-12 18:56:53.538 [INFO][4748] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--237--134--203-k8s-csi--node--driver--xcxxg-eth0 csi-node-driver- calico-system 7adfcf36-f09b-4802-a329-cb264c08cc5c 768 0 2025-12-12 18:56:30 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172-237-134-203 csi-node-driver-xcxxg eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calib2331fd1df4 [] [] }} ContainerID="6a38683a16d1fc428a7ab8a2d52eef8faacb11661510c14a96e13316c2b460f4" Namespace="calico-system" Pod="csi-node-driver-xcxxg" WorkloadEndpoint="172--237--134--203-k8s-csi--node--driver--xcxxg-" Dec 12 18:56:53.618236 containerd[1560]: 2025-12-12 18:56:53.538 [INFO][4748] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6a38683a16d1fc428a7ab8a2d52eef8faacb11661510c14a96e13316c2b460f4" 
Namespace="calico-system" Pod="csi-node-driver-xcxxg" WorkloadEndpoint="172--237--134--203-k8s-csi--node--driver--xcxxg-eth0" Dec 12 18:56:53.618236 containerd[1560]: 2025-12-12 18:56:53.561 [INFO][4760] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6a38683a16d1fc428a7ab8a2d52eef8faacb11661510c14a96e13316c2b460f4" HandleID="k8s-pod-network.6a38683a16d1fc428a7ab8a2d52eef8faacb11661510c14a96e13316c2b460f4" Workload="172--237--134--203-k8s-csi--node--driver--xcxxg-eth0" Dec 12 18:56:53.618236 containerd[1560]: 2025-12-12 18:56:53.561 [INFO][4760] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6a38683a16d1fc428a7ab8a2d52eef8faacb11661510c14a96e13316c2b460f4" HandleID="k8s-pod-network.6a38683a16d1fc428a7ab8a2d52eef8faacb11661510c14a96e13316c2b460f4" Workload="172--237--134--203-k8s-csi--node--driver--xcxxg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cb5b0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-237-134-203", "pod":"csi-node-driver-xcxxg", "timestamp":"2025-12-12 18:56:53.561579927 +0000 UTC"}, Hostname:"172-237-134-203", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:56:53.618236 containerd[1560]: 2025-12-12 18:56:53.561 [INFO][4760] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:56:53.618236 containerd[1560]: 2025-12-12 18:56:53.561 [INFO][4760] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 18:56:53.618236 containerd[1560]: 2025-12-12 18:56:53.561 [INFO][4760] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-237-134-203' Dec 12 18:56:53.618236 containerd[1560]: 2025-12-12 18:56:53.567 [INFO][4760] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6a38683a16d1fc428a7ab8a2d52eef8faacb11661510c14a96e13316c2b460f4" host="172-237-134-203" Dec 12 18:56:53.618236 containerd[1560]: 2025-12-12 18:56:53.570 [INFO][4760] ipam/ipam.go 394: Looking up existing affinities for host host="172-237-134-203" Dec 12 18:56:53.618236 containerd[1560]: 2025-12-12 18:56:53.579 [INFO][4760] ipam/ipam.go 511: Trying affinity for 192.168.73.192/26 host="172-237-134-203" Dec 12 18:56:53.618236 containerd[1560]: 2025-12-12 18:56:53.580 [INFO][4760] ipam/ipam.go 158: Attempting to load block cidr=192.168.73.192/26 host="172-237-134-203" Dec 12 18:56:53.618236 containerd[1560]: 2025-12-12 18:56:53.583 [INFO][4760] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.73.192/26 host="172-237-134-203" Dec 12 18:56:53.618236 containerd[1560]: 2025-12-12 18:56:53.583 [INFO][4760] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.73.192/26 handle="k8s-pod-network.6a38683a16d1fc428a7ab8a2d52eef8faacb11661510c14a96e13316c2b460f4" host="172-237-134-203" Dec 12 18:56:53.618236 containerd[1560]: 2025-12-12 18:56:53.584 [INFO][4760] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6a38683a16d1fc428a7ab8a2d52eef8faacb11661510c14a96e13316c2b460f4 Dec 12 18:56:53.618236 containerd[1560]: 2025-12-12 18:56:53.587 [INFO][4760] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.73.192/26 handle="k8s-pod-network.6a38683a16d1fc428a7ab8a2d52eef8faacb11661510c14a96e13316c2b460f4" host="172-237-134-203" Dec 12 18:56:53.618236 containerd[1560]: 2025-12-12 18:56:53.592 [INFO][4760] ipam/ipam.go 1262: Successfully 
claimed IPs: [192.168.73.200/26] block=192.168.73.192/26 handle="k8s-pod-network.6a38683a16d1fc428a7ab8a2d52eef8faacb11661510c14a96e13316c2b460f4" host="172-237-134-203" Dec 12 18:56:53.618236 containerd[1560]: 2025-12-12 18:56:53.592 [INFO][4760] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.73.200/26] handle="k8s-pod-network.6a38683a16d1fc428a7ab8a2d52eef8faacb11661510c14a96e13316c2b460f4" host="172-237-134-203" Dec 12 18:56:53.618236 containerd[1560]: 2025-12-12 18:56:53.593 [INFO][4760] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 12 18:56:53.618236 containerd[1560]: 2025-12-12 18:56:53.593 [INFO][4760] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.73.200/26] IPv6=[] ContainerID="6a38683a16d1fc428a7ab8a2d52eef8faacb11661510c14a96e13316c2b460f4" HandleID="k8s-pod-network.6a38683a16d1fc428a7ab8a2d52eef8faacb11661510c14a96e13316c2b460f4" Workload="172--237--134--203-k8s-csi--node--driver--xcxxg-eth0" Dec 12 18:56:53.619001 containerd[1560]: 2025-12-12 18:56:53.595 [INFO][4748] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6a38683a16d1fc428a7ab8a2d52eef8faacb11661510c14a96e13316c2b460f4" Namespace="calico-system" Pod="csi-node-driver-xcxxg" WorkloadEndpoint="172--237--134--203-k8s-csi--node--driver--xcxxg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--134--203-k8s-csi--node--driver--xcxxg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7adfcf36-f09b-4802-a329-cb264c08cc5c", ResourceVersion:"768", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 56, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-134-203", ContainerID:"", Pod:"csi-node-driver-xcxxg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.73.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib2331fd1df4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:56:53.619001 containerd[1560]: 2025-12-12 18:56:53.595 [INFO][4748] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.73.200/32] ContainerID="6a38683a16d1fc428a7ab8a2d52eef8faacb11661510c14a96e13316c2b460f4" Namespace="calico-system" Pod="csi-node-driver-xcxxg" WorkloadEndpoint="172--237--134--203-k8s-csi--node--driver--xcxxg-eth0" Dec 12 18:56:53.619001 containerd[1560]: 2025-12-12 18:56:53.595 [INFO][4748] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib2331fd1df4 ContainerID="6a38683a16d1fc428a7ab8a2d52eef8faacb11661510c14a96e13316c2b460f4" Namespace="calico-system" Pod="csi-node-driver-xcxxg" WorkloadEndpoint="172--237--134--203-k8s-csi--node--driver--xcxxg-eth0" Dec 12 18:56:53.619001 containerd[1560]: 
2025-12-12 18:56:53.602 [INFO][4748] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6a38683a16d1fc428a7ab8a2d52eef8faacb11661510c14a96e13316c2b460f4" Namespace="calico-system" Pod="csi-node-driver-xcxxg" WorkloadEndpoint="172--237--134--203-k8s-csi--node--driver--xcxxg-eth0"
Dec 12 18:56:53.619001 containerd[1560]: 2025-12-12 18:56:53.603 [INFO][4748] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6a38683a16d1fc428a7ab8a2d52eef8faacb11661510c14a96e13316c2b460f4" Namespace="calico-system" Pod="csi-node-driver-xcxxg" WorkloadEndpoint="172--237--134--203-k8s-csi--node--driver--xcxxg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--134--203-k8s-csi--node--driver--xcxxg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7adfcf36-f09b-4802-a329-cb264c08cc5c", ResourceVersion:"768", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 56, 30, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-134-203", ContainerID:"6a38683a16d1fc428a7ab8a2d52eef8faacb11661510c14a96e13316c2b460f4", Pod:"csi-node-driver-xcxxg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.73.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib2331fd1df4", MAC:"5e:18:21:4e:37:fc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Dec 12 18:56:53.619001 containerd[1560]: 2025-12-12 18:56:53.613 [INFO][4748] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6a38683a16d1fc428a7ab8a2d52eef8faacb11661510c14a96e13316c2b460f4" Namespace="calico-system" Pod="csi-node-driver-xcxxg" WorkloadEndpoint="172--237--134--203-k8s-csi--node--driver--xcxxg-eth0"
Dec 12 18:56:53.638525 containerd[1560]: time="2025-12-12T18:56:53.638450891Z" level=info msg="connecting to shim 6a38683a16d1fc428a7ab8a2d52eef8faacb11661510c14a96e13316c2b460f4" address="unix:///run/containerd/s/ed1931c28916f7bcca223f00f1b6efd3029b6c2d7a5813748f0429fb3994944c" namespace=k8s.io protocol=ttrpc version=3
Dec 12 18:56:53.674483 kubelet[2723]: E1212 18:56:53.674426 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-568b9b9d99-4srkk" podUID="f4996cbf-b45a-424a-8397-b3ebce94b347"
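The ADD flow above is the Calico IPAM story for this pod in miniature: the plugin takes the host-wide IPAM lock, confirms the node's affinity to the block 192.168.73.192/26, claims 192.168.73.200 from it, and records the address on the WorkloadEndpoint as a /32. A minimal Go sketch, standard library only, using values copied from the log (the sketch itself is illustrative and not part of the log), checks the containment and block-size arithmetic:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Values copied from the log: the node's affine IPAM block and the
	// address Calico assigned to csi-node-driver-xcxxg.
	block := netip.MustParsePrefix("192.168.73.192/26")
	assigned := netip.MustParseAddr("192.168.73.200")

	// The /32 handed to the pod must fall inside the /26 the node owns.
	fmt.Println("block contains assigned IP:", block.Contains(assigned)) // true

	// A /26 spans 2^(32-26) = 64 addresses, the size of the node-affine
	// blocks Calico carves out of the larger IP pool.
	fmt.Println("addresses in block:", 1<<(32-block.Bits()))
}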
podUID="f4996cbf-b45a-424a-8397-b3ebce94b347" Dec 12 18:56:53.676781 systemd[1]: Started cri-containerd-6a38683a16d1fc428a7ab8a2d52eef8faacb11661510c14a96e13316c2b460f4.scope - libcontainer container 6a38683a16d1fc428a7ab8a2d52eef8faacb11661510c14a96e13316c2b460f4. Dec 12 18:56:53.719367 containerd[1560]: time="2025-12-12T18:56:53.719334337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xcxxg,Uid:7adfcf36-f09b-4802-a329-cb264c08cc5c,Namespace:calico-system,Attempt:0,} returns sandbox id \"6a38683a16d1fc428a7ab8a2d52eef8faacb11661510c14a96e13316c2b460f4\"" Dec 12 18:56:53.721698 containerd[1560]: time="2025-12-12T18:56:53.721674902Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 18:56:53.863425 containerd[1560]: time="2025-12-12T18:56:53.863302297Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:56:53.864928 containerd[1560]: time="2025-12-12T18:56:53.864792446Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 18:56:53.864928 containerd[1560]: time="2025-12-12T18:56:53.864827910Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 12 18:56:53.865433 kubelet[2723]: E1212 18:56:53.865193 2723 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:56:53.865433 kubelet[2723]: E1212 18:56:53.865234 2723 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:56:53.865433 kubelet[2723]: E1212 18:56:53.865311 2723 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-xcxxg_calico-system(7adfcf36-f09b-4802-a329-cb264c08cc5c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 18:56:53.867110 containerd[1560]: time="2025-12-12T18:56:53.867072454Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 18:56:54.001006 containerd[1560]: time="2025-12-12T18:56:54.000817229Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:56:54.002019 containerd[1560]: time="2025-12-12T18:56:54.001908377Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 18:56:54.002082 containerd[1560]: time="2025-12-12T18:56:54.002050151Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 12 18:56:54.002537 kubelet[2723]: E1212 18:56:54.002427 2723 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:56:54.002618 kubelet[2723]: E1212 18:56:54.002592 2723 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:56:54.003537 kubelet[2723]: E1212 18:56:54.002792 2723 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-xcxxg_calico-system(7adfcf36-f09b-4802-a329-cb264c08cc5c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 18:56:54.003643 kubelet[2723]: E1212 18:56:54.003591 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xcxxg" podUID="7adfcf36-f09b-4802-a329-cb264c08cc5c" Dec 12 18:56:54.336772 kubelet[2723]: I1212 18:56:54.336718 2723 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 18:56:54.337415 kubelet[2723]: E1212 18:56:54.337192 2723 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:56:54.674498 kubelet[2723]: E1212 18:56:54.674271 2723 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:56:54.676387 kubelet[2723]: E1212 18:56:54.676337 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xcxxg" podUID="7adfcf36-f09b-4802-a329-cb264c08cc5c" Dec 12 18:56:54.881748 systemd-networkd[1448]: calib2331fd1df4: Gained IPv6LL Dec 12 18:56:55.261525 systemd-networkd[1448]: vxlan.calico: Link UP Dec 12 18:56:55.261539 systemd-networkd[1448]: vxlan.calico: Gained carrier Dec 12 18:56:55.679064 kubelet[2723]: E1212 18:56:55.678873 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xcxxg" podUID="7adfcf36-f09b-4802-a329-cb264c08cc5c" Dec 12 18:56:56.352722 systemd-networkd[1448]: vxlan.calico: Gained IPv6LL Dec 12 18:56:58.497990 containerd[1560]: time="2025-12-12T18:56:58.497947439Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 18:56:58.631696 containerd[1560]: time="2025-12-12T18:56:58.631653427Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:56:58.632613 containerd[1560]: time="2025-12-12T18:56:58.632514084Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 18:56:58.632613 containerd[1560]: time="2025-12-12T18:56:58.632582730Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 12 18:56:58.634205 kubelet[2723]: E1212 18:56:58.634128 2723 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:56:58.634205 kubelet[2723]: E1212 18:56:58.634199 2723 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:56:58.634869 kubelet[2723]: E1212 18:56:58.634271 2723 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-86b45df6f8-cmnpq_calico-system(a91e52ae-48ed-4331-916f-65e4537bb807): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 18:56:58.636014 containerd[1560]: time="2025-12-12T18:56:58.635986306Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 18:56:58.766786 containerd[1560]: time="2025-12-12T18:56:58.766667821Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:56:58.767924 containerd[1560]: time="2025-12-12T18:56:58.767863028Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 18:56:58.768030 containerd[1560]: time="2025-12-12T18:56:58.767877800Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 12 18:56:58.768245 kubelet[2723]: E1212 18:56:58.768188 2723 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:56:58.768245 kubelet[2723]: E1212 18:56:58.768240 2723 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:56:58.768365 kubelet[2723]: E1212 18:56:58.768323 2723 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-86b45df6f8-cmnpq_calico-system(a91e52ae-48ed-4331-916f-65e4537bb807): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 18:56:58.768444 kubelet[2723]: E1212 18:56:58.768374 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack 
image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-86b45df6f8-cmnpq" podUID="a91e52ae-48ed-4331-916f-65e4537bb807" Dec 12 18:57:03.495629 containerd[1560]: time="2025-12-12T18:57:03.495548165Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 18:57:03.642234 containerd[1560]: time="2025-12-12T18:57:03.642158505Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:57:03.643427 containerd[1560]: time="2025-12-12T18:57:03.643327961Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 18:57:03.643427 containerd[1560]: time="2025-12-12T18:57:03.643433130Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 12 18:57:03.643731 kubelet[2723]: E1212 18:57:03.643652 2723 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:57:03.644145 kubelet[2723]: E1212 18:57:03.643729 2723 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:57:03.644145 kubelet[2723]: E1212 18:57:03.643819 2723 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-684d7d59f5-x5wzd_calico-system(b9c97883-cc24-4c44-982c-86a4cdeab0b3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 18:57:03.644145 kubelet[2723]: E1212 18:57:03.643871 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-684d7d59f5-x5wzd" podUID="b9c97883-cc24-4c44-982c-86a4cdeab0b3" Dec 12 18:57:04.496569 containerd[1560]: time="2025-12-12T18:57:04.496052088Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:57:04.640175 containerd[1560]: time="2025-12-12T18:57:04.640109069Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 
12 18:57:04.641069 containerd[1560]: time="2025-12-12T18:57:04.641024063Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:57:04.641191 containerd[1560]: time="2025-12-12T18:57:04.641036954Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 18:57:04.641402 kubelet[2723]: E1212 18:57:04.641342 2723 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:57:04.641450 kubelet[2723]: E1212 18:57:04.641399 2723 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:57:04.641653 kubelet[2723]: E1212 18:57:04.641604 2723 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-568b9b9d99-4srkk_calico-apiserver(f4996cbf-b45a-424a-8397-b3ebce94b347): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:57:04.641712 kubelet[2723]: E1212 18:57:04.641672 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-568b9b9d99-4srkk" podUID="f4996cbf-b45a-424a-8397-b3ebce94b347" Dec 12 18:57:04.642291 containerd[1560]: time="2025-12-12T18:57:04.642216010Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:57:04.772910 containerd[1560]: time="2025-12-12T18:57:04.772769877Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:57:04.773956 containerd[1560]: time="2025-12-12T18:57:04.773925901Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:57:04.774015 containerd[1560]: time="2025-12-12T18:57:04.774000187Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 18:57:04.774186 kubelet[2723]: E1212 18:57:04.774152 2723 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:57:04.774517 kubelet[2723]: E1212 18:57:04.774193 2723 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:57:04.774517 kubelet[2723]: E1212 18:57:04.774280 2723 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-568b9b9d99-flgkd_calico-apiserver(c6117e7e-1835-4bc6-967b-fc9429542c7a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:57:04.774517 kubelet[2723]: E1212 18:57:04.774323 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-568b9b9d99-flgkd" podUID="c6117e7e-1835-4bc6-967b-fc9429542c7a" Dec 12 18:57:05.495977 containerd[1560]: time="2025-12-12T18:57:05.495736648Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 18:57:05.623347 containerd[1560]: time="2025-12-12T18:57:05.623204267Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:57:05.624208 containerd[1560]: time="2025-12-12T18:57:05.624178765Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 18:57:05.624647 containerd[1560]: time="2025-12-12T18:57:05.624183435Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 12 18:57:05.624694 kubelet[2723]: E1212 18:57:05.624421 2723 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:57:05.624694 kubelet[2723]: E1212 18:57:05.624487 2723 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:57:05.624694 kubelet[2723]: E1212 18:57:05.624580 2723 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane 
start failed in pod goldmane-7c778bb748-2wmh6_calico-system(7a1fbc12-082d-4cf2-b63a-aaa492c3ca96): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 18:57:05.624694 kubelet[2723]: E1212 18:57:05.624611 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-2wmh6" podUID="7a1fbc12-082d-4cf2-b63a-aaa492c3ca96" Dec 12 18:57:10.497369 containerd[1560]: time="2025-12-12T18:57:10.497263978Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 18:57:10.499169 kubelet[2723]: E1212 18:57:10.498882 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-86b45df6f8-cmnpq" podUID="a91e52ae-48ed-4331-916f-65e4537bb807" Dec 12 18:57:10.656756 containerd[1560]: time="2025-12-12T18:57:10.656693733Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:57:10.657765 containerd[1560]: time="2025-12-12T18:57:10.657738812Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 18:57:10.657840 containerd[1560]: time="2025-12-12T18:57:10.657798106Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 12 18:57:10.657976 kubelet[2723]: E1212 18:57:10.657944 2723 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:57:10.658015 kubelet[2723]: E1212 18:57:10.657980 2723 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:57:10.658536 kubelet[2723]: E1212 18:57:10.658077 2723 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-xcxxg_calico-system(7adfcf36-f09b-4802-a329-cb264c08cc5c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 18:57:10.659792 containerd[1560]: time="2025-12-12T18:57:10.659756023Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 18:57:10.786895 containerd[1560]: time="2025-12-12T18:57:10.786778597Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:57:10.787834 containerd[1560]: time="2025-12-12T18:57:10.787784192Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 18:57:10.787950 containerd[1560]: time="2025-12-12T18:57:10.787821815Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 12 18:57:10.788084 kubelet[2723]: E1212 18:57:10.788052 2723 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:57:10.788184 kubelet[2723]: E1212 18:57:10.788166 2723 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:57:10.788297 kubelet[2723]: E1212 18:57:10.788279 2723 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-xcxxg_calico-system(7adfcf36-f09b-4802-a329-cb264c08cc5c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 18:57:10.788426 kubelet[2723]: E1212 18:57:10.788401 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xcxxg" podUID="7adfcf36-f09b-4802-a329-cb264c08cc5c" Dec 12 18:57:13.701655 kubelet[2723]: E1212 18:57:13.701544 2723 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:57:16.499906 kubelet[2723]: E1212 18:57:16.499859 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-684d7d59f5-x5wzd" podUID="b9c97883-cc24-4c44-982c-86a4cdeab0b3" Dec 12 18:57:16.502054 kubelet[2723]: E1212 18:57:16.500278 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-568b9b9d99-flgkd" podUID="c6117e7e-1835-4bc6-967b-fc9429542c7a" Dec 12 18:57:17.495623 kubelet[2723]: E1212 18:57:17.494908 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-568b9b9d99-4srkk" podUID="f4996cbf-b45a-424a-8397-b3ebce94b347" Dec 12 18:57:17.496629 kubelet[2723]: E1212 18:57:17.496544 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-2wmh6" podUID="7a1fbc12-082d-4cf2-b63a-aaa492c3ca96" Dec 12 18:57:21.498019 containerd[1560]: time="2025-12-12T18:57:21.496562005Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 18:57:21.644555 containerd[1560]: time="2025-12-12T18:57:21.644488585Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:57:21.645901 containerd[1560]: time="2025-12-12T18:57:21.645859713Z" level=info msg="stop pulling image 
Dec 12 18:57:21.645950 containerd[1560]: time="2025-12-12T18:57:21.645926269Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Dec 12 18:57:21.646267 kubelet[2723]: E1212 18:57:21.646174 2723 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Dec 12 18:57:21.646625 kubelet[2723]: E1212 18:57:21.646275 2723 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Dec 12 18:57:21.646625 kubelet[2723]: E1212 18:57:21.646492 2723 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-86b45df6f8-cmnpq_calico-system(a91e52ae-48ed-4331-916f-65e4537bb807): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Dec 12 18:57:21.648251 containerd[1560]: time="2025-12-12T18:57:21.648226434Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Dec 12 18:57:21.781936 containerd[1560]: time="2025-12-12T18:57:21.781805336Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 18:57:21.783922 containerd[1560]: time="2025-12-12T18:57:21.782857256Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Dec 12 18:57:21.783922 containerd[1560]: time="2025-12-12T18:57:21.782954469Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Dec 12 18:57:21.784009 kubelet[2723]: E1212 18:57:21.783629 2723 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Dec 12 18:57:21.784009 kubelet[2723]: E1212 18:57:21.783668 2723 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:57:21.784009 kubelet[2723]: E1212 18:57:21.783731 2723 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-86b45df6f8-cmnpq_calico-system(a91e52ae-48ed-4331-916f-65e4537bb807): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 18:57:21.784102 kubelet[2723]: E1212 18:57:21.783775 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-86b45df6f8-cmnpq" podUID="a91e52ae-48ed-4331-916f-65e4537bb807" Dec 12 18:57:24.499278 kubelet[2723]: E1212 18:57:24.498751 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xcxxg" podUID="7adfcf36-f09b-4802-a329-cb264c08cc5c" Dec 12 18:57:27.496032 containerd[1560]: time="2025-12-12T18:57:27.495952475Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:57:27.507549 kubelet[2723]: E1212 18:57:27.507114 2723 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:57:27.621751 containerd[1560]: time="2025-12-12T18:57:27.621682396Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:57:27.622826 containerd[1560]: time="2025-12-12T18:57:27.622762535Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:57:27.622909 containerd[1560]: 
time="2025-12-12T18:57:27.622844071Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 18:57:27.623060 kubelet[2723]: E1212 18:57:27.623017 2723 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:57:27.623107 kubelet[2723]: E1212 18:57:27.623078 2723 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:57:27.623204 kubelet[2723]: E1212 18:57:27.623170 2723 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-568b9b9d99-flgkd_calico-apiserver(c6117e7e-1835-4bc6-967b-fc9429542c7a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:57:27.623559 kubelet[2723]: E1212 18:57:27.623250 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-568b9b9d99-flgkd" podUID="c6117e7e-1835-4bc6-967b-fc9429542c7a" Dec 12 18:57:28.498129 containerd[1560]: time="2025-12-12T18:57:28.497959123Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:57:28.637774 containerd[1560]: time="2025-12-12T18:57:28.637707390Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:57:28.638935 containerd[1560]: time="2025-12-12T18:57:28.638891157Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:57:28.638993 containerd[1560]: time="2025-12-12T18:57:28.638904997Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 18:57:28.639187 kubelet[2723]: E1212 18:57:28.639150 2723 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:57:28.639553 kubelet[2723]: E1212 18:57:28.639194 2723 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:57:28.639553 kubelet[2723]: E1212 18:57:28.639270 2723 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-568b9b9d99-4srkk_calico-apiserver(f4996cbf-b45a-424a-8397-b3ebce94b347): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:57:28.639553 kubelet[2723]: E1212 18:57:28.639303 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-568b9b9d99-4srkk" podUID="f4996cbf-b45a-424a-8397-b3ebce94b347" Dec 12 18:57:30.499443 containerd[1560]: time="2025-12-12T18:57:30.499245652Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 18:57:30.645551 containerd[1560]: time="2025-12-12T18:57:30.645503606Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:57:30.646397 containerd[1560]: time="2025-12-12T18:57:30.646344214Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 18:57:30.646397 containerd[1560]: time="2025-12-12T18:57:30.646371942Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 12 18:57:30.646706 kubelet[2723]: E1212 18:57:30.646572 2723 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:57:30.646706 kubelet[2723]: E1212 18:57:30.646613 2723 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:57:30.646706 kubelet[2723]: E1212 18:57:30.646691 2723 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-2wmh6_calico-system(7a1fbc12-082d-4cf2-b63a-aaa492c3ca96): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 18:57:30.647337 kubelet[2723]: E1212 
18:57:30.646721 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-2wmh6" podUID="7a1fbc12-082d-4cf2-b63a-aaa492c3ca96" Dec 12 18:57:31.495546 containerd[1560]: time="2025-12-12T18:57:31.495499378Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 18:57:31.631118 containerd[1560]: time="2025-12-12T18:57:31.631077459Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:57:31.631893 containerd[1560]: time="2025-12-12T18:57:31.631846291Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 18:57:31.631893 containerd[1560]: time="2025-12-12T18:57:31.631870020Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 12 18:57:31.632045 kubelet[2723]: E1212 18:57:31.632013 2723 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:57:31.632109 kubelet[2723]: E1212 18:57:31.632054 2723 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:57:31.632140 kubelet[2723]: E1212 18:57:31.632119 2723 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-684d7d59f5-x5wzd_calico-system(b9c97883-cc24-4c44-982c-86a4cdeab0b3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 18:57:31.632196 kubelet[2723]: E1212 18:57:31.632159 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-684d7d59f5-x5wzd" podUID="b9c97883-cc24-4c44-982c-86a4cdeab0b3" Dec 12 18:57:32.494493 kubelet[2723]: E1212 18:57:32.493866 2723 dns.go:154] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Dec 12 18:57:33.498210 kubelet[2723]: E1212 18:57:33.498085 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-86b45df6f8-cmnpq" podUID="a91e52ae-48ed-4331-916f-65e4537bb807"
Dec 12 18:57:34.494079 kubelet[2723]: E1212 18:57:34.494039 2723 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Dec 12 18:57:37.497069 containerd[1560]: time="2025-12-12T18:57:37.497026457Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Dec 12 18:57:37.628357 containerd[1560]: time="2025-12-12T18:57:37.628311843Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 18:57:37.629417 containerd[1560]: time="2025-12-12T18:57:37.629367290Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Dec 12 18:57:37.629516 containerd[1560]: time="2025-12-12T18:57:37.629480877Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Dec 12 18:57:37.629722 kubelet[2723]: E1212 18:57:37.629673 2723 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Dec 12 18:57:37.630793 kubelet[2723]: E1212 18:57:37.630109 2723 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Dec 12 18:57:37.630793 kubelet[2723]: E1212 18:57:37.630235 2723 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-xcxxg_calico-system(7adfcf36-f09b-4802-a329-cb264c08cc5c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Dec 12 18:57:37.631715 containerd[1560]: time="2025-12-12T18:57:37.631694229Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Dec 12 18:57:37.769576 containerd[1560]: time="2025-12-12T18:57:37.769310246Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 18:57:37.770374 containerd[1560]: time="2025-12-12T18:57:37.770164688Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Dec 12 18:57:37.770374 containerd[1560]: time="2025-12-12T18:57:37.770239406Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Dec 12 18:57:37.770810 kubelet[2723]: E1212 18:57:37.770776 2723 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Dec 12 18:57:37.770922 kubelet[2723]: E1212 18:57:37.770898 2723 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Dec 12 18:57:37.771138 kubelet[2723]: E1212 18:57:37.771084 2723 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-xcxxg_calico-system(7adfcf36-f09b-4802-a329-cb264c08cc5c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Dec 12 18:57:37.771758 kubelet[2723]: E1212 18:57:37.771323 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xcxxg" podUID="7adfcf36-f09b-4802-a329-cb264c08cc5c"
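Every failed pull in this log follows the chain visible above: containerd reports PullImage, ghcr.io answers "fetch failed after status: 404 Not Found", containerd surfaces rpc error: code = NotFound over the CRI, and kubelet relays it through log.go, kuberuntime_image.go and kuberuntime_manager.go before pod_workers.go records ErrImagePull (and, on later retries, ImagePullBackOff). A minimal sketch for summarizing such a dump, not part of the log itself; it assumes the journal text arrives on stdin with one entry per line:

#!/usr/bin/env python3
"""Count failed image pulls per reference in a journal dump."""
import re
import sys
from collections import Counter

# In the raw text the quotes around the reference are backslash-escaped:
#   msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed"
PULL_FAILED = re.compile(r'msg="PullImage \\"([^"\\]+)\\" failed"')

failures = Counter()
for line in sys.stdin:
    if (m := PULL_FAILED.search(line)):
        failures[m.group(1)] += 1

for image, count in failures.most_common():
    print(f"{count:4d}  {image}")

Fed something like journalctl -u containerd --no-pager (unit name assumed), it prints one line per failing reference with its failure count.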
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-568b9b9d99-flgkd" podUID="c6117e7e-1835-4bc6-967b-fc9429542c7a" Dec 12 18:57:40.496610 kubelet[2723]: E1212 18:57:40.496444 2723 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:57:42.496404 kubelet[2723]: E1212 18:57:42.496284 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-568b9b9d99-4srkk" podUID="f4996cbf-b45a-424a-8397-b3ebce94b347" Dec 12 18:57:44.499400 kubelet[2723]: E1212 18:57:44.499166 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-684d7d59f5-x5wzd" podUID="b9c97883-cc24-4c44-982c-86a4cdeab0b3" Dec 12 18:57:44.502656 kubelet[2723]: E1212 18:57:44.502568 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-2wmh6" podUID="7a1fbc12-082d-4cf2-b63a-aaa492c3ca96" Dec 12 18:57:46.501121 kubelet[2723]: E1212 18:57:46.501065 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found\"]" pod="calico-system/whisker-86b45df6f8-cmnpq" podUID="a91e52ae-48ed-4331-916f-65e4537bb807" Dec 12 18:57:48.497099 kubelet[2723]: E1212 18:57:48.496979 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xcxxg" podUID="7adfcf36-f09b-4802-a329-cb264c08cc5c" Dec 12 18:57:52.496211 kubelet[2723]: E1212 18:57:52.496170 2723 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:57:52.498020 kubelet[2723]: E1212 18:57:52.497966 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-568b9b9d99-flgkd" podUID="c6117e7e-1835-4bc6-967b-fc9429542c7a" Dec 12 18:57:55.494470 kubelet[2723]: E1212 18:57:55.494396 2723 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:57:57.496030 kubelet[2723]: E1212 18:57:57.495721 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-568b9b9d99-4srkk" podUID="f4996cbf-b45a-424a-8397-b3ebce94b347" Dec 12 18:57:57.496030 kubelet[2723]: E1212 18:57:57.495778 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-2wmh6" 
podUID="7a1fbc12-082d-4cf2-b63a-aaa492c3ca96" Dec 12 18:57:57.499112 kubelet[2723]: E1212 18:57:57.498955 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-86b45df6f8-cmnpq" podUID="a91e52ae-48ed-4331-916f-65e4537bb807" Dec 12 18:57:59.496427 kubelet[2723]: E1212 18:57:59.496021 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-684d7d59f5-x5wzd" podUID="b9c97883-cc24-4c44-982c-86a4cdeab0b3" Dec 12 18:58:00.500844 kubelet[2723]: E1212 18:58:00.500379 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xcxxg" podUID="7adfcf36-f09b-4802-a329-cb264c08cc5c" Dec 12 18:58:06.498928 kubelet[2723]: E1212 18:58:06.497835 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-568b9b9d99-flgkd" podUID="c6117e7e-1835-4bc6-967b-fc9429542c7a" Dec 12 18:58:08.500087 kubelet[2723]: E1212 18:58:08.496769 2723 
Dec 12 18:58:08.505314 containerd[1560]: time="2025-12-12T18:58:08.505285838Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Dec 12 18:58:08.661740 containerd[1560]: time="2025-12-12T18:58:08.661690308Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 18:58:08.662556 containerd[1560]: time="2025-12-12T18:58:08.662523995Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Dec 12 18:58:08.662871 containerd[1560]: time="2025-12-12T18:58:08.662600666Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Dec 12 18:58:08.663666 kubelet[2723]: E1212 18:58:08.663602 2723 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Dec 12 18:58:08.663666 kubelet[2723]: E1212 18:58:08.663642 2723 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Dec 12 18:58:08.663962 kubelet[2723]: E1212 18:58:08.663806 2723 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-86b45df6f8-cmnpq_calico-system(a91e52ae-48ed-4331-916f-65e4537bb807): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Dec 12 18:58:08.666229 containerd[1560]: time="2025-12-12T18:58:08.666125330Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Dec 12 18:58:08.805978 containerd[1560]: time="2025-12-12T18:58:08.805841612Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 18:58:08.806890 containerd[1560]: time="2025-12-12T18:58:08.806853733Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Dec 12 18:58:08.807003 containerd[1560]: time="2025-12-12T18:58:08.806929325Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Dec 12 18:58:08.807265 kubelet[2723]: E1212 18:58:08.807201 2723 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Dec 12 18:58:08.807265 kubelet[2723]: E1212 18:58:08.807246 2723 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Dec 12 18:58:08.807487 kubelet[2723]: E1212 18:58:08.807446 2723 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-86b45df6f8-cmnpq_calico-system(a91e52ae-48ed-4331-916f-65e4537bb807): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Dec 12 18:58:08.808019 kubelet[2723]: E1212 18:58:08.807973 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-86b45df6f8-cmnpq" podUID="a91e52ae-48ed-4331-916f-65e4537bb807"
Dec 12 18:58:12.496599 containerd[1560]: time="2025-12-12T18:58:12.496309470Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Dec 12 18:58:12.680956 containerd[1560]: time="2025-12-12T18:58:12.680897499Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 18:58:12.681833 containerd[1560]: time="2025-12-12T18:58:12.681781600Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Dec 12 18:58:12.681955 containerd[1560]: time="2025-12-12T18:58:12.681841751Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Dec 12 18:58:12.682003 kubelet[2723]: E1212 18:58:12.681943 2723 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 12 18:58:12.682003 kubelet[2723]: E1212 18:58:12.681976 2723 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 12 18:58:12.682442 kubelet[2723]: E1212 18:58:12.682127 2723 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-568b9b9d99-4srkk_calico-apiserver(f4996cbf-b45a-424a-8397-b3ebce94b347): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Dec 12 18:58:12.683398 kubelet[2723]: E1212 18:58:12.682515 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-568b9b9d99-4srkk" podUID="f4996cbf-b45a-424a-8397-b3ebce94b347"
Dec 12 18:58:12.683520 containerd[1560]: time="2025-12-12T18:58:12.682722922Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Dec 12 18:58:12.818169 containerd[1560]: time="2025-12-12T18:58:12.818027184Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 18:58:12.819812 containerd[1560]: time="2025-12-12T18:58:12.819704924Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Dec 12 18:58:12.819812 containerd[1560]: time="2025-12-12T18:58:12.819805396Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Dec 12 18:58:12.820097 kubelet[2723]: E1212 18:58:12.820039 2723 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Dec 12 18:58:12.820144 kubelet[2723]: E1212 18:58:12.820101 2723 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Dec 12 18:58:12.821478 kubelet[2723]: E1212 18:58:12.820168 2723 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-2wmh6_calico-system(7a1fbc12-082d-4cf2-b63a-aaa492c3ca96): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Dec 12 18:58:12.821478 kubelet[2723]: E1212 18:58:12.820198 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-2wmh6" podUID="7a1fbc12-082d-4cf2-b63a-aaa492c3ca96"
Dec 12 18:58:13.495963 kubelet[2723]: E1212 18:58:13.495901 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xcxxg" podUID="7adfcf36-f09b-4802-a329-cb264c08cc5c"
Dec 12 18:58:14.496814 containerd[1560]: time="2025-12-12T18:58:14.496099458Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Dec 12 18:58:14.631384 containerd[1560]: time="2025-12-12T18:58:14.631341215Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 18:58:14.632415 containerd[1560]: time="2025-12-12T18:58:14.632378491Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Dec 12 18:58:14.632700 containerd[1560]: time="2025-12-12T18:58:14.632457293Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Dec 12 18:58:14.632905 kubelet[2723]: E1212 18:58:14.632776 2723 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Dec 12 18:58:14.633212 kubelet[2723]: E1212 18:58:14.632943 2723 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Dec 12 18:58:14.633212 kubelet[2723]: E1212 18:58:14.633029 2723 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-684d7d59f5-x5wzd_calico-system(b9c97883-cc24-4c44-982c-86a4cdeab0b3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Dec 12 18:58:14.633212 kubelet[2723]: E1212 18:58:14.633062 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-684d7d59f5-x5wzd" podUID="b9c97883-cc24-4c44-982c-86a4cdeab0b3"
Dec 12 18:58:18.500561 containerd[1560]: time="2025-12-12T18:58:18.500283833Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Dec 12 18:58:18.642381 containerd[1560]: time="2025-12-12T18:58:18.642317618Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 18:58:18.643397 containerd[1560]: time="2025-12-12T18:58:18.643363167Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Dec 12 18:58:18.643497 containerd[1560]: time="2025-12-12T18:58:18.643437979Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Dec 12 18:58:18.643754 kubelet[2723]: E1212 18:58:18.643679 2723 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 12 18:58:18.644426 kubelet[2723]: E1212 18:58:18.644216 2723 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 12 18:58:18.644426 kubelet[2723]: E1212 18:58:18.644330 2723 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-568b9b9d99-flgkd_calico-apiserver(c6117e7e-1835-4bc6-967b-fc9429542c7a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Dec 12 18:58:18.644426 kubelet[2723]: E1212 18:58:18.644386 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-568b9b9d99-flgkd" podUID="c6117e7e-1835-4bc6-967b-fc9429542c7a"
Dec 12 18:58:21.499523 kubelet[2723]: E1212 18:58:21.498141 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-86b45df6f8-cmnpq" podUID="a91e52ae-48ed-4331-916f-65e4537bb807"
Dec 12 18:58:23.495318 kubelet[2723]: E1212 18:58:23.495257 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-2wmh6" podUID="7a1fbc12-082d-4cf2-b63a-aaa492c3ca96"
Dec 12 18:58:24.498908 containerd[1560]: time="2025-12-12T18:58:24.498856982Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Dec 12 18:58:24.632617 containerd[1560]: time="2025-12-12T18:58:24.632565836Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 18:58:24.634246 containerd[1560]: time="2025-12-12T18:58:24.634139225Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Dec 12 18:58:24.634246 containerd[1560]: time="2025-12-12T18:58:24.634207637Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Dec 12 18:58:24.634613 kubelet[2723]: E1212 18:58:24.634526 2723 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Dec 12 18:58:24.634613 kubelet[2723]: E1212 18:58:24.634608 2723 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Dec 12 18:58:24.634960 kubelet[2723]: E1212 18:58:24.634829 2723 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-xcxxg_calico-system(7adfcf36-f09b-4802-a329-cb264c08cc5c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Dec 12 18:58:24.637550 containerd[1560]: time="2025-12-12T18:58:24.637511690Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Dec 12 18:58:24.768096 containerd[1560]: time="2025-12-12T18:58:24.767781868Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 18:58:24.768944 containerd[1560]: time="2025-12-12T18:58:24.768872892Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Dec 12 18:58:24.768944 containerd[1560]: time="2025-12-12T18:58:24.768953655Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Dec 12 18:58:24.769203 kubelet[2723]: E1212 18:58:24.769156 2723 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Dec 12 18:58:24.769300 kubelet[2723]: E1212 18:58:24.769259 2723 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Dec 12 18:58:24.769620 kubelet[2723]: E1212 18:58:24.769369 2723 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-xcxxg_calico-system(7adfcf36-f09b-4802-a329-cb264c08cc5c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Dec 12 18:58:24.769620 kubelet[2723]: E1212 18:58:24.769443 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xcxxg" podUID="7adfcf36-f09b-4802-a329-cb264c08cc5c"
Dec 12 18:58:27.495859 kubelet[2723]: E1212 18:58:27.495808 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-568b9b9d99-4srkk" podUID="f4996cbf-b45a-424a-8397-b3ebce94b347"
Dec 12 18:58:27.498492 kubelet[2723]: E1212 18:58:27.496692 2723 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Dec 12 18:58:28.498565 kubelet[2723]: E1212 18:58:28.498431 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-684d7d59f5-x5wzd" podUID="b9c97883-cc24-4c44-982c-86a4cdeab0b3"
Dec 12 18:58:32.500323 kubelet[2723]: E1212 18:58:32.500263 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-86b45df6f8-cmnpq" podUID="a91e52ae-48ed-4331-916f-65e4537bb807"
Dec 12 18:58:33.497347 kubelet[2723]: E1212 18:58:33.497156 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-568b9b9d99-flgkd" podUID="c6117e7e-1835-4bc6-967b-fc9429542c7a"
Dec 12 18:58:34.499233 kubelet[2723]: E1212 18:58:34.498306 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-2wmh6" podUID="7a1fbc12-082d-4cf2-b63a-aaa492c3ca96"
Dec 12 18:58:35.498246 kubelet[2723]: E1212 18:58:35.498195 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xcxxg" podUID="7adfcf36-f09b-4802-a329-cb264c08cc5c"
Dec 12 18:58:38.499510 kubelet[2723]: E1212 18:58:38.499127 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-568b9b9d99-4srkk" podUID="f4996cbf-b45a-424a-8397-b3ebce94b347"
Dec 12 18:58:43.495921 kubelet[2723]: E1212 18:58:43.495883 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-684d7d59f5-x5wzd" podUID="b9c97883-cc24-4c44-982c-86a4cdeab0b3"
Dec 12 18:58:44.496259 kubelet[2723]: E1212 18:58:44.494801 2723 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Dec 12 18:58:46.503487 kubelet[2723]: E1212 18:58:46.503346 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-568b9b9d99-flgkd" podUID="c6117e7e-1835-4bc6-967b-fc9429542c7a"
Dec 12 18:58:46.503487 kubelet[2723]: E1212 18:58:46.503427 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-86b45df6f8-cmnpq" podUID="a91e52ae-48ed-4331-916f-65e4537bb807"
Dec 12 18:58:48.497026 kubelet[2723]: E1212 18:58:48.496632 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-2wmh6" podUID="7a1fbc12-082d-4cf2-b63a-aaa492c3ca96"
Dec 12 18:58:49.497014 kubelet[2723]: E1212 18:58:49.496942 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-568b9b9d99-4srkk" podUID="f4996cbf-b45a-424a-8397-b3ebce94b347"
Dec 12 18:58:49.497260 kubelet[2723]: E1212 18:58:49.497089 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xcxxg" podUID="7adfcf36-f09b-4802-a329-cb264c08cc5c"
Dec 12 18:58:51.271025 systemd[1]: Started sshd@7-172.237.134.203:22-139.178.68.195:43280.service - OpenSSH per-connection server daemon (139.178.68.195:43280).
Dec 12 18:58:51.636234 sshd[5150]: Accepted publickey for core from 139.178.68.195 port 43280 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA
Dec 12 18:58:51.638329 sshd-session[5150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:58:51.645407 systemd-logind[1533]: New session 8 of user core.
Dec 12 18:58:51.653162 systemd[1]: Started session-8.scope - Session 8 of User core.
Dec 12 18:58:52.009416 sshd[5153]: Connection closed by 139.178.68.195 port 43280
Dec 12 18:58:52.010687 sshd-session[5150]: pam_unix(sshd:session): session closed for user core
Dec 12 18:58:52.015754 systemd-logind[1533]: Session 8 logged out. Waiting for processes to exit.
Dec 12 18:58:52.016603 systemd[1]: sshd@7-172.237.134.203:22-139.178.68.195:43280.service: Deactivated successfully.
Dec 12 18:58:52.020607 systemd[1]: session-8.scope: Deactivated successfully.
Dec 12 18:58:52.022632 systemd-logind[1533]: Removed session 8.
Dec 12 18:58:55.495439 kubelet[2723]: E1212 18:58:55.495379 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-684d7d59f5-x5wzd" podUID="b9c97883-cc24-4c44-982c-86a4cdeab0b3"
Dec 12 18:58:56.495105 kubelet[2723]: E1212 18:58:56.495052 2723 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Dec 12 18:58:57.074724 systemd[1]: Started sshd@8-172.237.134.203:22-139.178.68.195:43284.service - OpenSSH per-connection server daemon (139.178.68.195:43284).
Dec 12 18:58:57.415036 sshd[5166]: Accepted publickey for core from 139.178.68.195 port 43284 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA
Dec 12 18:58:57.417001 sshd-session[5166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:58:57.423034 systemd-logind[1533]: New session 9 of user core.
Dec 12 18:58:57.430680 systemd[1]: Started session-9.scope - Session 9 of User core.
Dec 12 18:58:57.497529 kubelet[2723]: E1212 18:58:57.497438 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-568b9b9d99-flgkd" podUID="c6117e7e-1835-4bc6-967b-fc9429542c7a"
Dec 12 18:58:57.771507 sshd[5169]: Connection closed by 139.178.68.195 port 43284
Dec 12 18:58:57.772662 sshd-session[5166]: pam_unix(sshd:session): session closed for user core
Dec 12 18:58:57.778131 systemd[1]: sshd@8-172.237.134.203:22-139.178.68.195:43284.service: Deactivated successfully.
Dec 12 18:58:57.783245 systemd[1]: session-9.scope: Deactivated successfully.
Dec 12 18:58:57.784494 systemd-logind[1533]: Session 9 logged out. Waiting for processes to exit.
Dec 12 18:58:57.788353 systemd-logind[1533]: Removed session 9.
Dec 12 18:58:58.497045 kubelet[2723]: E1212 18:58:58.496613 2723 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Dec 12 18:58:58.501526 kubelet[2723]: E1212 18:58:58.501274 2723 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Dec 12 18:58:59.496217 kubelet[2723]: E1212 18:58:59.496162 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-86b45df6f8-cmnpq" podUID="a91e52ae-48ed-4331-916f-65e4537bb807"
Dec 12 18:59:02.498428 kubelet[2723]: E1212 18:59:02.498382 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xcxxg" podUID="7adfcf36-f09b-4802-a329-cb264c08cc5c"
Dec 12 18:59:02.834031 systemd[1]: Started sshd@9-172.237.134.203:22-139.178.68.195:41022.service - OpenSSH per-connection server daemon (139.178.68.195:41022).
Dec 12 18:59:03.176069 sshd[5182]: Accepted publickey for core from 139.178.68.195 port 41022 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA
Dec 12 18:59:03.177605 sshd-session[5182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:59:03.184925 systemd-logind[1533]: New session 10 of user core.
Dec 12 18:59:03.192644 systemd[1]: Started session-10.scope - Session 10 of User core.
Dec 12 18:59:03.495067 kubelet[2723]: E1212 18:59:03.494900 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-2wmh6" podUID="7a1fbc12-082d-4cf2-b63a-aaa492c3ca96"
Dec 12 18:59:03.496370 sshd[5185]: Connection closed by 139.178.68.195 port 41022
Dec 12 18:59:03.497156 sshd-session[5182]: pam_unix(sshd:session): session closed for user core
Dec 12 18:59:03.503947 systemd[1]: sshd@9-172.237.134.203:22-139.178.68.195:41022.service: Deactivated successfully.
Dec 12 18:59:03.506765 systemd[1]: session-10.scope: Deactivated successfully.
Dec 12 18:59:03.508349 systemd-logind[1533]: Session 10 logged out. Waiting for processes to exit.
Dec 12 18:59:03.510683 systemd-logind[1533]: Removed session 10.
Dec 12 18:59:04.496449 kubelet[2723]: E1212 18:59:04.496095 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-568b9b9d99-4srkk" podUID="f4996cbf-b45a-424a-8397-b3ebce94b347"
Dec 12 18:59:07.494580 kubelet[2723]: E1212 18:59:07.494528 2723 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Dec 12 18:59:08.562669 systemd[1]: Started sshd@10-172.237.134.203:22-139.178.68.195:41030.service - OpenSSH per-connection server daemon (139.178.68.195:41030).
Dec 12 18:59:08.904037 sshd[5198]: Accepted publickey for core from 139.178.68.195 port 41030 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA
Dec 12 18:59:08.905085 sshd-session[5198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:59:08.917535 systemd-logind[1533]: New session 11 of user core.
Dec 12 18:59:08.926582 systemd[1]: Started session-11.scope - Session 11 of User core.
Dec 12 18:59:09.211647 sshd[5201]: Connection closed by 139.178.68.195 port 41030
Dec 12 18:59:09.212690 sshd-session[5198]: pam_unix(sshd:session): session closed for user core
Dec 12 18:59:09.217633 systemd-logind[1533]: Session 11 logged out. Waiting for processes to exit.
Dec 12 18:59:09.219851 systemd[1]: sshd@10-172.237.134.203:22-139.178.68.195:41030.service: Deactivated successfully.
Dec 12 18:59:09.223110 systemd[1]: session-11.scope: Deactivated successfully.
Dec 12 18:59:09.227858 systemd-logind[1533]: Removed session 11.
Dec 12 18:59:09.272209 systemd[1]: Started sshd@11-172.237.134.203:22-139.178.68.195:41038.service - OpenSSH per-connection server daemon (139.178.68.195:41038).
Dec 12 18:59:09.497391 kubelet[2723]: E1212 18:59:09.497075 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-684d7d59f5-x5wzd" podUID="b9c97883-cc24-4c44-982c-86a4cdeab0b3"
Dec 12 18:59:09.604192 sshd[5214]: Accepted publickey for core from 139.178.68.195 port 41038 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA
Dec 12 18:59:09.605244 sshd-session[5214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:59:09.610838 systemd-logind[1533]: New session 12 of user core.
Dec 12 18:59:09.617722 systemd[1]: Started session-12.scope - Session 12 of User core.
Dec 12 18:59:09.950915 sshd[5217]: Connection closed by 139.178.68.195 port 41038
Dec 12 18:59:09.951673 sshd-session[5214]: pam_unix(sshd:session): session closed for user core
Dec 12 18:59:09.959732 systemd-logind[1533]: Session 12 logged out. Waiting for processes to exit.
Dec 12 18:59:09.960541 systemd[1]: sshd@11-172.237.134.203:22-139.178.68.195:41038.service: Deactivated successfully.
Dec 12 18:59:09.963097 systemd[1]: session-12.scope: Deactivated successfully.
Dec 12 18:59:09.965272 systemd-logind[1533]: Removed session 12.
Dec 12 18:59:10.017901 systemd[1]: Started sshd@12-172.237.134.203:22-139.178.68.195:41048.service - OpenSSH per-connection server daemon (139.178.68.195:41048).
Dec 12 18:59:10.371645 sshd[5227]: Accepted publickey for core from 139.178.68.195 port 41048 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA
Dec 12 18:59:10.377781 sshd-session[5227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:59:10.387861 systemd-logind[1533]: New session 13 of user core.
Dec 12 18:59:10.394584 systemd[1]: Started session-13.scope - Session 13 of User core.
Dec 12 18:59:10.700874 sshd[5230]: Connection closed by 139.178.68.195 port 41048
Dec 12 18:59:10.703182 sshd-session[5227]: pam_unix(sshd:session): session closed for user core
Dec 12 18:59:10.708158 systemd-logind[1533]: Session 13 logged out. Waiting for processes to exit.
Dec 12 18:59:10.710270 systemd[1]: sshd@12-172.237.134.203:22-139.178.68.195:41048.service: Deactivated successfully.
Dec 12 18:59:10.713306 systemd[1]: session-13.scope: Deactivated successfully.
Dec 12 18:59:10.715878 systemd-logind[1533]: Removed session 13.
Dec 12 18:59:12.497542 kubelet[2723]: E1212 18:59:12.497262 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-568b9b9d99-flgkd" podUID="c6117e7e-1835-4bc6-967b-fc9429542c7a" Dec 12 18:59:13.495487 kubelet[2723]: E1212 18:59:13.495404 2723 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:59:13.497490 kubelet[2723]: E1212 18:59:13.497400 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-86b45df6f8-cmnpq" podUID="a91e52ae-48ed-4331-916f-65e4537bb807" Dec 12 18:59:14.497172 kubelet[2723]: E1212 18:59:14.497109 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xcxxg" podUID="7adfcf36-f09b-4802-a329-cb264c08cc5c" Dec 12 18:59:15.495660 kubelet[2723]: E1212 18:59:15.495591 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-568b9b9d99-4srkk" podUID="f4996cbf-b45a-424a-8397-b3ebce94b347" Dec 12 18:59:15.764266 systemd[1]: Started sshd@13-172.237.134.203:22-139.178.68.195:39032.service - OpenSSH per-connection server daemon (139.178.68.195:39032). Dec 12 18:59:16.102814 sshd[5270]: Accepted publickey for core from 139.178.68.195 port 39032 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA Dec 12 18:59:16.105992 sshd-session[5270]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:59:16.113606 systemd-logind[1533]: New session 14 of user core. Dec 12 18:59:16.118622 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 12 18:59:16.421681 sshd[5273]: Connection closed by 139.178.68.195 port 39032 Dec 12 18:59:16.422378 sshd-session[5270]: pam_unix(sshd:session): session closed for user core Dec 12 18:59:16.428428 systemd-logind[1533]: Session 14 logged out. Waiting for processes to exit. Dec 12 18:59:16.429904 systemd[1]: sshd@13-172.237.134.203:22-139.178.68.195:39032.service: Deactivated successfully. Dec 12 18:59:16.433091 systemd[1]: session-14.scope: Deactivated successfully. Dec 12 18:59:16.436852 systemd-logind[1533]: Removed session 14. Dec 12 18:59:17.495120 kubelet[2723]: E1212 18:59:17.495021 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-2wmh6" podUID="7a1fbc12-082d-4cf2-b63a-aaa492c3ca96" Dec 12 18:59:21.489707 systemd[1]: Started sshd@14-172.237.134.203:22-139.178.68.195:51174.service - OpenSSH per-connection server daemon (139.178.68.195:51174). Dec 12 18:59:21.497909 kubelet[2723]: E1212 18:59:21.497792 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-684d7d59f5-x5wzd" podUID="b9c97883-cc24-4c44-982c-86a4cdeab0b3" Dec 12 18:59:21.845589 sshd[5291]: Accepted publickey for core from 139.178.68.195 port 51174 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA Dec 12 18:59:21.846587 sshd-session[5291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:59:21.856298 systemd-logind[1533]: New session 15 of user core. Dec 12 18:59:21.862045 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 12 18:59:22.174497 sshd[5294]: Connection closed by 139.178.68.195 port 51174 Dec 12 18:59:22.175086 sshd-session[5291]: pam_unix(sshd:session): session closed for user core Dec 12 18:59:22.180083 systemd[1]: sshd@14-172.237.134.203:22-139.178.68.195:51174.service: Deactivated successfully. Dec 12 18:59:22.183702 systemd[1]: session-15.scope: Deactivated successfully. 
Dec 12 18:59:22.187548 systemd-logind[1533]: Session 15 logged out. Waiting for processes to exit. Dec 12 18:59:22.189230 systemd-logind[1533]: Removed session 15. Dec 12 18:59:22.498132 kubelet[2723]: E1212 18:59:22.497819 2723 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:59:25.496075 kubelet[2723]: E1212 18:59:25.496030 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-568b9b9d99-flgkd" podUID="c6117e7e-1835-4bc6-967b-fc9429542c7a" Dec 12 18:59:26.498290 kubelet[2723]: E1212 18:59:26.498240 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-86b45df6f8-cmnpq" podUID="a91e52ae-48ed-4331-916f-65e4537bb807" Dec 12 18:59:27.239650 systemd[1]: Started sshd@15-172.237.134.203:22-139.178.68.195:51184.service - OpenSSH per-connection server daemon (139.178.68.195:51184). Dec 12 18:59:27.598385 sshd[5306]: Accepted publickey for core from 139.178.68.195 port 51184 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA Dec 12 18:59:27.600240 sshd-session[5306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:59:27.606183 systemd-logind[1533]: New session 16 of user core. Dec 12 18:59:27.614721 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 12 18:59:27.918491 sshd[5309]: Connection closed by 139.178.68.195 port 51184 Dec 12 18:59:27.918375 sshd-session[5306]: pam_unix(sshd:session): session closed for user core Dec 12 18:59:27.923009 systemd-logind[1533]: Session 16 logged out. Waiting for processes to exit. Dec 12 18:59:27.923682 systemd[1]: sshd@15-172.237.134.203:22-139.178.68.195:51184.service: Deactivated successfully. Dec 12 18:59:27.925621 systemd[1]: session-16.scope: Deactivated successfully. Dec 12 18:59:27.927281 systemd-logind[1533]: Removed session 16. 
Dec 12 18:59:28.498174 kubelet[2723]: E1212 18:59:28.497846 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-568b9b9d99-4srkk" podUID="f4996cbf-b45a-424a-8397-b3ebce94b347" Dec 12 18:59:29.498166 kubelet[2723]: E1212 18:59:29.498080 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-2wmh6" podUID="7a1fbc12-082d-4cf2-b63a-aaa492c3ca96" Dec 12 18:59:29.499026 kubelet[2723]: E1212 18:59:29.498963 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xcxxg" podUID="7adfcf36-f09b-4802-a329-cb264c08cc5c" Dec 12 18:59:32.498853 kubelet[2723]: E1212 18:59:32.498757 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-684d7d59f5-x5wzd" podUID="b9c97883-cc24-4c44-982c-86a4cdeab0b3" Dec 12 18:59:32.982658 systemd[1]: Started sshd@16-172.237.134.203:22-139.178.68.195:55546.service - OpenSSH per-connection server daemon (139.178.68.195:55546). Dec 12 18:59:33.333006 sshd[5322]: Accepted publickey for core from 139.178.68.195 port 55546 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA Dec 12 18:59:33.333997 sshd-session[5322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:59:33.340543 systemd-logind[1533]: New session 17 of user core. 
Dec 12 18:59:33.348730 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 12 18:59:33.644746 sshd[5325]: Connection closed by 139.178.68.195 port 55546 Dec 12 18:59:33.646150 sshd-session[5322]: pam_unix(sshd:session): session closed for user core Dec 12 18:59:33.652352 systemd-logind[1533]: Session 17 logged out. Waiting for processes to exit. Dec 12 18:59:33.652670 systemd[1]: sshd@16-172.237.134.203:22-139.178.68.195:55546.service: Deactivated successfully. Dec 12 18:59:33.658801 systemd[1]: session-17.scope: Deactivated successfully. Dec 12 18:59:33.661031 systemd-logind[1533]: Removed session 17. Dec 12 18:59:33.706817 systemd[1]: Started sshd@17-172.237.134.203:22-139.178.68.195:55562.service - OpenSSH per-connection server daemon (139.178.68.195:55562). Dec 12 18:59:34.063150 sshd[5337]: Accepted publickey for core from 139.178.68.195 port 55562 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA Dec 12 18:59:34.065231 sshd-session[5337]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:59:34.071330 systemd-logind[1533]: New session 18 of user core. Dec 12 18:59:34.078641 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 12 18:59:34.498926 sshd[5340]: Connection closed by 139.178.68.195 port 55562 Dec 12 18:59:34.501749 sshd-session[5337]: pam_unix(sshd:session): session closed for user core Dec 12 18:59:34.503617 kubelet[2723]: E1212 18:59:34.502770 2723 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:59:34.510939 systemd[1]: sshd@17-172.237.134.203:22-139.178.68.195:55562.service: Deactivated successfully. Dec 12 18:59:34.513202 systemd[1]: session-18.scope: Deactivated successfully. Dec 12 18:59:34.517613 systemd-logind[1533]: Session 18 logged out. Waiting for processes to exit. Dec 12 18:59:34.522876 systemd-logind[1533]: Removed session 18. Dec 12 18:59:34.559932 systemd[1]: Started sshd@18-172.237.134.203:22-139.178.68.195:55566.service - OpenSSH per-connection server daemon (139.178.68.195:55566). Dec 12 18:59:34.918679 sshd[5350]: Accepted publickey for core from 139.178.68.195 port 55566 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA Dec 12 18:59:34.921675 sshd-session[5350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:59:34.930851 systemd-logind[1533]: New session 19 of user core. Dec 12 18:59:34.938643 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 12 18:59:35.680636 sshd[5353]: Connection closed by 139.178.68.195 port 55566 Dec 12 18:59:35.682455 sshd-session[5350]: pam_unix(sshd:session): session closed for user core Dec 12 18:59:35.687988 systemd[1]: sshd@18-172.237.134.203:22-139.178.68.195:55566.service: Deactivated successfully. Dec 12 18:59:35.690909 systemd[1]: session-19.scope: Deactivated successfully. Dec 12 18:59:35.696933 systemd-logind[1533]: Session 19 logged out. Waiting for processes to exit. Dec 12 18:59:35.699532 systemd-logind[1533]: Removed session 19. Dec 12 18:59:35.744545 systemd[1]: Started sshd@19-172.237.134.203:22-139.178.68.195:55570.service - OpenSSH per-connection server daemon (139.178.68.195:55570). 
Dec 12 18:59:36.089798 sshd[5376]: Accepted publickey for core from 139.178.68.195 port 55570 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA Dec 12 18:59:36.090329 sshd-session[5376]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:59:36.097581 systemd-logind[1533]: New session 20 of user core. Dec 12 18:59:36.104604 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 12 18:59:36.524853 sshd[5379]: Connection closed by 139.178.68.195 port 55570 Dec 12 18:59:36.525683 sshd-session[5376]: pam_unix(sshd:session): session closed for user core Dec 12 18:59:36.533900 systemd[1]: sshd@19-172.237.134.203:22-139.178.68.195:55570.service: Deactivated successfully. Dec 12 18:59:36.534039 systemd-logind[1533]: Session 20 logged out. Waiting for processes to exit. Dec 12 18:59:36.539210 systemd[1]: session-20.scope: Deactivated successfully. Dec 12 18:59:36.542441 systemd-logind[1533]: Removed session 20. Dec 12 18:59:36.588543 systemd[1]: Started sshd@20-172.237.134.203:22-139.178.68.195:55572.service - OpenSSH per-connection server daemon (139.178.68.195:55572). Dec 12 18:59:36.932603 sshd[5389]: Accepted publickey for core from 139.178.68.195 port 55572 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA Dec 12 18:59:36.934070 sshd-session[5389]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:59:36.942917 systemd-logind[1533]: New session 21 of user core. Dec 12 18:59:36.947588 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 12 18:59:37.236494 sshd[5394]: Connection closed by 139.178.68.195 port 55572 Dec 12 18:59:37.237378 sshd-session[5389]: pam_unix(sshd:session): session closed for user core Dec 12 18:59:37.244188 systemd[1]: sshd@20-172.237.134.203:22-139.178.68.195:55572.service: Deactivated successfully. Dec 12 18:59:37.244448 systemd-logind[1533]: Session 21 logged out. Waiting for processes to exit. Dec 12 18:59:37.248065 systemd[1]: session-21.scope: Deactivated successfully. Dec 12 18:59:37.251490 systemd-logind[1533]: Removed session 21. 
Dec 12 18:59:40.500300 kubelet[2723]: E1212 18:59:40.500243 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xcxxg" podUID="7adfcf36-f09b-4802-a329-cb264c08cc5c" Dec 12 18:59:40.501272 containerd[1560]: time="2025-12-12T18:59:40.501198969Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:59:40.630044 containerd[1560]: time="2025-12-12T18:59:40.629991060Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:59:40.631023 containerd[1560]: time="2025-12-12T18:59:40.630864447Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:59:40.631023 containerd[1560]: time="2025-12-12T18:59:40.630911934Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 18:59:40.631272 kubelet[2723]: E1212 18:59:40.631100 2723 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:59:40.631272 kubelet[2723]: E1212 18:59:40.631174 2723 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:59:40.631703 kubelet[2723]: E1212 18:59:40.631349 2723 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-568b9b9d99-flgkd_calico-apiserver(c6117e7e-1835-4bc6-967b-fc9429542c7a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:59:40.631703 kubelet[2723]: E1212 18:59:40.631383 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull 
and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-568b9b9d99-flgkd" podUID="c6117e7e-1835-4bc6-967b-fc9429542c7a" Dec 12 18:59:40.631852 containerd[1560]: time="2025-12-12T18:59:40.631806940Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:59:40.760653 containerd[1560]: time="2025-12-12T18:59:40.760159924Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:59:40.761757 containerd[1560]: time="2025-12-12T18:59:40.761319866Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:59:40.761757 containerd[1560]: time="2025-12-12T18:59:40.761407082Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 18:59:40.762186 kubelet[2723]: E1212 18:59:40.762058 2723 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:59:40.762186 kubelet[2723]: E1212 18:59:40.762144 2723 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:59:40.762391 kubelet[2723]: E1212 18:59:40.762370 2723 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-568b9b9d99-4srkk_calico-apiserver(f4996cbf-b45a-424a-8397-b3ebce94b347): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:59:40.763170 kubelet[2723]: E1212 18:59:40.763146 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-568b9b9d99-4srkk" podUID="f4996cbf-b45a-424a-8397-b3ebce94b347" Dec 12 18:59:41.496137 containerd[1560]: time="2025-12-12T18:59:41.496062091Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 18:59:41.634190 containerd[1560]: time="2025-12-12T18:59:41.634104354Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:59:41.636185 containerd[1560]: time="2025-12-12T18:59:41.636135366Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" 
error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 18:59:41.636309 containerd[1560]: time="2025-12-12T18:59:41.636173604Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 12 18:59:41.636555 kubelet[2723]: E1212 18:59:41.636519 2723 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:59:41.637725 kubelet[2723]: E1212 18:59:41.637698 2723 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:59:41.637796 kubelet[2723]: E1212 18:59:41.637778 2723 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-86b45df6f8-cmnpq_calico-system(a91e52ae-48ed-4331-916f-65e4537bb807): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 18:59:41.638790 containerd[1560]: time="2025-12-12T18:59:41.638765049Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 18:59:41.803111 containerd[1560]: time="2025-12-12T18:59:41.802589108Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:59:41.804416 containerd[1560]: time="2025-12-12T18:59:41.804257138Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 18:59:41.804416 containerd[1560]: time="2025-12-12T18:59:41.804283787Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 12 18:59:41.805948 kubelet[2723]: E1212 18:59:41.804531 2723 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:59:41.805948 kubelet[2723]: E1212 18:59:41.804575 2723 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:59:41.805948 
kubelet[2723]: E1212 18:59:41.804648 2723 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-86b45df6f8-cmnpq_calico-system(a91e52ae-48ed-4331-916f-65e4537bb807): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 18:59:41.806175 kubelet[2723]: E1212 18:59:41.804688 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-86b45df6f8-cmnpq" podUID="a91e52ae-48ed-4331-916f-65e4537bb807" Dec 12 18:59:42.302659 systemd[1]: Started sshd@21-172.237.134.203:22-139.178.68.195:33144.service - OpenSSH per-connection server daemon (139.178.68.195:33144). Dec 12 18:59:42.641325 sshd[5406]: Accepted publickey for core from 139.178.68.195 port 33144 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA Dec 12 18:59:42.642414 sshd-session[5406]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:59:42.647655 systemd-logind[1533]: New session 22 of user core. Dec 12 18:59:42.654579 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 12 18:59:42.974101 sshd[5409]: Connection closed by 139.178.68.195 port 33144 Dec 12 18:59:42.975964 sshd-session[5406]: pam_unix(sshd:session): session closed for user core Dec 12 18:59:42.983176 systemd[1]: sshd@21-172.237.134.203:22-139.178.68.195:33144.service: Deactivated successfully. Dec 12 18:59:42.986573 systemd[1]: session-22.scope: Deactivated successfully. Dec 12 18:59:42.987857 systemd-logind[1533]: Session 22 logged out. Waiting for processes to exit. Dec 12 18:59:42.990109 systemd-logind[1533]: Removed session 22. 
Dec 12 18:59:44.498368 containerd[1560]: time="2025-12-12T18:59:44.498320696Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 18:59:44.626023 containerd[1560]: time="2025-12-12T18:59:44.625873475Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:59:44.627179 containerd[1560]: time="2025-12-12T18:59:44.627142719Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 18:59:44.627318 containerd[1560]: time="2025-12-12T18:59:44.627288832Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 12 18:59:44.627791 kubelet[2723]: E1212 18:59:44.627657 2723 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:59:44.627791 kubelet[2723]: E1212 18:59:44.627709 2723 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:59:44.628571 kubelet[2723]: E1212 18:59:44.627964 2723 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-2wmh6_calico-system(7a1fbc12-082d-4cf2-b63a-aaa492c3ca96): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 18:59:44.628667 kubelet[2723]: E1212 18:59:44.627999 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-2wmh6" podUID="7a1fbc12-082d-4cf2-b63a-aaa492c3ca96" Dec 12 18:59:46.499997 containerd[1560]: time="2025-12-12T18:59:46.499930651Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 18:59:46.645703 containerd[1560]: time="2025-12-12T18:59:46.645619526Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:59:46.647231 containerd[1560]: time="2025-12-12T18:59:46.646971249Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 18:59:46.647329 containerd[1560]: time="2025-12-12T18:59:46.647098124Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 12 18:59:46.648043 kubelet[2723]: E1212 18:59:46.647943 2723 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:59:46.648043 kubelet[2723]: E1212 18:59:46.648026 2723 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:59:46.648795 kubelet[2723]: E1212 18:59:46.648155 2723 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-684d7d59f5-x5wzd_calico-system(b9c97883-cc24-4c44-982c-86a4cdeab0b3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 18:59:46.648795 kubelet[2723]: E1212 18:59:46.648208 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-684d7d59f5-x5wzd" podUID="b9c97883-cc24-4c44-982c-86a4cdeab0b3" Dec 12 18:59:48.037081 systemd[1]: Started sshd@22-172.237.134.203:22-139.178.68.195:33160.service - OpenSSH per-connection server daemon (139.178.68.195:33160). Dec 12 18:59:48.372149 sshd[5447]: Accepted publickey for core from 139.178.68.195 port 33160 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA Dec 12 18:59:48.374160 sshd-session[5447]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:59:48.381313 systemd-logind[1533]: New session 23 of user core. Dec 12 18:59:48.387683 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 12 18:59:48.684658 sshd[5452]: Connection closed by 139.178.68.195 port 33160 Dec 12 18:59:48.686664 sshd-session[5447]: pam_unix(sshd:session): session closed for user core Dec 12 18:59:48.694696 systemd[1]: sshd@22-172.237.134.203:22-139.178.68.195:33160.service: Deactivated successfully. Dec 12 18:59:48.698698 systemd[1]: session-23.scope: Deactivated successfully. Dec 12 18:59:48.702100 systemd-logind[1533]: Session 23 logged out. Waiting for processes to exit. Dec 12 18:59:48.703384 systemd-logind[1533]: Removed session 23. 
Dec 12 18:59:51.495370 kubelet[2723]: E1212 18:59:51.495317 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-568b9b9d99-4srkk" podUID="f4996cbf-b45a-424a-8397-b3ebce94b347" Dec 12 18:59:51.497950 kubelet[2723]: E1212 18:59:51.495931 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-568b9b9d99-flgkd" podUID="c6117e7e-1835-4bc6-967b-fc9429542c7a" Dec 12 18:59:51.498723 containerd[1560]: time="2025-12-12T18:59:51.496285723Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 18:59:51.641539 containerd[1560]: time="2025-12-12T18:59:51.640624990Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:59:51.641539 containerd[1560]: time="2025-12-12T18:59:51.641558766Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 18:59:51.641826 containerd[1560]: time="2025-12-12T18:59:51.641621184Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 12 18:59:51.642004 kubelet[2723]: E1212 18:59:51.641964 2723 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:59:51.642155 kubelet[2723]: E1212 18:59:51.642046 2723 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:59:51.642365 kubelet[2723]: E1212 18:59:51.642163 2723 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-xcxxg_calico-system(7adfcf36-f09b-4802-a329-cb264c08cc5c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 18:59:51.643017 containerd[1560]: time="2025-12-12T18:59:51.642991844Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 18:59:51.785311 containerd[1560]: time="2025-12-12T18:59:51.784363648Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:59:51.790375 containerd[1560]: time="2025-12-12T18:59:51.790296703Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 18:59:51.790848 containerd[1560]: time="2025-12-12T18:59:51.790687419Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 12 18:59:51.791032 kubelet[2723]: E1212 18:59:51.790978 2723 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:59:51.791126 kubelet[2723]: E1212 18:59:51.791111 2723 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:59:51.791499 kubelet[2723]: E1212 18:59:51.791405 2723 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-xcxxg_calico-system(7adfcf36-f09b-4802-a329-cb264c08cc5c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 18:59:51.791831 kubelet[2723]: E1212 18:59:51.791445 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xcxxg" podUID="7adfcf36-f09b-4802-a329-cb264c08cc5c" Dec 12 18:59:52.499881 kubelet[2723]: E1212 18:59:52.499559 2723 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:59:53.499530 kubelet[2723]: E1212 18:59:53.499484 2723 pod_workers.go:1324] 
"Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-86b45df6f8-cmnpq" podUID="a91e52ae-48ed-4331-916f-65e4537bb807" Dec 12 18:59:53.751618 systemd[1]: Started sshd@23-172.237.134.203:22-139.178.68.195:42350.service - OpenSSH per-connection server daemon (139.178.68.195:42350). Dec 12 18:59:54.099605 sshd[5464]: Accepted publickey for core from 139.178.68.195 port 42350 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA Dec 12 18:59:54.103359 sshd-session[5464]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:59:54.111552 systemd-logind[1533]: New session 24 of user core. Dec 12 18:59:54.119602 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 12 18:59:54.422329 sshd[5467]: Connection closed by 139.178.68.195 port 42350 Dec 12 18:59:54.422970 sshd-session[5464]: pam_unix(sshd:session): session closed for user core Dec 12 18:59:54.428955 systemd-logind[1533]: Session 24 logged out. Waiting for processes to exit. Dec 12 18:59:54.429683 systemd[1]: sshd@23-172.237.134.203:22-139.178.68.195:42350.service: Deactivated successfully. Dec 12 18:59:54.432837 systemd[1]: session-24.scope: Deactivated successfully. Dec 12 18:59:54.435221 systemd-logind[1533]: Removed session 24. Dec 12 18:59:55.496487 kubelet[2723]: E1212 18:59:55.496435 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-2wmh6" podUID="7a1fbc12-082d-4cf2-b63a-aaa492c3ca96"