Dec 16 12:57:47.912771 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 12 15:21:28 -00 2025 Dec 16 12:57:47.912810 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022 Dec 16 12:57:47.912822 kernel: BIOS-provided physical RAM map: Dec 16 12:57:47.912830 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000002ffff] usable Dec 16 12:57:47.912838 kernel: BIOS-e820: [mem 0x0000000000030000-0x000000000004ffff] reserved Dec 16 12:57:47.912850 kernel: BIOS-e820: [mem 0x0000000000050000-0x000000000009efff] usable Dec 16 12:57:47.912861 kernel: BIOS-e820: [mem 0x000000000009f000-0x000000000009ffff] reserved Dec 16 12:57:47.912870 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009b8ecfff] usable Dec 16 12:57:47.912878 kernel: BIOS-e820: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved Dec 16 12:57:47.912887 kernel: BIOS-e820: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data Dec 16 12:57:47.912895 kernel: BIOS-e820: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS Dec 16 12:57:47.912902 kernel: BIOS-e820: [mem 0x000000009bbff000-0x000000009bfb0fff] usable Dec 16 12:57:47.912908 kernel: BIOS-e820: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved Dec 16 12:57:47.912923 kernel: BIOS-e820: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS Dec 16 12:57:47.912935 kernel: BIOS-e820: [mem 0x000000009bfb7000-0x000000009bffffff] usable Dec 16 12:57:47.912955 kernel: BIOS-e820: [mem 0x000000009c000000-0x000000009cffffff] reserved Dec 16 12:57:47.912962 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Dec 16 12:57:47.912969 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Dec 16 12:57:47.912976 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Dec 16 12:57:47.912987 kernel: NX (Execute Disable) protection: active Dec 16 12:57:47.912994 kernel: APIC: Static calls initialized Dec 16 12:57:47.913001 kernel: e820: update [mem 0x9a13f018-0x9a148c57] usable ==> usable Dec 16 12:57:47.913008 kernel: e820: update [mem 0x9a102018-0x9a13ee57] usable ==> usable Dec 16 12:57:47.913015 kernel: extended physical RAM map: Dec 16 12:57:47.913022 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000002ffff] usable Dec 16 12:57:47.913029 kernel: reserve setup_data: [mem 0x0000000000030000-0x000000000004ffff] reserved Dec 16 12:57:47.913036 kernel: reserve setup_data: [mem 0x0000000000050000-0x000000000009efff] usable Dec 16 12:57:47.913043 kernel: reserve setup_data: [mem 0x000000000009f000-0x000000000009ffff] reserved Dec 16 12:57:47.913050 kernel: reserve setup_data: [mem 0x0000000000100000-0x000000009a102017] usable Dec 16 12:57:47.913057 kernel: reserve setup_data: [mem 0x000000009a102018-0x000000009a13ee57] usable Dec 16 12:57:47.913066 kernel: reserve setup_data: [mem 0x000000009a13ee58-0x000000009a13f017] usable Dec 16 12:57:47.913073 kernel: reserve setup_data: [mem 0x000000009a13f018-0x000000009a148c57] usable Dec 16 12:57:47.913080 kernel: reserve setup_data: [mem 0x000000009a148c58-0x000000009b8ecfff] usable Dec 16 12:57:47.913087 kernel: reserve setup_data: [mem 0x000000009b8ed000-0x000000009bb6cfff] 
reserved Dec 16 12:57:47.913094 kernel: reserve setup_data: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data Dec 16 12:57:47.913101 kernel: reserve setup_data: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS Dec 16 12:57:47.913108 kernel: reserve setup_data: [mem 0x000000009bbff000-0x000000009bfb0fff] usable Dec 16 12:57:47.913115 kernel: reserve setup_data: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved Dec 16 12:57:47.913122 kernel: reserve setup_data: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS Dec 16 12:57:47.913129 kernel: reserve setup_data: [mem 0x000000009bfb7000-0x000000009bffffff] usable Dec 16 12:57:47.913141 kernel: reserve setup_data: [mem 0x000000009c000000-0x000000009cffffff] reserved Dec 16 12:57:47.913148 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Dec 16 12:57:47.913156 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Dec 16 12:57:47.913163 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Dec 16 12:57:47.913170 kernel: efi: EFI v2.7 by EDK II Dec 16 12:57:47.913178 kernel: efi: SMBIOS=0x9b9d5000 ACPI=0x9bb7e000 ACPI 2.0=0x9bb7e014 MEMATTR=0x9a1af018 RNG=0x9bb73018 Dec 16 12:57:47.913187 kernel: random: crng init done Dec 16 12:57:47.913195 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7 Dec 16 12:57:47.913202 kernel: secureboot: Secure boot enabled Dec 16 12:57:47.913209 kernel: SMBIOS 2.8 present. Dec 16 12:57:47.913216 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Dec 16 12:57:47.913224 kernel: DMI: Memory slots populated: 1/1 Dec 16 12:57:47.913231 kernel: Hypervisor detected: KVM Dec 16 12:57:47.913238 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000 Dec 16 12:57:47.913246 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 16 12:57:47.913253 kernel: kvm-clock: using sched offset of 5611657444 cycles Dec 16 12:57:47.913261 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 16 12:57:47.913268 kernel: tsc: Detected 2794.750 MHz processor Dec 16 12:57:47.913278 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 16 12:57:47.913285 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 16 12:57:47.913293 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000 Dec 16 12:57:47.913300 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Dec 16 12:57:47.913308 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 16 12:57:47.913315 kernel: Using GB pages for direct mapping Dec 16 12:57:47.913323 kernel: ACPI: Early table checksum verification disabled Dec 16 12:57:47.913330 kernel: ACPI: RSDP 0x000000009BB7E014 000024 (v02 BOCHS ) Dec 16 12:57:47.913338 kernel: ACPI: XSDT 0x000000009BB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Dec 16 12:57:47.913348 kernel: ACPI: FACP 0x000000009BB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 12:57:47.913355 kernel: ACPI: DSDT 0x000000009BB7A000 002237 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 12:57:47.913363 kernel: ACPI: FACS 0x000000009BBDD000 000040 Dec 16 12:57:47.913370 kernel: ACPI: APIC 0x000000009BB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 12:57:47.913378 kernel: ACPI: HPET 0x000000009BB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 12:57:47.913385 kernel: ACPI: MCFG 0x000000009BB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 12:57:47.913393 
kernel: ACPI: WAET 0x000000009BB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 12:57:47.913400 kernel: ACPI: BGRT 0x000000009BB74000 000038 (v01 INTEL EDK2 00000002 01000013) Dec 16 12:57:47.913409 kernel: ACPI: Reserving FACP table memory at [mem 0x9bb79000-0x9bb790f3] Dec 16 12:57:47.913417 kernel: ACPI: Reserving DSDT table memory at [mem 0x9bb7a000-0x9bb7c236] Dec 16 12:57:47.913424 kernel: ACPI: Reserving FACS table memory at [mem 0x9bbdd000-0x9bbdd03f] Dec 16 12:57:47.913432 kernel: ACPI: Reserving APIC table memory at [mem 0x9bb78000-0x9bb7808f] Dec 16 12:57:47.913439 kernel: ACPI: Reserving HPET table memory at [mem 0x9bb77000-0x9bb77037] Dec 16 12:57:47.913447 kernel: ACPI: Reserving MCFG table memory at [mem 0x9bb76000-0x9bb7603b] Dec 16 12:57:47.913454 kernel: ACPI: Reserving WAET table memory at [mem 0x9bb75000-0x9bb75027] Dec 16 12:57:47.913462 kernel: ACPI: Reserving BGRT table memory at [mem 0x9bb74000-0x9bb74037] Dec 16 12:57:47.913469 kernel: No NUMA configuration found Dec 16 12:57:47.913477 kernel: Faking a node at [mem 0x0000000000000000-0x000000009bffffff] Dec 16 12:57:47.913486 kernel: NODE_DATA(0) allocated [mem 0x9bf57dc0-0x9bf5efff] Dec 16 12:57:47.913493 kernel: Zone ranges: Dec 16 12:57:47.913501 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 16 12:57:47.913508 kernel: DMA32 [mem 0x0000000001000000-0x000000009bffffff] Dec 16 12:57:47.913516 kernel: Normal empty Dec 16 12:57:47.913523 kernel: Device empty Dec 16 12:57:47.913530 kernel: Movable zone start for each node Dec 16 12:57:47.913538 kernel: Early memory node ranges Dec 16 12:57:47.913545 kernel: node 0: [mem 0x0000000000001000-0x000000000002ffff] Dec 16 12:57:47.913555 kernel: node 0: [mem 0x0000000000050000-0x000000000009efff] Dec 16 12:57:47.913562 kernel: node 0: [mem 0x0000000000100000-0x000000009b8ecfff] Dec 16 12:57:47.913570 kernel: node 0: [mem 0x000000009bbff000-0x000000009bfb0fff] Dec 16 12:57:47.913577 kernel: node 0: [mem 0x000000009bfb7000-0x000000009bffffff] Dec 16 12:57:47.913585 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009bffffff] Dec 16 12:57:47.913592 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 16 12:57:47.913600 kernel: On node 0, zone DMA: 32 pages in unavailable ranges Dec 16 12:57:47.913607 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Dec 16 12:57:47.913615 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Dec 16 12:57:47.913624 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Dec 16 12:57:47.913631 kernel: On node 0, zone DMA32: 16384 pages in unavailable ranges Dec 16 12:57:47.913639 kernel: ACPI: PM-Timer IO Port: 0x608 Dec 16 12:57:47.913648 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 16 12:57:47.913657 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Dec 16 12:57:47.913668 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Dec 16 12:57:47.913678 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 16 12:57:47.913688 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 16 12:57:47.913699 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 16 12:57:47.913713 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 16 12:57:47.913723 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 16 12:57:47.913734 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Dec 16 12:57:47.913744 kernel: TSC deadline timer available Dec 16 
12:57:47.913754 kernel: CPU topo: Max. logical packages: 1 Dec 16 12:57:47.913765 kernel: CPU topo: Max. logical dies: 1 Dec 16 12:57:47.913784 kernel: CPU topo: Max. dies per package: 1 Dec 16 12:57:47.913794 kernel: CPU topo: Max. threads per core: 1 Dec 16 12:57:47.913802 kernel: CPU topo: Num. cores per package: 4 Dec 16 12:57:47.913810 kernel: CPU topo: Num. threads per package: 4 Dec 16 12:57:47.913818 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Dec 16 12:57:47.913826 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Dec 16 12:57:47.913833 kernel: kvm-guest: KVM setup pv remote TLB flush Dec 16 12:57:47.913844 kernel: kvm-guest: setup PV sched yield Dec 16 12:57:47.913852 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices Dec 16 12:57:47.913859 kernel: Booting paravirtualized kernel on KVM Dec 16 12:57:47.913868 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 16 12:57:47.913876 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Dec 16 12:57:47.913886 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Dec 16 12:57:47.913894 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Dec 16 12:57:47.913902 kernel: pcpu-alloc: [0] 0 1 2 3 Dec 16 12:57:47.913909 kernel: kvm-guest: PV spinlocks enabled Dec 16 12:57:47.913926 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 16 12:57:47.913936 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022 Dec 16 12:57:47.913959 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 16 12:57:47.913967 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 16 12:57:47.913978 kernel: Fallback order for Node 0: 0 Dec 16 12:57:47.913986 kernel: Built 1 zonelists, mobility grouping on. Total pages: 638054 Dec 16 12:57:47.913994 kernel: Policy zone: DMA32 Dec 16 12:57:47.914002 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 16 12:57:47.914010 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Dec 16 12:57:47.914017 kernel: ftrace: allocating 40103 entries in 157 pages Dec 16 12:57:47.914025 kernel: ftrace: allocated 157 pages with 5 groups Dec 16 12:57:47.914033 kernel: Dynamic Preempt: voluntary Dec 16 12:57:47.914041 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 16 12:57:47.914052 kernel: rcu: RCU event tracing is enabled. Dec 16 12:57:47.914060 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Dec 16 12:57:47.914068 kernel: Trampoline variant of Tasks RCU enabled. Dec 16 12:57:47.914076 kernel: Rude variant of Tasks RCU enabled. Dec 16 12:57:47.914084 kernel: Tracing variant of Tasks RCU enabled. Dec 16 12:57:47.914091 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 16 12:57:47.914099 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Dec 16 12:57:47.914107 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Dec 16 12:57:47.914115 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. 
Dec 16 12:57:47.914125 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Dec 16 12:57:47.914133 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Dec 16 12:57:47.914141 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 16 12:57:47.914149 kernel: Console: colour dummy device 80x25 Dec 16 12:57:47.914157 kernel: printk: legacy console [ttyS0] enabled Dec 16 12:57:47.914165 kernel: ACPI: Core revision 20240827 Dec 16 12:57:47.914173 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Dec 16 12:57:47.914181 kernel: APIC: Switch to symmetric I/O mode setup Dec 16 12:57:47.914189 kernel: x2apic enabled Dec 16 12:57:47.914199 kernel: APIC: Switched APIC routing to: physical x2apic Dec 16 12:57:47.914207 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Dec 16 12:57:47.914215 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Dec 16 12:57:47.914222 kernel: kvm-guest: setup PV IPIs Dec 16 12:57:47.914230 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Dec 16 12:57:47.914238 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns Dec 16 12:57:47.914246 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750) Dec 16 12:57:47.914254 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Dec 16 12:57:47.914262 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Dec 16 12:57:47.914272 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Dec 16 12:57:47.914280 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 16 12:57:47.914288 kernel: Spectre V2 : Mitigation: Retpolines Dec 16 12:57:47.914296 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Dec 16 12:57:47.914303 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Dec 16 12:57:47.914311 kernel: active return thunk: retbleed_return_thunk Dec 16 12:57:47.914319 kernel: RETBleed: Mitigation: untrained return thunk Dec 16 12:57:47.914327 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Dec 16 12:57:47.914335 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Dec 16 12:57:47.914345 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Dec 16 12:57:47.914354 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Dec 16 12:57:47.914362 kernel: active return thunk: srso_return_thunk Dec 16 12:57:47.914370 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Dec 16 12:57:47.914378 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 16 12:57:47.914386 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 16 12:57:47.914394 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 16 12:57:47.914401 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 16 12:57:47.914412 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. 
Dec 16 12:57:47.914420 kernel: Freeing SMP alternatives memory: 32K Dec 16 12:57:47.914428 kernel: pid_max: default: 32768 minimum: 301 Dec 16 12:57:47.914435 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Dec 16 12:57:47.914443 kernel: landlock: Up and running. Dec 16 12:57:47.914451 kernel: SELinux: Initializing. Dec 16 12:57:47.914459 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 16 12:57:47.914467 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 16 12:57:47.914475 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Dec 16 12:57:47.914485 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Dec 16 12:57:47.914493 kernel: ... version: 0 Dec 16 12:57:47.914501 kernel: ... bit width: 48 Dec 16 12:57:47.914508 kernel: ... generic registers: 6 Dec 16 12:57:47.914516 kernel: ... value mask: 0000ffffffffffff Dec 16 12:57:47.914524 kernel: ... max period: 00007fffffffffff Dec 16 12:57:47.914532 kernel: ... fixed-purpose events: 0 Dec 16 12:57:47.914539 kernel: ... event mask: 000000000000003f Dec 16 12:57:47.914547 kernel: signal: max sigframe size: 1776 Dec 16 12:57:47.914557 kernel: rcu: Hierarchical SRCU implementation. Dec 16 12:57:47.914565 kernel: rcu: Max phase no-delay instances is 400. Dec 16 12:57:47.914573 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Dec 16 12:57:47.914581 kernel: smp: Bringing up secondary CPUs ... Dec 16 12:57:47.914588 kernel: smpboot: x86: Booting SMP configuration: Dec 16 12:57:47.914596 kernel: .... node #0, CPUs: #1 #2 #3 Dec 16 12:57:47.914604 kernel: smp: Brought up 1 node, 4 CPUs Dec 16 12:57:47.914612 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS) Dec 16 12:57:47.914620 kernel: Memory: 2401024K/2552216K available (14336K kernel code, 2444K rwdata, 26064K rodata, 46188K init, 2572K bss, 145256K reserved, 0K cma-reserved) Dec 16 12:57:47.914631 kernel: devtmpfs: initialized Dec 16 12:57:47.914638 kernel: x86/mm: Memory block size: 128MB Dec 16 12:57:47.914646 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bb7f000-0x9bbfefff] (524288 bytes) Dec 16 12:57:47.914654 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bfb5000-0x9bfb6fff] (8192 bytes) Dec 16 12:57:47.914662 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 16 12:57:47.914670 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Dec 16 12:57:47.914678 kernel: pinctrl core: initialized pinctrl subsystem Dec 16 12:57:47.914686 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 16 12:57:47.914694 kernel: audit: initializing netlink subsys (disabled) Dec 16 12:57:47.914704 kernel: audit: type=2000 audit(1765889865.382:1): state=initialized audit_enabled=0 res=1 Dec 16 12:57:47.914713 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 16 12:57:47.914723 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 16 12:57:47.914733 kernel: cpuidle: using governor menu Dec 16 12:57:47.914744 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 16 12:57:47.914755 kernel: dca service started, version 1.12.1 Dec 16 12:57:47.914766 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff] Dec 16 12:57:47.914777 kernel: PCI: Using configuration type 1 for base access Dec 16 12:57:47.914788 kernel: kprobes: kprobe jump-optimization 
is enabled. All kprobes are optimized if possible. Dec 16 12:57:47.914802 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 16 12:57:47.914813 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Dec 16 12:57:47.914823 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 16 12:57:47.914834 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Dec 16 12:57:47.914843 kernel: ACPI: Added _OSI(Module Device) Dec 16 12:57:47.914854 kernel: ACPI: Added _OSI(Processor Device) Dec 16 12:57:47.914865 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 16 12:57:47.914876 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 16 12:57:47.914886 kernel: ACPI: Interpreter enabled Dec 16 12:57:47.914900 kernel: ACPI: PM: (supports S0 S5) Dec 16 12:57:47.914910 kernel: ACPI: Using IOAPIC for interrupt routing Dec 16 12:57:47.914927 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 16 12:57:47.914935 kernel: PCI: Using E820 reservations for host bridge windows Dec 16 12:57:47.914966 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Dec 16 12:57:47.914974 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 16 12:57:47.915153 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 16 12:57:47.915275 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Dec 16 12:57:47.915397 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Dec 16 12:57:47.915408 kernel: PCI host bridge to bus 0000:00 Dec 16 12:57:47.915528 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 16 12:57:47.915636 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Dec 16 12:57:47.915760 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 16 12:57:47.915892 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] Dec 16 12:57:47.916031 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Dec 16 12:57:47.916143 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] Dec 16 12:57:47.916251 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 16 12:57:47.916390 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Dec 16 12:57:47.916555 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Dec 16 12:57:47.916684 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref] Dec 16 12:57:47.916802 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff] Dec 16 12:57:47.916932 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref] Dec 16 12:57:47.917068 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Dec 16 12:57:47.917223 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Dec 16 12:57:47.917345 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f] Dec 16 12:57:47.917484 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff] Dec 16 12:57:47.917620 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref] Dec 16 12:57:47.917747 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Dec 16 12:57:47.917871 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f] Dec 16 12:57:47.918046 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff] Dec 16 12:57:47.918165 
kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref] Dec 16 12:57:47.918291 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Dec 16 12:57:47.918410 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff] Dec 16 12:57:47.918527 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff] Dec 16 12:57:47.918645 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref] Dec 16 12:57:47.918767 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref] Dec 16 12:57:47.918894 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Dec 16 12:57:47.919042 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Dec 16 12:57:47.919170 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Dec 16 12:57:47.919288 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df] Dec 16 12:57:47.919404 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff] Dec 16 12:57:47.919529 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Dec 16 12:57:47.919650 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf] Dec 16 12:57:47.919660 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 16 12:57:47.919669 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 16 12:57:47.919677 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 16 12:57:47.919685 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 16 12:57:47.919693 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Dec 16 12:57:47.919701 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Dec 16 12:57:47.919709 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Dec 16 12:57:47.919720 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Dec 16 12:57:47.919728 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Dec 16 12:57:47.919736 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Dec 16 12:57:47.919744 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Dec 16 12:57:47.919752 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Dec 16 12:57:47.919759 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Dec 16 12:57:47.919767 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Dec 16 12:57:47.919775 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Dec 16 12:57:47.919783 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Dec 16 12:57:47.919793 kernel: iommu: Default domain type: Translated Dec 16 12:57:47.919801 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 16 12:57:47.919810 kernel: efivars: Registered efivars operations Dec 16 12:57:47.919817 kernel: PCI: Using ACPI for IRQ routing Dec 16 12:57:47.919825 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 16 12:57:47.919833 kernel: e820: reserve RAM buffer [mem 0x0009f000-0x0009ffff] Dec 16 12:57:47.919841 kernel: e820: reserve RAM buffer [mem 0x9a102018-0x9bffffff] Dec 16 12:57:47.919849 kernel: e820: reserve RAM buffer [mem 0x9a13f018-0x9bffffff] Dec 16 12:57:47.919857 kernel: e820: reserve RAM buffer [mem 0x9b8ed000-0x9bffffff] Dec 16 12:57:47.919866 kernel: e820: reserve RAM buffer [mem 0x9bfb1000-0x9bffffff] Dec 16 12:57:47.920007 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Dec 16 12:57:47.920126 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Dec 16 12:57:47.920242 kernel: pci 0000:00:01.0: vgaarb: VGA 
device added: decodes=io+mem,owns=io+mem,locks=none Dec 16 12:57:47.920253 kernel: vgaarb: loaded Dec 16 12:57:47.920261 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Dec 16 12:57:47.920269 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Dec 16 12:57:47.920277 kernel: clocksource: Switched to clocksource kvm-clock Dec 16 12:57:47.920289 kernel: VFS: Disk quotas dquot_6.6.0 Dec 16 12:57:47.920297 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 16 12:57:47.920305 kernel: pnp: PnP ACPI init Dec 16 12:57:47.920434 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved Dec 16 12:57:47.920446 kernel: pnp: PnP ACPI: found 6 devices Dec 16 12:57:47.920454 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 16 12:57:47.920462 kernel: NET: Registered PF_INET protocol family Dec 16 12:57:47.920470 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 16 12:57:47.920481 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Dec 16 12:57:47.920489 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 16 12:57:47.920497 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 16 12:57:47.920505 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Dec 16 12:57:47.920513 kernel: TCP: Hash tables configured (established 32768 bind 32768) Dec 16 12:57:47.920521 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 16 12:57:47.920529 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 16 12:57:47.920537 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 16 12:57:47.920545 kernel: NET: Registered PF_XDP protocol family Dec 16 12:57:47.920666 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window Dec 16 12:57:47.920800 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned Dec 16 12:57:47.920910 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 16 12:57:47.921043 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 16 12:57:47.921151 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 16 12:57:47.921259 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window] Dec 16 12:57:47.921366 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Dec 16 12:57:47.921472 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window] Dec 16 12:57:47.921488 kernel: PCI: CLS 0 bytes, default 64 Dec 16 12:57:47.921496 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns Dec 16 12:57:47.921504 kernel: Initialise system trusted keyrings Dec 16 12:57:47.921512 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Dec 16 12:57:47.921520 kernel: Key type asymmetric registered Dec 16 12:57:47.921528 kernel: Asymmetric key parser 'x509' registered Dec 16 12:57:47.921552 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Dec 16 12:57:47.921563 kernel: io scheduler mq-deadline registered Dec 16 12:57:47.921571 kernel: io scheduler kyber registered Dec 16 12:57:47.921581 kernel: io scheduler bfq registered Dec 16 12:57:47.921589 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 16 12:57:47.921598 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Dec 16 12:57:47.921606 
kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Dec 16 12:57:47.921615 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Dec 16 12:57:47.921623 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 16 12:57:47.921643 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 16 12:57:47.921652 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 16 12:57:47.921660 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 16 12:57:47.921671 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 16 12:57:47.921837 kernel: rtc_cmos 00:04: RTC can wake from S4 Dec 16 12:57:47.921852 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 16 12:57:47.921988 kernel: rtc_cmos 00:04: registered as rtc0 Dec 16 12:57:47.922102 kernel: rtc_cmos 00:04: setting system clock to 2025-12-16T12:57:47 UTC (1765889867) Dec 16 12:57:47.922214 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Dec 16 12:57:47.922225 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Dec 16 12:57:47.922238 kernel: efifb: probing for efifb Dec 16 12:57:47.922246 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Dec 16 12:57:47.922255 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Dec 16 12:57:47.922263 kernel: efifb: scrolling: redraw Dec 16 12:57:47.922271 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Dec 16 12:57:47.922280 kernel: Console: switching to colour frame buffer device 160x50 Dec 16 12:57:47.922292 kernel: fb0: EFI VGA frame buffer device Dec 16 12:57:47.922300 kernel: pstore: Using crash dump compression: deflate Dec 16 12:57:47.922309 kernel: pstore: Registered efi_pstore as persistent store backend Dec 16 12:57:47.922317 kernel: NET: Registered PF_INET6 protocol family Dec 16 12:57:47.922325 kernel: Segment Routing with IPv6 Dec 16 12:57:47.922334 kernel: In-situ OAM (IOAM) with IPv6 Dec 16 12:57:47.922342 kernel: NET: Registered PF_PACKET protocol family Dec 16 12:57:47.922350 kernel: Key type dns_resolver registered Dec 16 12:57:47.922358 kernel: IPI shorthand broadcast: enabled Dec 16 12:57:47.922369 kernel: sched_clock: Marking stable (3066002514, 251298284)->(3434754093, -117453295) Dec 16 12:57:47.922377 kernel: registered taskstats version 1 Dec 16 12:57:47.922385 kernel: Loading compiled-in X.509 certificates Dec 16 12:57:47.922394 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 0d0c78e6590cb40d27f1cef749ef9f2f3425f38d' Dec 16 12:57:47.922402 kernel: Demotion targets for Node 0: null Dec 16 12:57:47.922410 kernel: Key type .fscrypt registered Dec 16 12:57:47.922419 kernel: Key type fscrypt-provisioning registered Dec 16 12:57:47.922427 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 16 12:57:47.922435 kernel: ima: Allocated hash algorithm: sha1 Dec 16 12:57:47.922446 kernel: ima: No architecture policies found Dec 16 12:57:47.922454 kernel: clk: Disabling unused clocks Dec 16 12:57:47.922462 kernel: Warning: unable to open an initial console. 
Dec 16 12:57:47.922470 kernel: Freeing unused kernel image (initmem) memory: 46188K Dec 16 12:57:47.922479 kernel: Write protecting the kernel read-only data: 40960k Dec 16 12:57:47.922487 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Dec 16 12:57:47.922495 kernel: Run /init as init process Dec 16 12:57:47.922503 kernel: with arguments: Dec 16 12:57:47.922511 kernel: /init Dec 16 12:57:47.922522 kernel: with environment: Dec 16 12:57:47.922530 kernel: HOME=/ Dec 16 12:57:47.922538 kernel: TERM=linux Dec 16 12:57:47.922553 systemd[1]: Successfully made /usr/ read-only. Dec 16 12:57:47.922565 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 12:57:47.922575 systemd[1]: Detected virtualization kvm. Dec 16 12:57:47.922583 systemd[1]: Detected architecture x86-64. Dec 16 12:57:47.922594 systemd[1]: Running in initrd. Dec 16 12:57:47.922603 systemd[1]: No hostname configured, using default hostname. Dec 16 12:57:47.922612 systemd[1]: Hostname set to <localhost>. Dec 16 12:57:47.922620 systemd[1]: Initializing machine ID from VM UUID. Dec 16 12:57:47.922629 systemd[1]: Queued start job for default target initrd.target. Dec 16 12:57:47.922638 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 12:57:47.922647 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 12:57:47.922656 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 16 12:57:47.922667 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 12:57:47.922676 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 16 12:57:47.922686 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 16 12:57:47.922696 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 16 12:57:47.922705 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 16 12:57:47.922713 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 12:57:47.922722 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 12:57:47.922733 systemd[1]: Reached target paths.target - Path Units. Dec 16 12:57:47.922742 systemd[1]: Reached target slices.target - Slice Units. Dec 16 12:57:47.922751 systemd[1]: Reached target swap.target - Swaps. Dec 16 12:57:47.922760 systemd[1]: Reached target timers.target - Timer Units. Dec 16 12:57:47.922768 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 12:57:47.922777 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 12:57:47.922786 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 16 12:57:47.922795 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Dec 16 12:57:47.922804 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 12:57:47.922815 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 12:57:47.922823 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 12:57:47.922832 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 12:57:47.922841 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 16 12:57:47.922849 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 12:57:47.922858 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 16 12:57:47.922867 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Dec 16 12:57:47.922876 systemd[1]: Starting systemd-fsck-usr.service... Dec 16 12:57:47.922888 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 12:57:47.922897 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 12:57:47.922905 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:57:47.922951 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 16 12:57:47.922962 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 12:57:47.922974 systemd[1]: Finished systemd-fsck-usr.service. Dec 16 12:57:47.922983 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 12:57:47.922992 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:57:47.923026 systemd-journald[200]: Collecting audit messages is disabled. Dec 16 12:57:47.923049 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 16 12:57:47.923058 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 12:57:47.923067 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 12:57:47.923077 systemd-journald[200]: Journal started Dec 16 12:57:47.923099 systemd-journald[200]: Runtime Journal (/run/log/journal/3d67c03cb69249faae4e3deaddcbfc68) is 5.9M, max 47.9M, 41.9M free. Dec 16 12:57:47.935023 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 16 12:57:47.935046 kernel: Bridge firewalling registered Dec 16 12:57:47.899141 systemd-modules-load[202]: Inserted module 'overlay' Dec 16 12:57:47.933037 systemd-modules-load[202]: Inserted module 'br_netfilter' Dec 16 12:57:47.939690 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 12:57:47.939705 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 12:57:47.943819 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 12:57:47.949036 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 12:57:47.954233 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 12:57:47.960277 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 12:57:47.963139 systemd-tmpfiles[227]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Dec 16 12:57:47.967475 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Dec 16 12:57:47.971547 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 12:57:47.973803 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 16 12:57:47.979104 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 12:57:48.008774 dracut-cmdline[240]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022 Dec 16 12:57:48.030388 systemd-resolved[241]: Positive Trust Anchors: Dec 16 12:57:48.030406 systemd-resolved[241]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 12:57:48.030436 systemd-resolved[241]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 12:57:48.033312 systemd-resolved[241]: Defaulting to hostname 'linux'. Dec 16 12:57:48.044354 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 12:57:48.049610 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 12:57:48.127978 kernel: SCSI subsystem initialized Dec 16 12:57:48.138973 kernel: Loading iSCSI transport class v2.0-870. Dec 16 12:57:48.152990 kernel: iscsi: registered transport (tcp) Dec 16 12:57:48.175981 kernel: iscsi: registered transport (qla4xxx) Dec 16 12:57:48.176030 kernel: QLogic iSCSI HBA Driver Dec 16 12:57:48.197073 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 12:57:48.225667 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 12:57:48.227518 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 12:57:48.292784 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 16 12:57:48.295083 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 16 12:57:48.362993 kernel: raid6: avx2x4 gen() 29540 MB/s Dec 16 12:57:48.379977 kernel: raid6: avx2x2 gen() 28769 MB/s Dec 16 12:57:48.397913 kernel: raid6: avx2x1 gen() 25113 MB/s Dec 16 12:57:48.397957 kernel: raid6: using algorithm avx2x4 gen() 29540 MB/s Dec 16 12:57:48.415817 kernel: raid6: .... xor() 7734 MB/s, rmw enabled Dec 16 12:57:48.415883 kernel: raid6: using avx2x2 recovery algorithm Dec 16 12:57:48.437979 kernel: xor: automatically using best checksumming function avx Dec 16 12:57:48.607989 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 16 12:57:48.618174 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 16 12:57:48.621344 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 12:57:48.652839 systemd-udevd[452]: Using default interface naming scheme 'v255'. 
Dec 16 12:57:48.658436 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 12:57:48.664476 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 16 12:57:48.691247 dracut-pre-trigger[455]: rd.md=0: removing MD RAID activation Dec 16 12:57:48.731067 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 12:57:48.733224 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 12:57:48.820929 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 12:57:48.822873 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 16 12:57:48.864976 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Dec 16 12:57:48.872084 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Dec 16 12:57:48.873985 kernel: cryptd: max_cpu_qlen set to 1000 Dec 16 12:57:48.881362 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 16 12:57:48.881406 kernel: GPT:9289727 != 19775487 Dec 16 12:57:48.881417 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 16 12:57:48.881427 kernel: GPT:9289727 != 19775487 Dec 16 12:57:48.881436 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 16 12:57:48.881446 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 16 12:57:48.889977 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Dec 16 12:57:48.895964 kernel: libata version 3.00 loaded. Dec 16 12:57:48.905241 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 12:57:48.905496 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:57:48.915920 kernel: ahci 0000:00:1f.2: version 3.0 Dec 16 12:57:48.916176 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Dec 16 12:57:48.916189 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Dec 16 12:57:48.916332 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Dec 16 12:57:48.916467 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Dec 16 12:57:48.916598 kernel: AES CTR mode by8 optimization enabled Dec 16 12:57:48.917287 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:57:48.922855 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:57:48.929727 kernel: scsi host0: ahci Dec 16 12:57:48.928005 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
Dec 16 12:57:48.935958 kernel: scsi host1: ahci Dec 16 12:57:48.939016 kernel: scsi host2: ahci Dec 16 12:57:48.941992 kernel: scsi host3: ahci Dec 16 12:57:48.944042 kernel: scsi host4: ahci Dec 16 12:57:48.944219 kernel: scsi host5: ahci Dec 16 12:57:48.944364 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 1 Dec 16 12:57:48.945978 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 1 Dec 16 12:57:48.946004 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 1 Dec 16 12:57:48.946018 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 1 Dec 16 12:57:48.946030 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 1 Dec 16 12:57:48.946040 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 1 Dec 16 12:57:48.966170 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Dec 16 12:57:48.974553 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:57:48.991388 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Dec 16 12:57:49.001377 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Dec 16 12:57:49.003373 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Dec 16 12:57:49.015908 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 16 12:57:49.017614 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 16 12:57:49.052276 disk-uuid[600]: Primary Header is updated. Dec 16 12:57:49.052276 disk-uuid[600]: Secondary Entries is updated. Dec 16 12:57:49.052276 disk-uuid[600]: Secondary Header is updated. Dec 16 12:57:49.057977 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 16 12:57:49.062966 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 16 12:57:49.255743 kernel: ata2: SATA link down (SStatus 0 SControl 300) Dec 16 12:57:49.255827 kernel: ata1: SATA link down (SStatus 0 SControl 300) Dec 16 12:57:49.255969 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 16 12:57:49.257991 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 16 12:57:49.258982 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Dec 16 12:57:49.261993 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 16 12:57:49.262016 kernel: ata3.00: LPM support broken, forcing max_power Dec 16 12:57:49.263802 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Dec 16 12:57:49.263826 kernel: ata3.00: applying bridge limits Dec 16 12:57:49.265723 kernel: ata3.00: LPM support broken, forcing max_power Dec 16 12:57:49.265747 kernel: ata3.00: configured for UDMA/100 Dec 16 12:57:49.270006 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Dec 16 12:57:49.328510 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Dec 16 12:57:49.328820 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 16 12:57:49.342974 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Dec 16 12:57:49.722712 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 16 12:57:49.725166 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Dec 16 12:57:49.728197 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 12:57:49.730146 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 12:57:49.734687 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 16 12:57:49.769234 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 16 12:57:50.063982 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 16 12:57:50.064542 disk-uuid[601]: The operation has completed successfully. Dec 16 12:57:50.093747 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 16 12:57:50.093881 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 16 12:57:50.131342 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 16 12:57:50.150179 sh[642]: Success Dec 16 12:57:50.169467 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 16 12:57:50.169501 kernel: device-mapper: uevent: version 1.0.3 Dec 16 12:57:50.171156 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Dec 16 12:57:50.181984 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Dec 16 12:57:50.210585 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 16 12:57:50.213398 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 16 12:57:50.239052 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Dec 16 12:57:50.248752 kernel: BTRFS: device fsid a6ae7f96-a076-4d3c-81ed-46dd341492f8 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (654) Dec 16 12:57:50.248775 kernel: BTRFS info (device dm-0): first mount of filesystem a6ae7f96-a076-4d3c-81ed-46dd341492f8 Dec 16 12:57:50.248785 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 16 12:57:50.253959 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 16 12:57:50.253977 kernel: BTRFS info (device dm-0): enabling free space tree Dec 16 12:57:50.255259 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 16 12:57:50.257190 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Dec 16 12:57:50.258640 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 16 12:57:50.259461 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 16 12:57:50.267586 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 16 12:57:50.291990 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (687) Dec 16 12:57:50.295427 kernel: BTRFS info (device vda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 16 12:57:50.295452 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 16 12:57:50.299974 kernel: BTRFS info (device vda6): turning on async discard Dec 16 12:57:50.299997 kernel: BTRFS info (device vda6): enabling free space tree Dec 16 12:57:50.305983 kernel: BTRFS info (device vda6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 16 12:57:50.307095 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 16 12:57:50.310872 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Dec 16 12:57:50.398741 ignition[734]: Ignition 2.22.0 Dec 16 12:57:50.398755 ignition[734]: Stage: fetch-offline Dec 16 12:57:50.398795 ignition[734]: no configs at "/usr/lib/ignition/base.d" Dec 16 12:57:50.398808 ignition[734]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 16 12:57:50.403049 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 12:57:50.398914 ignition[734]: parsed url from cmdline: "" Dec 16 12:57:50.406286 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 12:57:50.398919 ignition[734]: no config URL provided Dec 16 12:57:50.398925 ignition[734]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 12:57:50.398935 ignition[734]: no config at "/usr/lib/ignition/user.ign" Dec 16 12:57:50.398971 ignition[734]: op(1): [started] loading QEMU firmware config module Dec 16 12:57:50.398977 ignition[734]: op(1): executing: "modprobe" "qemu_fw_cfg" Dec 16 12:57:50.423989 ignition[734]: op(1): [finished] loading QEMU firmware config module Dec 16 12:57:50.452776 systemd-networkd[831]: lo: Link UP Dec 16 12:57:50.452788 systemd-networkd[831]: lo: Gained carrier Dec 16 12:57:50.455832 systemd-networkd[831]: Enumeration completed Dec 16 12:57:50.457372 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 12:57:50.461759 systemd-networkd[831]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 12:57:50.461769 systemd-networkd[831]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 12:57:50.468269 systemd-networkd[831]: eth0: Link UP Dec 16 12:57:50.468461 systemd-networkd[831]: eth0: Gained carrier Dec 16 12:57:50.468473 systemd-networkd[831]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 12:57:50.471556 systemd[1]: Reached target network.target - Network. Dec 16 12:57:50.496986 systemd-networkd[831]: eth0: DHCPv4 address 10.0.0.34/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 16 12:57:50.529647 ignition[734]: parsing config with SHA512: b105a483269d3a4e27fcc75b529f596394d92fce5c2e9341544b8966c864a9c77e87db2836bcb0754d9fbfddb47a9c18a1a875281a15dd9384460d0f489802a0 Dec 16 12:57:50.534364 unknown[734]: fetched base config from "system" Dec 16 12:57:50.534375 unknown[734]: fetched user config from "qemu" Dec 16 12:57:50.534717 ignition[734]: fetch-offline: fetch-offline passed Dec 16 12:57:50.537252 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 12:57:50.534770 ignition[734]: Ignition finished successfully Dec 16 12:57:50.541055 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Dec 16 12:57:50.542064 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 16 12:57:50.592537 ignition[837]: Ignition 2.22.0 Dec 16 12:57:50.592550 ignition[837]: Stage: kargs Dec 16 12:57:50.592682 ignition[837]: no configs at "/usr/lib/ignition/base.d" Dec 16 12:57:50.592692 ignition[837]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 16 12:57:50.593398 ignition[837]: kargs: kargs passed Dec 16 12:57:50.593443 ignition[837]: Ignition finished successfully Dec 16 12:57:50.600294 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Dec 16 12:57:50.603014 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 16 12:57:50.639568 ignition[846]: Ignition 2.22.0 Dec 16 12:57:50.639582 ignition[846]: Stage: disks Dec 16 12:57:50.639709 ignition[846]: no configs at "/usr/lib/ignition/base.d" Dec 16 12:57:50.639720 ignition[846]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 16 12:57:50.640409 ignition[846]: disks: disks passed Dec 16 12:57:50.640447 ignition[846]: Ignition finished successfully Dec 16 12:57:50.646361 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 16 12:57:50.648740 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 16 12:57:50.649538 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 16 12:57:50.655546 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 12:57:50.660390 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 12:57:50.663491 systemd[1]: Reached target basic.target - Basic System. Dec 16 12:57:50.668881 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 16 12:57:50.706818 systemd-fsck[856]: ROOT: clean, 15/553520 files, 52789/553472 blocks Dec 16 12:57:50.715003 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 16 12:57:50.717528 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 16 12:57:50.832969 kernel: EXT4-fs (vda9): mounted filesystem e48ca59c-1206-4abd-b121-5e9b35e49852 r/w with ordered data mode. Quota mode: none. Dec 16 12:57:50.833182 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 16 12:57:50.836240 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 16 12:57:50.840835 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 12:57:50.844672 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 16 12:57:50.847708 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 16 12:57:50.847756 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 16 12:57:50.847779 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 12:57:50.859291 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 16 12:57:50.863597 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 16 12:57:50.874033 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (864) Dec 16 12:57:50.874056 kernel: BTRFS info (device vda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 16 12:57:50.874068 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 16 12:57:50.874079 kernel: BTRFS info (device vda6): turning on async discard Dec 16 12:57:50.874089 kernel: BTRFS info (device vda6): enabling free space tree Dec 16 12:57:50.875651 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 16 12:57:50.903106 initrd-setup-root[888]: cut: /sysroot/etc/passwd: No such file or directory Dec 16 12:57:50.907180 initrd-setup-root[895]: cut: /sysroot/etc/group: No such file or directory Dec 16 12:57:50.912615 initrd-setup-root[902]: cut: /sysroot/etc/shadow: No such file or directory Dec 16 12:57:50.917568 initrd-setup-root[909]: cut: /sysroot/etc/gshadow: No such file or directory Dec 16 12:57:51.002932 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 16 12:57:51.006107 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 16 12:57:51.008550 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 16 12:57:51.024960 kernel: BTRFS info (device vda6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 16 12:57:51.036679 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 16 12:57:51.053609 ignition[978]: INFO : Ignition 2.22.0 Dec 16 12:57:51.053609 ignition[978]: INFO : Stage: mount Dec 16 12:57:51.056160 ignition[978]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 12:57:51.056160 ignition[978]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 16 12:57:51.060074 ignition[978]: INFO : mount: mount passed Dec 16 12:57:51.061273 ignition[978]: INFO : Ignition finished successfully Dec 16 12:57:51.065465 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 16 12:57:51.069559 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 16 12:57:51.251583 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 16 12:57:51.253627 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 12:57:51.288968 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (990) Dec 16 12:57:51.292244 kernel: BTRFS info (device vda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 16 12:57:51.292273 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 16 12:57:51.295818 kernel: BTRFS info (device vda6): turning on async discard Dec 16 12:57:51.295862 kernel: BTRFS info (device vda6): enabling free space tree Dec 16 12:57:51.297598 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 16 12:57:51.336039 ignition[1007]: INFO : Ignition 2.22.0 Dec 16 12:57:51.336039 ignition[1007]: INFO : Stage: files Dec 16 12:57:51.339057 ignition[1007]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 12:57:51.339057 ignition[1007]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 16 12:57:51.339057 ignition[1007]: DEBUG : files: compiled without relabeling support, skipping Dec 16 12:57:51.345559 ignition[1007]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 16 12:57:51.345559 ignition[1007]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 16 12:57:51.352524 ignition[1007]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 16 12:57:51.355236 ignition[1007]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 16 12:57:51.357687 ignition[1007]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 16 12:57:51.355745 unknown[1007]: wrote ssh authorized keys file for user: core Dec 16 12:57:51.362504 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Dec 16 12:57:51.362504 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Dec 16 12:57:51.397149 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 16 12:57:51.467319 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Dec 16 12:57:51.467319 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 16 12:57:51.474107 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 16 12:57:51.474107 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 16 12:57:51.474107 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 16 12:57:51.474107 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 12:57:51.474107 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 12:57:51.474107 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 12:57:51.492774 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 12:57:51.492774 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 12:57:51.492774 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 12:57:51.492774 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Dec 16 12:57:51.492774 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Dec 16 12:57:51.492774 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Dec 16 12:57:51.492774 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Dec 16 12:57:51.759207 systemd-networkd[831]: eth0: Gained IPv6LL Dec 16 12:57:51.924150 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 16 12:57:52.296994 ignition[1007]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Dec 16 12:57:52.301024 ignition[1007]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 16 12:57:52.303759 ignition[1007]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 12:57:52.311894 ignition[1007]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 12:57:52.311894 ignition[1007]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 16 12:57:52.311894 ignition[1007]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Dec 16 12:57:52.319076 ignition[1007]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 16 12:57:52.319076 ignition[1007]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 16 12:57:52.319076 ignition[1007]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Dec 16 12:57:52.319076 ignition[1007]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Dec 16 12:57:52.338305 ignition[1007]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Dec 16 12:57:52.345140 ignition[1007]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Dec 16 12:57:52.347686 ignition[1007]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Dec 16 12:57:52.347686 ignition[1007]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Dec 16 12:57:52.347686 ignition[1007]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Dec 16 12:57:52.347686 ignition[1007]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 16 12:57:52.347686 ignition[1007]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 16 12:57:52.347686 ignition[1007]: INFO : files: files passed Dec 16 12:57:52.347686 ignition[1007]: INFO : Ignition finished successfully Dec 16 12:57:52.355297 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 16 12:57:52.360019 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 16 12:57:52.368176 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Dec 16 12:57:52.375566 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 16 12:57:52.375741 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 16 12:57:52.382529 initrd-setup-root-after-ignition[1036]: grep: /sysroot/oem/oem-release: No such file or directory Dec 16 12:57:52.387433 initrd-setup-root-after-ignition[1038]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 12:57:52.390115 initrd-setup-root-after-ignition[1042]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 12:57:52.389610 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 12:57:52.395011 initrd-setup-root-after-ignition[1038]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 16 12:57:52.391259 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 16 12:57:52.396202 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 16 12:57:52.477247 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 16 12:57:52.477391 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 16 12:57:52.481368 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 16 12:57:52.481751 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 16 12:57:52.487021 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 16 12:57:52.488058 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 16 12:57:52.515697 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 12:57:52.517575 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 16 12:57:52.542267 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 16 12:57:52.543030 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 12:57:52.546651 systemd[1]: Stopped target timers.target - Timer Units. Dec 16 12:57:52.550552 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 16 12:57:52.550663 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 12:57:52.555881 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 16 12:57:52.556797 systemd[1]: Stopped target basic.target - Basic System. Dec 16 12:57:52.561361 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 16 12:57:52.564421 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 12:57:52.567475 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 16 12:57:52.571322 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 16 12:57:52.574482 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 16 12:57:52.577923 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 12:57:52.582392 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 16 12:57:52.586133 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 16 12:57:52.589259 systemd[1]: Stopped target swap.target - Swaps. Dec 16 12:57:52.590510 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 16 12:57:52.590636 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. 
Dec 16 12:57:52.596736 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 16 12:57:52.600025 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 12:57:52.600983 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 16 12:57:52.605328 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 12:57:52.606740 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 16 12:57:52.606852 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 16 12:57:52.614210 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 16 12:57:52.614322 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 12:57:52.617749 systemd[1]: Stopped target paths.target - Path Units. Dec 16 12:57:52.618578 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 16 12:57:52.626016 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 12:57:52.626773 systemd[1]: Stopped target slices.target - Slice Units. Dec 16 12:57:52.631012 systemd[1]: Stopped target sockets.target - Socket Units. Dec 16 12:57:52.633786 systemd[1]: iscsid.socket: Deactivated successfully. Dec 16 12:57:52.633920 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 12:57:52.636617 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 16 12:57:52.636719 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 12:57:52.639440 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 16 12:57:52.639574 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 12:57:52.642405 systemd[1]: ignition-files.service: Deactivated successfully. Dec 16 12:57:52.642525 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 16 12:57:52.646482 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 16 12:57:52.653435 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 16 12:57:52.654156 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 16 12:57:52.654304 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 12:57:52.654792 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 16 12:57:52.654932 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 12:57:52.669252 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 16 12:57:52.669374 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 16 12:57:52.691523 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 16 12:57:52.696957 ignition[1063]: INFO : Ignition 2.22.0 Dec 16 12:57:52.696957 ignition[1063]: INFO : Stage: umount Dec 16 12:57:52.696957 ignition[1063]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 12:57:52.696957 ignition[1063]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 16 12:57:52.702512 ignition[1063]: INFO : umount: umount passed Dec 16 12:57:52.702512 ignition[1063]: INFO : Ignition finished successfully Dec 16 12:57:52.701500 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 16 12:57:52.701632 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 16 12:57:52.705195 systemd[1]: Stopped target network.target - Network. 
Dec 16 12:57:52.707578 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 16 12:57:52.707633 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 16 12:57:52.710475 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 16 12:57:52.710524 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 16 12:57:52.711516 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 16 12:57:52.711577 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 16 12:57:52.716640 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 16 12:57:52.716693 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 16 12:57:52.717669 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 16 12:57:52.721865 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 16 12:57:52.731892 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 16 12:57:52.732086 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 16 12:57:52.738314 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Dec 16 12:57:52.738598 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 16 12:57:52.738645 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 12:57:52.743844 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Dec 16 12:57:52.753762 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 16 12:57:52.753918 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 16 12:57:52.759730 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Dec 16 12:57:52.760456 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 16 12:57:52.765389 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 16 12:57:52.765448 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 16 12:57:52.769559 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 16 12:57:52.771299 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 16 12:57:52.771351 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 12:57:52.775670 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 16 12:57:52.775734 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 16 12:57:52.783250 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 16 12:57:52.783300 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 16 12:57:52.784387 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 12:57:52.789957 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 16 12:57:52.818045 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 16 12:57:52.818211 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 16 12:57:52.823773 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 16 12:57:52.824085 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 12:57:52.826699 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 16 12:57:52.826761 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
Dec 16 12:57:52.832654 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 16 12:57:52.832701 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 12:57:52.835851 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 16 12:57:52.835904 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 16 12:57:52.840563 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 16 12:57:52.840625 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 16 12:57:52.844261 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 16 12:57:52.844313 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 12:57:52.856304 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 16 12:57:52.857292 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 16 12:57:52.857372 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 12:57:52.865818 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 16 12:57:52.865875 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 12:57:52.871734 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 16 12:57:52.871825 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 12:57:52.877928 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 16 12:57:52.878003 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 12:57:52.878842 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 12:57:52.878895 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:57:52.891102 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 16 12:57:52.891219 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 16 12:57:52.895384 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 16 12:57:52.895475 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 16 12:57:52.899615 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 16 12:57:52.899753 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 16 12:57:52.901192 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 16 12:57:52.906305 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 16 12:57:52.940079 systemd[1]: Switching root. Dec 16 12:57:52.983911 systemd-journald[200]: Journal stopped Dec 16 12:57:54.348539 systemd-journald[200]: Received SIGTERM from PID 1 (systemd). 
Dec 16 12:57:54.348608 kernel: SELinux: policy capability network_peer_controls=1 Dec 16 12:57:54.348625 kernel: SELinux: policy capability open_perms=1 Dec 16 12:57:54.348645 kernel: SELinux: policy capability extended_socket_class=1 Dec 16 12:57:54.348659 kernel: SELinux: policy capability always_check_network=0 Dec 16 12:57:54.348677 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 16 12:57:54.348691 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 16 12:57:54.348708 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 16 12:57:54.348722 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 16 12:57:54.348736 kernel: SELinux: policy capability userspace_initial_context=0 Dec 16 12:57:54.348766 kernel: audit: type=1403 audit(1765889873.427:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 16 12:57:54.348783 systemd[1]: Successfully loaded SELinux policy in 68.643ms. Dec 16 12:57:54.348816 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.360ms. Dec 16 12:57:54.348833 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 12:57:54.348853 systemd[1]: Detected virtualization kvm. Dec 16 12:57:54.348867 systemd[1]: Detected architecture x86-64. Dec 16 12:57:54.348880 systemd[1]: Detected first boot. Dec 16 12:57:54.348895 systemd[1]: Initializing machine ID from VM UUID. Dec 16 12:57:54.348910 zram_generator::config[1110]: No configuration found. Dec 16 12:57:54.348927 kernel: Guest personality initialized and is inactive Dec 16 12:57:54.348956 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Dec 16 12:57:54.348971 kernel: Initialized host personality Dec 16 12:57:54.348985 kernel: NET: Registered PF_VSOCK protocol family Dec 16 12:57:54.349003 systemd[1]: Populated /etc with preset unit settings. Dec 16 12:57:54.349018 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Dec 16 12:57:54.349033 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 16 12:57:54.349047 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 16 12:57:54.349062 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 16 12:57:54.349079 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 16 12:57:54.349094 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 16 12:57:54.349118 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 16 12:57:54.349150 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 16 12:57:54.349168 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 16 12:57:54.349181 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 16 12:57:54.349193 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 16 12:57:54.349205 systemd[1]: Created slice user.slice - User and Session Slice. Dec 16 12:57:54.349217 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Dec 16 12:57:54.349229 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 12:57:54.349241 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 16 12:57:54.349253 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 16 12:57:54.349268 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 16 12:57:54.349280 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 12:57:54.349292 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 16 12:57:54.349304 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 12:57:54.349321 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 12:57:54.349333 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 16 12:57:54.349344 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 16 12:57:54.349358 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 16 12:57:54.349370 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 16 12:57:54.349384 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 12:57:54.349400 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 12:57:54.349412 systemd[1]: Reached target slices.target - Slice Units. Dec 16 12:57:54.349424 systemd[1]: Reached target swap.target - Swaps. Dec 16 12:57:54.349436 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 16 12:57:54.349447 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 16 12:57:54.349459 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 16 12:57:54.349471 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 12:57:54.349485 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 12:57:54.349496 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 12:57:54.349508 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 16 12:57:54.349519 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 16 12:57:54.349531 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 16 12:57:54.349543 systemd[1]: Mounting media.mount - External Media Directory... Dec 16 12:57:54.349555 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 12:57:54.349566 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 16 12:57:54.349578 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 16 12:57:54.349592 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 16 12:57:54.349605 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 16 12:57:54.349617 systemd[1]: Reached target machines.target - Containers. Dec 16 12:57:54.349629 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Dec 16 12:57:54.349642 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:57:54.349654 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 12:57:54.349665 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 16 12:57:54.349678 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 12:57:54.349692 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 12:57:54.349704 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 12:57:54.349715 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 16 12:57:54.349727 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 12:57:54.349739 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 16 12:57:54.349751 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 16 12:57:54.349772 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 16 12:57:54.349784 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 16 12:57:54.349800 systemd[1]: Stopped systemd-fsck-usr.service. Dec 16 12:57:54.349812 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 12:57:54.349825 kernel: fuse: init (API version 7.41) Dec 16 12:57:54.349837 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 12:57:54.349849 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 12:57:54.349863 kernel: loop: module loaded Dec 16 12:57:54.349874 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 12:57:54.349910 systemd-journald[1195]: Collecting audit messages is disabled. Dec 16 12:57:54.349976 kernel: ACPI: bus type drm_connector registered Dec 16 12:57:54.349993 systemd-journald[1195]: Journal started Dec 16 12:57:54.350016 systemd-journald[1195]: Runtime Journal (/run/log/journal/3d67c03cb69249faae4e3deaddcbfc68) is 5.9M, max 47.9M, 41.9M free. Dec 16 12:57:54.012287 systemd[1]: Queued start job for default target multi-user.target. Dec 16 12:57:54.036599 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 16 12:57:54.037149 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 16 12:57:54.395970 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 16 12:57:54.407424 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 16 12:57:54.422462 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 12:57:54.425515 systemd[1]: verity-setup.service: Deactivated successfully. Dec 16 12:57:54.425548 systemd[1]: Stopped verity-setup.service. Dec 16 12:57:54.430974 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 12:57:54.438002 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 12:57:54.439214 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Dec 16 12:57:54.441603 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 16 12:57:54.444137 systemd[1]: Mounted media.mount - External Media Directory. Dec 16 12:57:54.446255 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 16 12:57:54.448675 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 16 12:57:54.451230 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 16 12:57:54.453669 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 16 12:57:54.456732 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 12:57:54.459693 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 16 12:57:54.460149 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 16 12:57:54.463203 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 12:57:54.463517 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 12:57:54.465860 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 12:57:54.466092 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 12:57:54.468518 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 12:57:54.468746 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 12:57:54.471364 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 16 12:57:54.471591 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 16 12:57:54.474059 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 12:57:54.474283 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 12:57:54.476481 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 12:57:54.479260 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 12:57:54.482489 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 16 12:57:54.485974 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 16 12:57:54.509830 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 12:57:54.514463 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 16 12:57:54.519921 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 16 12:57:54.581344 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 16 12:57:54.581407 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 12:57:54.584668 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 16 12:57:54.588427 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 16 12:57:54.590408 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:57:54.591677 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 16 12:57:54.604802 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 16 12:57:54.607170 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Dec 16 12:57:54.610046 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 16 12:57:54.612134 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 12:57:54.613533 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 12:57:54.616013 systemd-journald[1195]: Time spent on flushing to /var/log/journal/3d67c03cb69249faae4e3deaddcbfc68 is 25.136ms for 1036 entries. Dec 16 12:57:54.616013 systemd-journald[1195]: System Journal (/var/log/journal/3d67c03cb69249faae4e3deaddcbfc68) is 8M, max 195.6M, 187.6M free. Dec 16 12:57:55.237671 systemd-journald[1195]: Received client request to flush runtime journal. Dec 16 12:57:55.237725 kernel: loop0: detected capacity change from 0 to 229808 Dec 16 12:57:55.237752 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 16 12:57:55.237769 kernel: loop1: detected capacity change from 0 to 128560 Dec 16 12:57:55.237784 kernel: loop2: detected capacity change from 0 to 110984 Dec 16 12:57:55.237800 kernel: loop3: detected capacity change from 0 to 229808 Dec 16 12:57:55.237816 kernel: loop4: detected capacity change from 0 to 128560 Dec 16 12:57:55.237831 kernel: loop5: detected capacity change from 0 to 110984 Dec 16 12:57:55.237847 zram_generator::config[1270]: No configuration found. Dec 16 12:57:54.618308 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 16 12:57:54.622719 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 12:57:54.627196 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 12:57:54.630705 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 16 12:57:54.632824 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 16 12:57:54.772239 systemd-tmpfiles[1230]: ACLs are not supported, ignoring. Dec 16 12:57:54.772251 systemd-tmpfiles[1230]: ACLs are not supported, ignoring. Dec 16 12:57:54.772411 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 12:57:54.781903 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 12:57:54.792121 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 16 12:57:55.140217 (sd-merge)[1244]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Dec 16 12:57:55.140811 (sd-merge)[1244]: Merged extensions into '/usr'. Dec 16 12:57:55.145991 systemd[1]: Reload requested from client PID 1229 ('systemd-sysext') (unit systemd-sysext.service)... Dec 16 12:57:55.146001 systemd[1]: Reloading... Dec 16 12:57:55.447141 ldconfig[1224]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 16 12:57:55.462626 systemd[1]: Reloading finished in 316 ms. Dec 16 12:57:55.496497 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 16 12:57:55.499265 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 16 12:57:55.502097 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 16 12:57:55.505104 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 16 12:57:55.508074 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
Dec 16 12:57:55.525961 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 16 12:57:55.552355 systemd[1]: Starting ensure-sysext.service... Dec 16 12:57:55.555041 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 16 12:57:55.558216 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 12:57:55.564108 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 12:57:55.573366 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 16 12:57:55.585480 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 16 12:57:55.586643 systemd-tmpfiles[1315]: ACLs are not supported, ignoring. Dec 16 12:57:55.586735 systemd-tmpfiles[1315]: ACLs are not supported, ignoring. Dec 16 12:57:55.586754 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 16 12:57:55.589578 systemd-tmpfiles[1316]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 16 12:57:55.589759 systemd-tmpfiles[1316]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 16 12:57:55.590062 systemd-tmpfiles[1316]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 16 12:57:55.590339 systemd[1]: Reload requested from client PID 1313 ('systemctl') (unit ensure-sysext.service)... Dec 16 12:57:55.590374 systemd[1]: Reloading... Dec 16 12:57:55.590396 systemd-tmpfiles[1316]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 16 12:57:55.591235 systemd-tmpfiles[1316]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 16 12:57:55.591488 systemd-tmpfiles[1316]: ACLs are not supported, ignoring. Dec 16 12:57:55.591566 systemd-tmpfiles[1316]: ACLs are not supported, ignoring. Dec 16 12:57:55.596054 systemd-tmpfiles[1316]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 12:57:55.596068 systemd-tmpfiles[1316]: Skipping /boot Dec 16 12:57:55.606685 systemd-tmpfiles[1316]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 12:57:55.606699 systemd-tmpfiles[1316]: Skipping /boot Dec 16 12:57:55.640986 zram_generator::config[1346]: No configuration found. Dec 16 12:57:55.811987 systemd[1]: Reloading finished in 221 ms. Dec 16 12:57:55.853812 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 12:57:55.856623 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 12:57:55.866562 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 12:57:55.869679 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 16 12:57:55.873078 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 16 12:57:55.888149 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 12:57:55.892639 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 12:57:55.897128 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 16 12:57:55.902293 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Dec 16 12:57:55.902465 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:57:55.903574 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 12:57:55.915340 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 12:57:55.922220 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 12:57:55.924606 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:57:55.924770 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 12:57:55.929317 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 16 12:57:55.931516 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 12:57:55.933874 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 16 12:57:55.937061 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 12:57:55.937351 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 12:57:55.940396 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 12:57:55.946367 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 12:57:55.947423 systemd-udevd[1390]: Using default interface naming scheme 'v255'. Dec 16 12:57:55.951040 augenrules[1414]: No rules Dec 16 12:57:55.954622 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 12:57:55.955161 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 12:57:55.958411 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 12:57:55.958689 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 12:57:55.971580 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 12:57:55.972246 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:57:55.975207 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 12:57:55.979203 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 12:57:55.983311 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 12:57:55.985418 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:57:55.985551 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 12:57:55.988250 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 16 12:57:55.991021 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 12:57:55.992098 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Dec 16 12:57:55.998192 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 16 12:57:56.005000 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 16 12:57:56.016869 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 12:57:56.022269 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 12:57:56.034054 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 12:57:56.034359 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 12:57:56.037115 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 16 12:57:56.039478 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 12:57:56.039725 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 12:57:56.046367 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 16 12:57:56.053935 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 12:57:56.194882 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 12:57:56.197314 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:57:56.200270 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 12:57:56.204454 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 12:57:56.207432 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:57:56.207600 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 12:57:56.211315 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 12:57:56.213124 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 12:57:56.213299 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 16 12:57:56.213445 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 12:57:56.219967 kernel: mousedev: PS/2 mouse device common for all mice Dec 16 12:57:56.244641 augenrules[1465]: /sbin/augenrules: No change Dec 16 12:57:56.246043 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 12:57:56.247029 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 12:57:56.250066 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 12:57:56.250372 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 12:57:56.254531 augenrules[1496]: No rules Dec 16 12:57:56.258548 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 12:57:56.259313 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 12:57:56.262012 systemd[1]: Finished ensure-sysext.service. Dec 16 12:57:56.276154 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
Dec 16 12:57:56.297582 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Dec 16 12:57:56.307318 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Dec 16 12:57:56.312322 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Dec 16 12:57:56.312503 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 16 12:57:56.316960 kernel: ACPI: button: Power Button [PWRF] Dec 16 12:57:56.358219 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 16 12:57:56.369805 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 16 12:57:56.372202 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 12:57:56.406508 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 16 12:57:56.440195 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:57:56.442546 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 16 12:57:56.454238 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 12:57:56.455411 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:57:56.459473 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:57:56.511604 systemd-resolved[1388]: Positive Trust Anchors: Dec 16 12:57:56.511986 systemd-resolved[1388]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 12:57:56.512027 systemd-resolved[1388]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 12:57:56.514827 kernel: kvm_amd: TSC scaling supported Dec 16 12:57:56.514879 kernel: kvm_amd: Nested Virtualization enabled Dec 16 12:57:56.514905 kernel: kvm_amd: Nested Paging enabled Dec 16 12:57:56.514925 kernel: kvm_amd: LBR virtualization supported Dec 16 12:57:56.516700 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Dec 16 12:57:56.516739 kernel: kvm_amd: Virtual GIF supported Dec 16 12:57:56.518418 systemd-resolved[1388]: Defaulting to hostname 'linux'. Dec 16 12:57:56.525061 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 12:57:56.525879 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 12:57:56.563351 systemd-networkd[1477]: lo: Link UP Dec 16 12:57:56.563685 systemd-networkd[1477]: lo: Gained carrier Dec 16 12:57:56.565422 systemd-networkd[1477]: Enumeration completed Dec 16 12:57:56.566046 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 12:57:56.566496 systemd[1]: Reached target network.target - Network. Dec 16 12:57:56.568690 systemd-networkd[1477]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 12:57:56.568787 systemd-networkd[1477]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Dec 16 12:57:56.570136 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 16 12:57:56.571755 systemd-networkd[1477]: eth0: Link UP Dec 16 12:57:56.572408 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 16 12:57:56.573618 systemd-networkd[1477]: eth0: Gained carrier Dec 16 12:57:56.573681 systemd-networkd[1477]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 12:57:56.578604 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:57:56.583964 kernel: EDAC MC: Ver: 3.0.0 Dec 16 12:57:56.585089 systemd-networkd[1477]: eth0: DHCPv4 address 10.0.0.34/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 16 12:57:56.601262 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 16 12:57:56.654494 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 16 12:57:56.656791 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 12:57:56.658678 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 16 12:57:57.820484 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 16 12:57:57.820549 systemd-resolved[1388]: Clock change detected. Flushing caches. Dec 16 12:57:57.822578 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Dec 16 12:57:57.822596 systemd-timesyncd[1518]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 16 12:57:57.824530 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 16 12:57:57.824540 systemd-timesyncd[1518]: Initial clock synchronization to Tue 2025-12-16 12:57:57.820447 UTC. Dec 16 12:57:57.826604 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 16 12:57:57.826638 systemd[1]: Reached target paths.target - Path Units. Dec 16 12:57:57.828110 systemd[1]: Reached target time-set.target - System Time Set. Dec 16 12:57:57.829973 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 16 12:57:57.831877 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 16 12:57:57.833993 systemd[1]: Reached target timers.target - Timer Units. Dec 16 12:57:57.836570 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 16 12:57:57.840027 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 16 12:57:57.843723 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 16 12:57:57.845916 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 16 12:57:57.847936 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 16 12:57:57.859318 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 16 12:57:57.861592 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 16 12:57:57.864452 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 16 12:57:57.867147 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 12:57:57.868865 systemd[1]: Reached target basic.target - Basic System. 
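The eth0 setup above comes from Flatcar's stock /usr/lib/systemd/network/zz-default.network, which enables DHCP on otherwise unconfigured interfaces; the lease 10.0.0.34/16 via gateway 10.0.0.1 is acquired through it. A minimal admin-supplied equivalent, shown only as a sketch (the drop-in name 10-eth0.network and the Name= match are assumptions, not part of this log), would be:

# hypothetical override; this host simply uses the shipped zz-default.network
cat <<'EOF' >/etc/systemd/network/10-eth0.network
[Match]
Name=eth0

[Network]
DHCP=yes
EOF
systemctl restart systemd-networkd   # or 'networkctl reload' to re-apply without a restart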
Dec 16 12:57:57.870418 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 16 12:57:57.870445 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 16 12:57:57.871450 systemd[1]: Starting containerd.service - containerd container runtime... Dec 16 12:57:57.874103 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 16 12:57:57.876604 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 16 12:57:57.884521 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 16 12:57:57.887212 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 16 12:57:57.890052 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 16 12:57:57.892602 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Dec 16 12:57:57.894249 jq[1547]: false Dec 16 12:57:57.894794 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 16 12:57:57.896416 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 16 12:57:57.900685 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 16 12:57:57.904435 extend-filesystems[1548]: Found /dev/vda6 Dec 16 12:57:57.904767 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 16 12:57:57.907195 google_oslogin_nss_cache[1549]: oslogin_cache_refresh[1549]: Refreshing passwd entry cache Dec 16 12:57:57.906687 oslogin_cache_refresh[1549]: Refreshing passwd entry cache Dec 16 12:57:57.909714 extend-filesystems[1548]: Found /dev/vda9 Dec 16 12:57:57.911218 extend-filesystems[1548]: Checking size of /dev/vda9 Dec 16 12:57:57.913446 google_oslogin_nss_cache[1549]: oslogin_cache_refresh[1549]: Failure getting users, quitting Dec 16 12:57:57.913446 google_oslogin_nss_cache[1549]: oslogin_cache_refresh[1549]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 16 12:57:57.913427 oslogin_cache_refresh[1549]: Failure getting users, quitting Dec 16 12:57:57.914399 google_oslogin_nss_cache[1549]: oslogin_cache_refresh[1549]: Refreshing group entry cache Dec 16 12:57:57.913447 oslogin_cache_refresh[1549]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 16 12:57:57.913587 oslogin_cache_refresh[1549]: Refreshing group entry cache Dec 16 12:57:57.915043 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 16 12:57:57.917843 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 16 12:57:57.920575 google_oslogin_nss_cache[1549]: oslogin_cache_refresh[1549]: Failure getting groups, quitting Dec 16 12:57:57.920575 google_oslogin_nss_cache[1549]: oslogin_cache_refresh[1549]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 16 12:57:57.918430 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 16 12:57:57.918014 oslogin_cache_refresh[1549]: Failure getting groups, quitting Dec 16 12:57:57.918025 oslogin_cache_refresh[1549]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. 
Dec 16 12:57:57.922845 systemd[1]: Starting update-engine.service - Update Engine... Dec 16 12:57:57.924995 extend-filesystems[1548]: Resized partition /dev/vda9 Dec 16 12:57:57.925729 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 16 12:57:57.931312 extend-filesystems[1567]: resize2fs 1.47.3 (8-Jul-2025) Dec 16 12:57:57.933180 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 16 12:57:57.936418 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 16 12:57:57.936742 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 16 12:57:57.937074 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Dec 16 12:57:57.937607 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Dec 16 12:57:57.939989 jq[1566]: true Dec 16 12:57:57.939995 systemd[1]: motdgen.service: Deactivated successfully. Dec 16 12:57:57.940826 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 16 12:57:57.944460 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Dec 16 12:57:57.947939 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 16 12:57:57.948213 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 16 12:57:57.985433 jq[1577]: true Dec 16 12:57:57.988364 update_engine[1563]: I20251216 12:57:57.988273 1563 main.cc:92] Flatcar Update Engine starting Dec 16 12:57:58.064937 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Dec 16 12:57:58.068642 extend-filesystems[1567]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 16 12:57:58.068642 extend-filesystems[1567]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 16 12:57:58.068642 extend-filesystems[1567]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Dec 16 12:57:58.077450 extend-filesystems[1548]: Resized filesystem in /dev/vda9 Dec 16 12:57:58.069959 (ntainerd)[1589]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 16 12:57:58.080707 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 16 12:57:58.081021 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 16 12:57:58.089858 tar[1575]: linux-amd64/LICENSE Dec 16 12:57:58.090139 tar[1575]: linux-amd64/helm Dec 16 12:57:58.111110 systemd-logind[1561]: Watching system buttons on /dev/input/event2 (Power Button) Dec 16 12:57:58.112129 dbus-daemon[1545]: [system] SELinux support is enabled Dec 16 12:57:58.111144 systemd-logind[1561]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 16 12:57:58.111614 systemd-logind[1561]: New seat seat0. Dec 16 12:57:58.112795 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 16 12:57:58.120719 systemd[1]: Started systemd-logind.service - User Login Management. Dec 16 12:57:58.124674 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 16 12:57:58.124707 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
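extend-filesystems.service above grows the root filesystem in place: the partition /dev/vda9 is resized and the mounted ext4 on it is taken from 553472 to 1864699 4k blocks. Flatcar does this automatically on first boot; the manual equivalent, sketched here purely for illustration, is roughly:

growpart /dev/vda 9   # grow partition 9 to fill the disk (cloud-utils growpart; assumed available)
resize2fs /dev/vda9   # online-resize the mounted ext4 filesystem to the new partition size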
Dec 16 12:57:58.127073 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 16 12:57:58.127092 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 16 12:57:58.136482 bash[1608]: Updated "/home/core/.ssh/authorized_keys" Dec 16 12:57:58.136152 dbus-daemon[1545]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 16 12:57:58.136739 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 16 12:57:58.140721 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Dec 16 12:57:58.141944 systemd[1]: Started update-engine.service - Update Engine. Dec 16 12:57:58.144924 update_engine[1563]: I20251216 12:57:58.143713 1563 update_check_scheduler.cc:74] Next update check in 6m39s Dec 16 12:57:58.147714 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 16 12:57:58.197420 locksmithd[1610]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 16 12:57:58.264362 containerd[1589]: time="2025-12-16T12:57:58Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 16 12:57:58.266510 containerd[1589]: time="2025-12-16T12:57:58.265034051Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Dec 16 12:57:58.277801 containerd[1589]: time="2025-12-16T12:57:58.277775711Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.055µs" Dec 16 12:57:58.277867 containerd[1589]: time="2025-12-16T12:57:58.277854138Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 16 12:57:58.277935 containerd[1589]: time="2025-12-16T12:57:58.277923088Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 16 12:57:58.278132 containerd[1589]: time="2025-12-16T12:57:58.278116871Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 16 12:57:58.278186 containerd[1589]: time="2025-12-16T12:57:58.278175651Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 16 12:57:58.278253 containerd[1589]: time="2025-12-16T12:57:58.278231566Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 12:57:58.278365 containerd[1589]: time="2025-12-16T12:57:58.278348906Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 12:57:58.278420 containerd[1589]: time="2025-12-16T12:57:58.278408468Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 12:57:58.278719 containerd[1589]: time="2025-12-16T12:57:58.278700004Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 12:57:58.278770 containerd[1589]: 
time="2025-12-16T12:57:58.278759506Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 12:57:58.278824 containerd[1589]: time="2025-12-16T12:57:58.278812455Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 12:57:58.278866 containerd[1589]: time="2025-12-16T12:57:58.278856818Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 16 12:57:58.278999 containerd[1589]: time="2025-12-16T12:57:58.278985309Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 16 12:57:58.279294 containerd[1589]: time="2025-12-16T12:57:58.279276656Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 12:57:58.279383 containerd[1589]: time="2025-12-16T12:57:58.279368919Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 12:57:58.279426 containerd[1589]: time="2025-12-16T12:57:58.279416488Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 16 12:57:58.279511 containerd[1589]: time="2025-12-16T12:57:58.279482041Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 16 12:57:58.279884 containerd[1589]: time="2025-12-16T12:57:58.279859568Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 16 12:57:58.279993 containerd[1589]: time="2025-12-16T12:57:58.279979704Z" level=info msg="metadata content store policy set" policy=shared Dec 16 12:57:58.287435 sshd_keygen[1576]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 16 12:57:58.300562 containerd[1589]: time="2025-12-16T12:57:58.300481975Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 16 12:57:58.300646 containerd[1589]: time="2025-12-16T12:57:58.300598865Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 16 12:57:58.300646 containerd[1589]: time="2025-12-16T12:57:58.300618672Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 16 12:57:58.300646 containerd[1589]: time="2025-12-16T12:57:58.300631065Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 16 12:57:58.300646 containerd[1589]: time="2025-12-16T12:57:58.300643148Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 16 12:57:58.300737 containerd[1589]: time="2025-12-16T12:57:58.300653036Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 16 12:57:58.300737 containerd[1589]: time="2025-12-16T12:57:58.300664888Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 16 12:57:58.300737 containerd[1589]: time="2025-12-16T12:57:58.300675889Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 16 12:57:58.300737 containerd[1589]: time="2025-12-16T12:57:58.300709212Z" level=info 
msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 16 12:57:58.300737 containerd[1589]: time="2025-12-16T12:57:58.300720082Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 16 12:57:58.300737 containerd[1589]: time="2025-12-16T12:57:58.300729480Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 16 12:57:58.300846 containerd[1589]: time="2025-12-16T12:57:58.300764215Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 16 12:57:58.300953 containerd[1589]: time="2025-12-16T12:57:58.300926950Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 16 12:57:58.300979 containerd[1589]: time="2025-12-16T12:57:58.300960523Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 16 12:57:58.300999 containerd[1589]: time="2025-12-16T12:57:58.300983366Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 16 12:57:58.301019 containerd[1589]: time="2025-12-16T12:57:58.301002441Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 16 12:57:58.301039 containerd[1589]: time="2025-12-16T12:57:58.301016127Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 16 12:57:58.301039 containerd[1589]: time="2025-12-16T12:57:58.301029272Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 16 12:57:58.301130 containerd[1589]: time="2025-12-16T12:57:58.301052555Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 16 12:57:58.301130 containerd[1589]: time="2025-12-16T12:57:58.301067694Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 16 12:57:58.301130 containerd[1589]: time="2025-12-16T12:57:58.301081019Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 16 12:57:58.301130 containerd[1589]: time="2025-12-16T12:57:58.301093873Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 16 12:57:58.301130 containerd[1589]: time="2025-12-16T12:57:58.301108440Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 16 12:57:58.301259 containerd[1589]: time="2025-12-16T12:57:58.301170577Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 16 12:57:58.301259 containerd[1589]: time="2025-12-16T12:57:58.301187699Z" level=info msg="Start snapshots syncer" Dec 16 12:57:58.301259 containerd[1589]: time="2025-12-16T12:57:58.301233595Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 16 12:57:58.301627 containerd[1589]: time="2025-12-16T12:57:58.301573161Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 16 12:57:58.301750 containerd[1589]: time="2025-12-16T12:57:58.301648393Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 16 12:57:58.301750 containerd[1589]: time="2025-12-16T12:57:58.301709146Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 16 12:57:58.301855 containerd[1589]: time="2025-12-16T12:57:58.301829783Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 16 12:57:58.301881 containerd[1589]: time="2025-12-16T12:57:58.301860951Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 16 12:57:58.301881 containerd[1589]: time="2025-12-16T12:57:58.301872453Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 16 12:57:58.301918 containerd[1589]: time="2025-12-16T12:57:58.301881269Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 16 12:57:58.301918 containerd[1589]: time="2025-12-16T12:57:58.301901477Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 16 12:57:58.301918 containerd[1589]: time="2025-12-16T12:57:58.301911857Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 16 12:57:58.301980 containerd[1589]: time="2025-12-16T12:57:58.301922527Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 16 12:57:58.301980 containerd[1589]: time="2025-12-16T12:57:58.301942895Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 16 12:57:58.301980 containerd[1589]: 
time="2025-12-16T12:57:58.301952362Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 16 12:57:58.301980 containerd[1589]: time="2025-12-16T12:57:58.301961950Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 16 12:57:58.302050 containerd[1589]: time="2025-12-16T12:57:58.301995063Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 12:57:58.302050 containerd[1589]: time="2025-12-16T12:57:58.302025640Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 12:57:58.302050 containerd[1589]: time="2025-12-16T12:57:58.302033705Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 12:57:58.302110 containerd[1589]: time="2025-12-16T12:57:58.302050456Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 12:57:58.302110 containerd[1589]: time="2025-12-16T12:57:58.302058842Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 16 12:57:58.302110 containerd[1589]: time="2025-12-16T12:57:58.302067829Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 16 12:57:58.302110 containerd[1589]: time="2025-12-16T12:57:58.302083037Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 16 12:57:58.302110 containerd[1589]: time="2025-12-16T12:57:58.302099859Z" level=info msg="runtime interface created" Dec 16 12:57:58.302205 containerd[1589]: time="2025-12-16T12:57:58.302119416Z" level=info msg="created NRI interface" Dec 16 12:57:58.302205 containerd[1589]: time="2025-12-16T12:57:58.302132350Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 16 12:57:58.302205 containerd[1589]: time="2025-12-16T12:57:58.302142709Z" level=info msg="Connect containerd service" Dec 16 12:57:58.302205 containerd[1589]: time="2025-12-16T12:57:58.302159821Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 16 12:57:58.306951 containerd[1589]: time="2025-12-16T12:57:58.306922251Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 12:57:58.313385 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 16 12:57:58.321777 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 16 12:57:58.346471 systemd[1]: issuegen.service: Deactivated successfully. Dec 16 12:57:58.346775 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 16 12:57:58.351712 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 16 12:57:58.378707 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 16 12:57:58.385376 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 16 12:57:58.389292 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 16 12:57:58.391657 systemd[1]: Reached target getty.target - Login Prompts. 
Dec 16 12:57:58.402513 containerd[1589]: time="2025-12-16T12:57:58.401778116Z" level=info msg="Start subscribing containerd event" Dec 16 12:57:58.402513 containerd[1589]: time="2025-12-16T12:57:58.401842737Z" level=info msg="Start recovering state" Dec 16 12:57:58.402513 containerd[1589]: time="2025-12-16T12:57:58.401920733Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 16 12:57:58.402513 containerd[1589]: time="2025-12-16T12:57:58.401930451Z" level=info msg="Start event monitor" Dec 16 12:57:58.402513 containerd[1589]: time="2025-12-16T12:57:58.401964395Z" level=info msg="Start cni network conf syncer for default" Dec 16 12:57:58.402513 containerd[1589]: time="2025-12-16T12:57:58.401969765Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 16 12:57:58.402513 containerd[1589]: time="2025-12-16T12:57:58.401972761Z" level=info msg="Start streaming server" Dec 16 12:57:58.402513 containerd[1589]: time="2025-12-16T12:57:58.402033234Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 16 12:57:58.402513 containerd[1589]: time="2025-12-16T12:57:58.402041900Z" level=info msg="runtime interface starting up..." Dec 16 12:57:58.402513 containerd[1589]: time="2025-12-16T12:57:58.402048723Z" level=info msg="starting plugins..." Dec 16 12:57:58.402513 containerd[1589]: time="2025-12-16T12:57:58.402064152Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 16 12:57:58.402288 systemd[1]: Started containerd.service - containerd container runtime. Dec 16 12:57:58.403330 containerd[1589]: time="2025-12-16T12:57:58.402809900Z" level=info msg="containerd successfully booted in 0.138978s" Dec 16 12:57:58.405287 tar[1575]: linux-amd64/README.md Dec 16 12:57:58.442878 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 16 12:57:59.382777 systemd-networkd[1477]: eth0: Gained IPv6LL Dec 16 12:57:59.386808 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 16 12:57:59.389749 systemd[1]: Reached target network-online.target - Network is Online. Dec 16 12:57:59.394270 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 16 12:57:59.398599 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:57:59.403137 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 16 12:57:59.434762 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 16 12:57:59.437614 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 16 12:57:59.437876 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Dec 16 12:57:59.441146 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 16 12:58:00.179255 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:58:00.181811 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 16 12:58:00.183813 systemd[1]: Startup finished in 3.132s (kernel) + 5.764s (initrd) + 5.662s (userspace) = 14.559s. 
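containerd v2.0.7 comes up here with the CRI runc runtime set to SystemdCgroup=true (visible in the config dump above), which matches the systemd cgroup driver the kubelet reports further down; the startup warning also points at running 'containerd config migrate' to convert the old-format /usr/share/containerd/config.toml. A sketch of the relevant config.toml fragment, hedged because the table path shown is the containerd 2.x layout and differs from 1.x (which used [plugins."io.containerd.grpc.v1.cri"...]):

# illustrative only; on this host the shipped config already has SystemdCgroup enabled
cat <<'EOF' >/etc/containerd/config.toml
version = 3
[plugins.'io.containerd.cri.v1.runtime'.containerd.runtimes.runc]
  runtime_type = 'io.containerd.runc.v2'
[plugins.'io.containerd.cri.v1.runtime'.containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF
systemctl restart containerd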
Dec 16 12:58:00.184636 (kubelet)[1678]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:58:00.618913 kubelet[1678]: E1216 12:58:00.618776 1678 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:58:00.622924 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:58:00.623148 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:58:00.623568 systemd[1]: kubelet.service: Consumed 1.023s CPU time, 269.9M memory peak. Dec 16 12:58:01.922765 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 16 12:58:01.924021 systemd[1]: Started sshd@0-10.0.0.34:22-10.0.0.1:55492.service - OpenSSH per-connection server daemon (10.0.0.1:55492). Dec 16 12:58:02.002514 sshd[1691]: Accepted publickey for core from 10.0.0.1 port 55492 ssh2: RSA SHA256:U5R1V2YL8grSrRz9PVaqQqCOxjm1DLwZWE3rSGcR9eI Dec 16 12:58:02.004626 sshd-session[1691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:58:02.011783 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 16 12:58:02.012991 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 16 12:58:02.020862 systemd-logind[1561]: New session 1 of user core. Dec 16 12:58:02.038045 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 16 12:58:02.041468 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 16 12:58:02.058288 (systemd)[1696]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 16 12:58:02.061365 systemd-logind[1561]: New session c1 of user core. Dec 16 12:58:02.213337 systemd[1696]: Queued start job for default target default.target. Dec 16 12:58:02.228795 systemd[1696]: Created slice app.slice - User Application Slice. Dec 16 12:58:02.228822 systemd[1696]: Reached target paths.target - Paths. Dec 16 12:58:02.228865 systemd[1696]: Reached target timers.target - Timers. Dec 16 12:58:02.230454 systemd[1696]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 16 12:58:02.243768 systemd[1696]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 16 12:58:02.243927 systemd[1696]: Reached target sockets.target - Sockets. Dec 16 12:58:02.243978 systemd[1696]: Reached target basic.target - Basic System. Dec 16 12:58:02.244031 systemd[1696]: Reached target default.target - Main User Target. Dec 16 12:58:02.244087 systemd[1696]: Startup finished in 175ms. Dec 16 12:58:02.244121 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 16 12:58:02.245614 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 16 12:58:02.311940 systemd[1]: Started sshd@1-10.0.0.34:22-10.0.0.1:55496.service - OpenSSH per-connection server daemon (10.0.0.1:55496). Dec 16 12:58:02.374832 sshd[1707]: Accepted publickey for core from 10.0.0.1 port 55496 ssh2: RSA SHA256:U5R1V2YL8grSrRz9PVaqQqCOxjm1DLwZWE3rSGcR9eI Dec 16 12:58:02.376467 sshd-session[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:58:02.381191 systemd-logind[1561]: New session 2 of user core. 
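The kubelet exits here because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-provisioned node that file is written by 'kubeadm init' or 'kubeadm join', so these early failures and automatic restarts are normal until that happens. Purely as an illustrative minimal sketch (the field values are assumptions, not taken from this host):

cat <<'EOF' >/var/lib/kubelet/config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
EOF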
Dec 16 12:58:02.404808 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 16 12:58:02.458791 sshd[1710]: Connection closed by 10.0.0.1 port 55496 Dec 16 12:58:02.459137 sshd-session[1707]: pam_unix(sshd:session): session closed for user core Dec 16 12:58:02.475676 systemd[1]: sshd@1-10.0.0.34:22-10.0.0.1:55496.service: Deactivated successfully. Dec 16 12:58:02.477519 systemd[1]: session-2.scope: Deactivated successfully. Dec 16 12:58:02.478307 systemd-logind[1561]: Session 2 logged out. Waiting for processes to exit. Dec 16 12:58:02.480962 systemd[1]: Started sshd@2-10.0.0.34:22-10.0.0.1:55502.service - OpenSSH per-connection server daemon (10.0.0.1:55502). Dec 16 12:58:02.481570 systemd-logind[1561]: Removed session 2. Dec 16 12:58:02.551828 sshd[1716]: Accepted publickey for core from 10.0.0.1 port 55502 ssh2: RSA SHA256:U5R1V2YL8grSrRz9PVaqQqCOxjm1DLwZWE3rSGcR9eI Dec 16 12:58:02.553550 sshd-session[1716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:58:02.557763 systemd-logind[1561]: New session 3 of user core. Dec 16 12:58:02.567633 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 16 12:58:02.617293 sshd[1719]: Connection closed by 10.0.0.1 port 55502 Dec 16 12:58:02.617716 sshd-session[1716]: pam_unix(sshd:session): session closed for user core Dec 16 12:58:02.634199 systemd[1]: sshd@2-10.0.0.34:22-10.0.0.1:55502.service: Deactivated successfully. Dec 16 12:58:02.636085 systemd[1]: session-3.scope: Deactivated successfully. Dec 16 12:58:02.636831 systemd-logind[1561]: Session 3 logged out. Waiting for processes to exit. Dec 16 12:58:02.639366 systemd[1]: Started sshd@3-10.0.0.34:22-10.0.0.1:55504.service - OpenSSH per-connection server daemon (10.0.0.1:55504). Dec 16 12:58:02.640268 systemd-logind[1561]: Removed session 3. Dec 16 12:58:02.684871 sshd[1725]: Accepted publickey for core from 10.0.0.1 port 55504 ssh2: RSA SHA256:U5R1V2YL8grSrRz9PVaqQqCOxjm1DLwZWE3rSGcR9eI Dec 16 12:58:02.686569 sshd-session[1725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:58:02.691462 systemd-logind[1561]: New session 4 of user core. Dec 16 12:58:02.700644 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 16 12:58:02.755592 sshd[1728]: Connection closed by 10.0.0.1 port 55504 Dec 16 12:58:02.755863 sshd-session[1725]: pam_unix(sshd:session): session closed for user core Dec 16 12:58:02.767171 systemd[1]: sshd@3-10.0.0.34:22-10.0.0.1:55504.service: Deactivated successfully. Dec 16 12:58:02.769043 systemd[1]: session-4.scope: Deactivated successfully. Dec 16 12:58:02.769770 systemd-logind[1561]: Session 4 logged out. Waiting for processes to exit. Dec 16 12:58:02.772785 systemd[1]: Started sshd@4-10.0.0.34:22-10.0.0.1:55508.service - OpenSSH per-connection server daemon (10.0.0.1:55508). Dec 16 12:58:02.773381 systemd-logind[1561]: Removed session 4. Dec 16 12:58:02.834102 sshd[1734]: Accepted publickey for core from 10.0.0.1 port 55508 ssh2: RSA SHA256:U5R1V2YL8grSrRz9PVaqQqCOxjm1DLwZWE3rSGcR9eI Dec 16 12:58:02.835929 sshd-session[1734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:58:02.840901 systemd-logind[1561]: New session 5 of user core. Dec 16 12:58:02.851667 systemd[1]: Started session-5.scope - Session 5 of User core. 
Dec 16 12:58:02.915549 sudo[1738]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 16 12:58:02.915962 sudo[1738]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:58:02.936537 sudo[1738]: pam_unix(sudo:session): session closed for user root Dec 16 12:58:02.938721 sshd[1737]: Connection closed by 10.0.0.1 port 55508 Dec 16 12:58:02.939252 sshd-session[1734]: pam_unix(sshd:session): session closed for user core Dec 16 12:58:02.953447 systemd[1]: sshd@4-10.0.0.34:22-10.0.0.1:55508.service: Deactivated successfully. Dec 16 12:58:02.955790 systemd[1]: session-5.scope: Deactivated successfully. Dec 16 12:58:02.956720 systemd-logind[1561]: Session 5 logged out. Waiting for processes to exit. Dec 16 12:58:02.960447 systemd[1]: Started sshd@5-10.0.0.34:22-10.0.0.1:55524.service - OpenSSH per-connection server daemon (10.0.0.1:55524). Dec 16 12:58:02.961226 systemd-logind[1561]: Removed session 5. Dec 16 12:58:03.021567 sshd[1744]: Accepted publickey for core from 10.0.0.1 port 55524 ssh2: RSA SHA256:U5R1V2YL8grSrRz9PVaqQqCOxjm1DLwZWE3rSGcR9eI Dec 16 12:58:03.023307 sshd-session[1744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:58:03.028457 systemd-logind[1561]: New session 6 of user core. Dec 16 12:58:03.041705 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 16 12:58:03.096479 sudo[1750]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 16 12:58:03.096847 sudo[1750]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:58:03.105227 sudo[1750]: pam_unix(sudo:session): session closed for user root Dec 16 12:58:03.111715 sudo[1749]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 16 12:58:03.112020 sudo[1749]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:58:03.122518 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 12:58:03.166664 augenrules[1772]: No rules Dec 16 12:58:03.168383 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 12:58:03.168720 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 12:58:03.169960 sudo[1749]: pam_unix(sudo:session): session closed for user root Dec 16 12:58:03.171503 sshd[1748]: Connection closed by 10.0.0.1 port 55524 Dec 16 12:58:03.171810 sshd-session[1744]: pam_unix(sshd:session): session closed for user core Dec 16 12:58:03.184161 systemd[1]: sshd@5-10.0.0.34:22-10.0.0.1:55524.service: Deactivated successfully. Dec 16 12:58:03.185928 systemd[1]: session-6.scope: Deactivated successfully. Dec 16 12:58:03.186825 systemd-logind[1561]: Session 6 logged out. Waiting for processes to exit. Dec 16 12:58:03.189281 systemd[1]: Started sshd@6-10.0.0.34:22-10.0.0.1:55530.service - OpenSSH per-connection server daemon (10.0.0.1:55530). Dec 16 12:58:03.189931 systemd-logind[1561]: Removed session 6. Dec 16 12:58:03.239134 sshd[1781]: Accepted publickey for core from 10.0.0.1 port 55530 ssh2: RSA SHA256:U5R1V2YL8grSrRz9PVaqQqCOxjm1DLwZWE3rSGcR9eI Dec 16 12:58:03.240786 sshd-session[1781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:58:03.245745 systemd-logind[1561]: New session 7 of user core. Dec 16 12:58:03.256655 systemd[1]: Started session-7.scope - Session 7 of User core. 
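The sudo commands above delete the default rule files from /etc/audit/rules.d and restart audit-rules.service, which is why augenrules reports "No rules" when it reloads. For context only, a hedged example of adding a rule drop-in back (the file name and rules are illustrative):

cat <<'EOF' >/etc/audit/rules.d/10-identity.rules
-w /etc/passwd -p wa -k identity
-w /etc/shadow -p wa -k identity
EOF
augenrules --load   # merges /etc/audit/rules.d/*.rules and loads them, as audit-rules.service does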
Dec 16 12:58:03.311374 sudo[1785]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 16 12:58:03.311711 sudo[1785]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:58:03.625224 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 16 12:58:03.646856 (dockerd)[1805]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 16 12:58:03.880444 dockerd[1805]: time="2025-12-16T12:58:03.880309360Z" level=info msg="Starting up" Dec 16 12:58:03.881100 dockerd[1805]: time="2025-12-16T12:58:03.881051342Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 16 12:58:03.894316 dockerd[1805]: time="2025-12-16T12:58:03.894258214Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 16 12:58:04.477105 dockerd[1805]: time="2025-12-16T12:58:04.477019792Z" level=info msg="Loading containers: start." Dec 16 12:58:04.488543 kernel: Initializing XFRM netlink socket Dec 16 12:58:05.702848 systemd-networkd[1477]: docker0: Link UP Dec 16 12:58:05.929142 dockerd[1805]: time="2025-12-16T12:58:05.929073022Z" level=info msg="Loading containers: done." Dec 16 12:58:05.947115 dockerd[1805]: time="2025-12-16T12:58:05.947033548Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 16 12:58:05.947327 dockerd[1805]: time="2025-12-16T12:58:05.947153022Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 16 12:58:05.947327 dockerd[1805]: time="2025-12-16T12:58:05.947252037Z" level=info msg="Initializing buildkit" Dec 16 12:58:06.241813 dockerd[1805]: time="2025-12-16T12:58:06.241742226Z" level=info msg="Completed buildkit initialization" Dec 16 12:58:06.247123 dockerd[1805]: time="2025-12-16T12:58:06.247067090Z" level=info msg="Daemon has completed initialization" Dec 16 12:58:06.248087 dockerd[1805]: time="2025-12-16T12:58:06.247480736Z" level=info msg="API listen on /run/docker.sock" Dec 16 12:58:06.247587 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 16 12:58:06.936022 containerd[1589]: time="2025-12-16T12:58:06.935958331Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Dec 16 12:58:08.202220 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2172663847.mount: Deactivated successfully. 
Dec 16 12:58:09.143840 containerd[1589]: time="2025-12-16T12:58:09.143759077Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:58:09.146796 containerd[1589]: time="2025-12-16T12:58:09.146746589Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=30114712" Dec 16 12:58:09.148267 containerd[1589]: time="2025-12-16T12:58:09.148224089Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:58:09.151303 containerd[1589]: time="2025-12-16T12:58:09.151240956Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:58:09.152329 containerd[1589]: time="2025-12-16T12:58:09.152285113Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 2.216276939s" Dec 16 12:58:09.152329 containerd[1589]: time="2025-12-16T12:58:09.152324778Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\"" Dec 16 12:58:09.152915 containerd[1589]: time="2025-12-16T12:58:09.152859360Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Dec 16 12:58:10.873545 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 16 12:58:10.876577 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:58:11.094364 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:58:11.099195 (kubelet)[2093]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:58:11.286581 kubelet[2093]: E1216 12:58:11.286417 2093 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:58:11.293518 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:58:11.293714 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:58:11.294091 systemd[1]: kubelet.service: Consumed 238ms CPU time, 110.4M memory peak. 
Dec 16 12:58:11.308194 containerd[1589]: time="2025-12-16T12:58:11.308130543Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:58:11.309121 containerd[1589]: time="2025-12-16T12:58:11.309054475Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26016781" Dec 16 12:58:11.310352 containerd[1589]: time="2025-12-16T12:58:11.310306774Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:58:11.313266 containerd[1589]: time="2025-12-16T12:58:11.313224805Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:58:11.314572 containerd[1589]: time="2025-12-16T12:58:11.314542395Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 2.161641227s" Dec 16 12:58:11.314649 containerd[1589]: time="2025-12-16T12:58:11.314591497Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\"" Dec 16 12:58:11.315170 containerd[1589]: time="2025-12-16T12:58:11.315137471Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Dec 16 12:58:12.759057 containerd[1589]: time="2025-12-16T12:58:12.758990185Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:58:12.759737 containerd[1589]: time="2025-12-16T12:58:12.759718340Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20158102" Dec 16 12:58:12.760934 containerd[1589]: time="2025-12-16T12:58:12.760882052Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:58:12.764213 containerd[1589]: time="2025-12-16T12:58:12.764185195Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:58:12.765140 containerd[1589]: time="2025-12-16T12:58:12.765099780Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 1.449926742s" Dec 16 12:58:12.765140 containerd[1589]: time="2025-12-16T12:58:12.765137521Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\"" Dec 16 12:58:12.765678 
containerd[1589]: time="2025-12-16T12:58:12.765643840Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Dec 16 12:58:13.625822 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3360903804.mount: Deactivated successfully. Dec 16 12:58:14.279778 containerd[1589]: time="2025-12-16T12:58:14.279729019Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31930096" Dec 16 12:58:14.280196 containerd[1589]: time="2025-12-16T12:58:14.279775517Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:58:14.281808 containerd[1589]: time="2025-12-16T12:58:14.281749147Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:58:14.285163 containerd[1589]: time="2025-12-16T12:58:14.285069703Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:58:14.285656 containerd[1589]: time="2025-12-16T12:58:14.285612551Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 1.519935568s" Dec 16 12:58:14.285656 containerd[1589]: time="2025-12-16T12:58:14.285651204Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Dec 16 12:58:14.286415 containerd[1589]: time="2025-12-16T12:58:14.286129741Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Dec 16 12:58:14.760421 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount236399422.mount: Deactivated successfully. 
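The PullImage lines in this stretch are the Kubernetes v1.33.7 control-plane images being fetched through containerd's CRI plugin, most likely a kubeadm pre-pull. Equivalent manual pulls, shown only as a sketch (crictl is assumed to be configured to talk to containerd's socket):

crictl pull registry.k8s.io/kube-proxy:v1.33.7          # pull via the CRI API
ctr -n k8s.io images pull registry.k8s.io/pause:3.10    # pull with containerd's own client, in the CRI's k8s.io namespace
# or pre-pull the whole set: kubeadm config images pull --kubernetes-version v1.33.7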
Dec 16 12:58:15.457519 containerd[1589]: time="2025-12-16T12:58:15.457427714Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:58:15.458859 containerd[1589]: time="2025-12-16T12:58:15.458760603Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Dec 16 12:58:15.460521 containerd[1589]: time="2025-12-16T12:58:15.460460210Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:58:15.463782 containerd[1589]: time="2025-12-16T12:58:15.463739268Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:58:15.465230 containerd[1589]: time="2025-12-16T12:58:15.465174149Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.179012218s" Dec 16 12:58:15.465230 containerd[1589]: time="2025-12-16T12:58:15.465206039Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Dec 16 12:58:15.465680 containerd[1589]: time="2025-12-16T12:58:15.465647296Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 16 12:58:15.918985 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3189201483.mount: Deactivated successfully. 
Dec 16 12:58:15.926473 containerd[1589]: time="2025-12-16T12:58:15.926428440Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:58:15.927303 containerd[1589]: time="2025-12-16T12:58:15.927263697Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Dec 16 12:58:15.928618 containerd[1589]: time="2025-12-16T12:58:15.928567772Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:58:15.930876 containerd[1589]: time="2025-12-16T12:58:15.930838980Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:58:15.933444 containerd[1589]: time="2025-12-16T12:58:15.931883769Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 466.199283ms" Dec 16 12:58:15.933444 containerd[1589]: time="2025-12-16T12:58:15.931940105Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Dec 16 12:58:15.933788 containerd[1589]: time="2025-12-16T12:58:15.933757312Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Dec 16 12:58:16.549274 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount989291189.mount: Deactivated successfully. 
Dec 16 12:58:18.250045 containerd[1589]: time="2025-12-16T12:58:18.249985770Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:58:18.250851 containerd[1589]: time="2025-12-16T12:58:18.250785069Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58926227" Dec 16 12:58:18.252167 containerd[1589]: time="2025-12-16T12:58:18.252112988Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:58:18.254877 containerd[1589]: time="2025-12-16T12:58:18.254828841Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:58:18.255737 containerd[1589]: time="2025-12-16T12:58:18.255703160Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.321917635s" Dec 16 12:58:18.255737 containerd[1589]: time="2025-12-16T12:58:18.255729439Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Dec 16 12:58:21.360155 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 16 12:58:21.361801 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:58:21.591241 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:58:21.596255 (kubelet)[2256]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:58:21.639520 kubelet[2256]: E1216 12:58:21.638005 2256 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:58:21.642774 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:58:21.642982 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:58:21.643358 systemd[1]: kubelet.service: Consumed 227ms CPU time, 110.6M memory peak. Dec 16 12:58:21.744643 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:58:21.744833 systemd[1]: kubelet.service: Consumed 227ms CPU time, 110.6M memory peak. Dec 16 12:58:21.746968 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:58:21.774093 systemd[1]: Reload requested from client PID 2272 ('systemctl') (unit session-7.scope)... Dec 16 12:58:21.774112 systemd[1]: Reloading... Dec 16 12:58:21.874586 zram_generator::config[2321]: No configuration found. Dec 16 12:58:22.520854 systemd[1]: Reloading finished in 746 ms. Dec 16 12:58:22.582391 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 16 12:58:22.582545 systemd[1]: kubelet.service: Failed with result 'signal'. 
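The "zram_generator::config: No configuration found" message during the daemon reload only means no zram swap devices are configured, which is the default. A hedged sketch of enabling one (the size and algorithm values are illustrative):

cat <<'EOF' >/etc/systemd/zram-generator.conf
[zram0]
zram-size = ram / 2
compression-algorithm = zstd
EOF
systemctl daemon-reload
systemctl start systemd-zram-setup@zram0.service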
Dec 16 12:58:22.582954 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:58:22.583020 systemd[1]: kubelet.service: Consumed 174ms CPU time, 98.2M memory peak. Dec 16 12:58:22.585164 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:58:22.769871 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:58:22.782996 (kubelet)[2363]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 12:58:22.816968 kubelet[2363]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 12:58:22.816968 kubelet[2363]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 12:58:22.816968 kubelet[2363]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 12:58:22.817361 kubelet[2363]: I1216 12:58:22.817022 2363 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 12:58:23.156370 kubelet[2363]: I1216 12:58:23.156316 2363 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 16 12:58:23.156370 kubelet[2363]: I1216 12:58:23.156353 2363 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 12:58:23.156671 kubelet[2363]: I1216 12:58:23.156648 2363 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 12:58:23.181307 kubelet[2363]: E1216 12:58:23.181253 2363 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.34:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 16 12:58:23.181543 kubelet[2363]: I1216 12:58:23.181526 2363 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 12:58:23.189233 kubelet[2363]: I1216 12:58:23.189204 2363 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 12:58:23.195515 kubelet[2363]: I1216 12:58:23.195466 2363 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 16 12:58:23.195768 kubelet[2363]: I1216 12:58:23.195724 2363 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 12:58:23.195924 kubelet[2363]: I1216 12:58:23.195755 2363 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 12:58:23.195924 kubelet[2363]: I1216 12:58:23.195921 2363 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 12:58:23.196062 kubelet[2363]: I1216 12:58:23.195930 2363 container_manager_linux.go:303] "Creating device plugin manager" Dec 16 12:58:23.196746 kubelet[2363]: I1216 12:58:23.196713 2363 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:58:23.198653 kubelet[2363]: I1216 12:58:23.198589 2363 kubelet.go:480] "Attempting to sync node with API server" Dec 16 12:58:23.198653 kubelet[2363]: I1216 12:58:23.198629 2363 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 12:58:23.198825 kubelet[2363]: I1216 12:58:23.198679 2363 kubelet.go:386] "Adding apiserver pod source" Dec 16 12:58:23.201839 kubelet[2363]: I1216 12:58:23.201475 2363 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 12:58:23.206862 kubelet[2363]: E1216 12:58:23.206826 2363 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 12:58:23.209208 kubelet[2363]: I1216 12:58:23.208433 2363 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 16 12:58:23.209208 kubelet[2363]: I1216 12:58:23.209149 2363 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 
12:58:23.209360 kubelet[2363]: E1216 12:58:23.209182 2363 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 12:58:23.210334 kubelet[2363]: W1216 12:58:23.210313 2363 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 16 12:58:23.213912 kubelet[2363]: I1216 12:58:23.213870 2363 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 12:58:23.214055 kubelet[2363]: I1216 12:58:23.213936 2363 server.go:1289] "Started kubelet" Dec 16 12:58:23.216245 kubelet[2363]: I1216 12:58:23.216172 2363 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 12:58:23.216534 kubelet[2363]: I1216 12:58:23.216472 2363 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 12:58:23.217474 kubelet[2363]: I1216 12:58:23.216538 2363 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 12:58:23.217773 kubelet[2363]: I1216 12:58:23.217737 2363 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 12:58:23.219510 kubelet[2363]: I1216 12:58:23.218768 2363 server.go:317] "Adding debug handlers to kubelet server" Dec 16 12:58:23.219510 kubelet[2363]: I1216 12:58:23.219268 2363 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 12:58:23.220810 kubelet[2363]: E1216 12:58:23.219802 2363 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.34:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.34:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1881b382c9836295 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-12-16 12:58:23.213896341 +0000 UTC m=+0.426651803,LastTimestamp:2025-12-16 12:58:23.213896341 +0000 UTC m=+0.426651803,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 16 12:58:23.221263 kubelet[2363]: E1216 12:58:23.220953 2363 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 12:58:23.221263 kubelet[2363]: E1216 12:58:23.221202 2363 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 12:58:23.221263 kubelet[2363]: I1216 12:58:23.221252 2363 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 12:58:23.221385 kubelet[2363]: I1216 12:58:23.221365 2363 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 12:58:23.221594 kubelet[2363]: I1216 12:58:23.221577 2363 reconciler.go:26] "Reconciler: start to sync state" Dec 16 12:58:23.222542 kubelet[2363]: E1216 12:58:23.221936 2363 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.34:6443: connect: connection refused" interval="200ms" Dec 16 12:58:23.222542 kubelet[2363]: E1216 12:58:23.222367 2363 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 12:58:23.222542 kubelet[2363]: I1216 12:58:23.222428 2363 factory.go:223] Registration of the systemd container factory successfully Dec 16 12:58:23.222762 kubelet[2363]: I1216 12:58:23.222616 2363 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 12:58:23.224052 kubelet[2363]: I1216 12:58:23.224029 2363 factory.go:223] Registration of the containerd container factory successfully Dec 16 12:58:23.239155 kubelet[2363]: I1216 12:58:23.239125 2363 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 12:58:23.239155 kubelet[2363]: I1216 12:58:23.239146 2363 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 12:58:23.239217 kubelet[2363]: I1216 12:58:23.239162 2363 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:58:23.242057 kubelet[2363]: I1216 12:58:23.241962 2363 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 16 12:58:23.243893 kubelet[2363]: I1216 12:58:23.243867 2363 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 16 12:58:23.243949 kubelet[2363]: I1216 12:58:23.243897 2363 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 16 12:58:23.243949 kubelet[2363]: I1216 12:58:23.243921 2363 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
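A few entries back the kubelet sets up rate limiting for the podresources endpoint with qps=100 and burstTokens=10, the two parameters of a token bucket. A generic, illustrative sketch of that scheme (not the kubelet's actual limiter implementation):

    import time

    class TokenBucket:
        """Refill `qps` tokens per second; hold at most `burst` tokens."""
        def __init__(self, qps: float, burst: int):
            self.qps, self.burst = qps, burst
            self.tokens = float(burst)
            self.last = time.monotonic()

        def allow(self) -> bool:
            now = time.monotonic()
            self.tokens = min(self.burst, self.tokens + (now - self.last) * self.qps)
            self.last = now
            if self.tokens >= 1.0:
                self.tokens -= 1.0
                return True
            return False

    bucket = TokenBucket(qps=100, burst=10)        # values from the podresources log entry
    print(sum(bucket.allow() for _ in range(50)))  # roughly 10 of a tight burst get through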
Dec 16 12:58:23.243949 kubelet[2363]: I1216 12:58:23.243929 2363 kubelet.go:2436] "Starting kubelet main sync loop" Dec 16 12:58:23.244064 kubelet[2363]: E1216 12:58:23.243967 2363 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 12:58:23.244601 kubelet[2363]: E1216 12:58:23.244531 2363 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 12:58:23.321349 kubelet[2363]: E1216 12:58:23.321302 2363 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 12:58:23.344669 kubelet[2363]: E1216 12:58:23.344614 2363 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 16 12:58:23.421919 kubelet[2363]: E1216 12:58:23.421803 2363 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 12:58:23.423371 kubelet[2363]: E1216 12:58:23.423333 2363 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.34:6443: connect: connection refused" interval="400ms" Dec 16 12:58:23.522514 kubelet[2363]: E1216 12:58:23.522458 2363 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 12:58:23.545771 kubelet[2363]: E1216 12:58:23.545701 2363 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 16 12:58:23.623264 kubelet[2363]: E1216 12:58:23.623174 2363 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 12:58:23.724422 kubelet[2363]: E1216 12:58:23.724242 2363 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 12:58:23.824326 kubelet[2363]: E1216 12:58:23.824252 2363 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.34:6443: connect: connection refused" interval="800ms" Dec 16 12:58:23.824798 kubelet[2363]: E1216 12:58:23.824381 2363 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 12:58:23.924998 kubelet[2363]: E1216 12:58:23.924889 2363 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 12:58:23.946255 kubelet[2363]: E1216 12:58:23.946153 2363 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 16 12:58:24.014931 kubelet[2363]: E1216 12:58:24.014792 2363 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 12:58:24.025459 
kubelet[2363]: E1216 12:58:24.025407 2363 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 12:58:24.126227 kubelet[2363]: E1216 12:58:24.126141 2363 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 12:58:24.226946 kubelet[2363]: E1216 12:58:24.226882 2363 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 12:58:24.295587 kubelet[2363]: E1216 12:58:24.295456 2363 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 12:58:24.327249 kubelet[2363]: E1216 12:58:24.327157 2363 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 12:58:24.370727 kubelet[2363]: I1216 12:58:24.370676 2363 policy_none.go:49] "None policy: Start" Dec 16 12:58:24.370727 kubelet[2363]: I1216 12:58:24.370713 2363 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 12:58:24.370727 kubelet[2363]: I1216 12:58:24.370729 2363 state_mem.go:35] "Initializing new in-memory state store" Dec 16 12:58:24.380066 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 16 12:58:24.394899 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 16 12:58:24.407373 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 16 12:58:24.409065 kubelet[2363]: E1216 12:58:24.408924 2363 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 12:58:24.409227 kubelet[2363]: I1216 12:58:24.409213 2363 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 12:58:24.409265 kubelet[2363]: I1216 12:58:24.409230 2363 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 12:58:24.409711 kubelet[2363]: I1216 12:58:24.409631 2363 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 12:58:24.410553 kubelet[2363]: E1216 12:58:24.410536 2363 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 16 12:58:24.410663 kubelet[2363]: E1216 12:58:24.410645 2363 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 16 12:58:24.472995 kubelet[2363]: E1216 12:58:24.472938 2363 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 12:58:24.511689 kubelet[2363]: I1216 12:58:24.511653 2363 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 12:58:24.512115 kubelet[2363]: E1216 12:58:24.512064 2363 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.34:6443/api/v1/nodes\": dial tcp 10.0.0.34:6443: connect: connection refused" node="localhost" Dec 16 12:58:24.625347 kubelet[2363]: E1216 12:58:24.625296 2363 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.34:6443: connect: connection refused" interval="1.6s" Dec 16 12:58:24.671082 kubelet[2363]: E1216 12:58:24.671033 2363 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 12:58:24.714100 kubelet[2363]: I1216 12:58:24.714056 2363 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 12:58:24.714515 kubelet[2363]: E1216 12:58:24.714453 2363 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.34:6443/api/v1/nodes\": dial tcp 10.0.0.34:6443: connect: connection refused" node="localhost" Dec 16 12:58:24.828887 kubelet[2363]: I1216 12:58:24.828812 2363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c36b1fcebf9dcbe952e9ed56c3e2b6f7-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c36b1fcebf9dcbe952e9ed56c3e2b6f7\") " pod="kube-system/kube-apiserver-localhost" Dec 16 12:58:24.828887 kubelet[2363]: I1216 12:58:24.828881 2363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c36b1fcebf9dcbe952e9ed56c3e2b6f7-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c36b1fcebf9dcbe952e9ed56c3e2b6f7\") " pod="kube-system/kube-apiserver-localhost" Dec 16 12:58:24.829265 kubelet[2363]: I1216 12:58:24.828906 2363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c36b1fcebf9dcbe952e9ed56c3e2b6f7-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c36b1fcebf9dcbe952e9ed56c3e2b6f7\") " pod="kube-system/kube-apiserver-localhost" Dec 16 12:58:24.875596 systemd[1]: Created slice kubepods-burstable-podc36b1fcebf9dcbe952e9ed56c3e2b6f7.slice - libcontainer container kubepods-burstable-podc36b1fcebf9dcbe952e9ed56c3e2b6f7.slice. 
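While the API server at 10.0.0.34:6443 is still refusing connections, the "Failed to ensure lease exists, will retry" entries show the retry interval doubling: 200ms, 400ms, 800ms, now 1.6s. A minimal sketch of that doubling pattern; the cap used below is an assumption, not something stated in the log:

    def backoff_intervals(start_ms: float = 200.0, factor: float = 2.0, cap_ms: float = 7000.0):
        """Yield retry intervals that double on every attempt, up to an assumed cap."""
        interval = start_ms
        while True:
            yield min(interval, cap_ms)
            interval *= factor

    gen = backoff_intervals()
    print([next(gen) for _ in range(5)])   # [200.0, 400.0, 800.0, 1600.0, 3200.0]

The first four values line up with the intervals logged between 12:58:23 and 12:58:24.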
Dec 16 12:58:24.894437 kubelet[2363]: E1216 12:58:24.894403 2363 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:58:24.929733 kubelet[2363]: I1216 12:58:24.929701 2363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:58:24.929817 kubelet[2363]: I1216 12:58:24.929772 2363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:58:24.929817 kubelet[2363]: I1216 12:58:24.929790 2363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:58:24.929817 kubelet[2363]: I1216 12:58:24.929805 2363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:58:24.929878 kubelet[2363]: I1216 12:58:24.929819 2363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:58:24.972209 systemd[1]: Created slice kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice - libcontainer container kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice. Dec 16 12:58:24.973873 kubelet[2363]: E1216 12:58:24.973829 2363 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:58:25.013358 systemd[1]: Created slice kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice - libcontainer container kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice. 
Dec 16 12:58:25.015107 kubelet[2363]: E1216 12:58:25.015080 2363 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:58:25.030523 kubelet[2363]: I1216 12:58:25.030488 2363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Dec 16 12:58:25.115756 kubelet[2363]: I1216 12:58:25.115724 2363 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 12:58:25.116081 kubelet[2363]: E1216 12:58:25.116043 2363 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.34:6443/api/v1/nodes\": dial tcp 10.0.0.34:6443: connect: connection refused" node="localhost" Dec 16 12:58:25.196533 containerd[1589]: time="2025-12-16T12:58:25.196409633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c36b1fcebf9dcbe952e9ed56c3e2b6f7,Namespace:kube-system,Attempt:0,}" Dec 16 12:58:25.216811 containerd[1589]: time="2025-12-16T12:58:25.216761543Z" level=info msg="connecting to shim f67b6b64a46fd4d08bffb8a363bc626ef5351e4b82793e786cdd788e99da6458" address="unix:///run/containerd/s/a49d3a006551d9d026440d1bc8692b6f11f447714ca360876278c5ab46d7334b" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:58:25.246657 systemd[1]: Started cri-containerd-f67b6b64a46fd4d08bffb8a363bc626ef5351e4b82793e786cdd788e99da6458.scope - libcontainer container f67b6b64a46fd4d08bffb8a363bc626ef5351e4b82793e786cdd788e99da6458. Dec 16 12:58:25.270146 kubelet[2363]: E1216 12:58:25.270106 2363 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.34:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 16 12:58:25.275026 containerd[1589]: time="2025-12-16T12:58:25.274986715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,}" Dec 16 12:58:25.292423 containerd[1589]: time="2025-12-16T12:58:25.292377092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c36b1fcebf9dcbe952e9ed56c3e2b6f7,Namespace:kube-system,Attempt:0,} returns sandbox id \"f67b6b64a46fd4d08bffb8a363bc626ef5351e4b82793e786cdd788e99da6458\"" Dec 16 12:58:25.298742 containerd[1589]: time="2025-12-16T12:58:25.298700648Z" level=info msg="CreateContainer within sandbox \"f67b6b64a46fd4d08bffb8a363bc626ef5351e4b82793e786cdd788e99da6458\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 16 12:58:25.307929 containerd[1589]: time="2025-12-16T12:58:25.307877595Z" level=info msg="connecting to shim 72007114fa4134113363e19544ee0a3743c8fc5544479547ea28bed2652752df" address="unix:///run/containerd/s/651e41d319b2367162c13b1431a3a3d9435d7a1a6a89320f902e77ff3e959506" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:58:25.310615 containerd[1589]: time="2025-12-16T12:58:25.310577307Z" level=info msg="Container cddd992ba1b29e4e4a9621f1e73acc00637e5231870b5c1cb6a98ed91db799c7: CDI devices from CRI Config.CDIDevices: 
[]" Dec 16 12:58:25.313613 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1050380159.mount: Deactivated successfully. Dec 16 12:58:25.316557 containerd[1589]: time="2025-12-16T12:58:25.316517936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,}" Dec 16 12:58:25.322403 containerd[1589]: time="2025-12-16T12:58:25.322366392Z" level=info msg="CreateContainer within sandbox \"f67b6b64a46fd4d08bffb8a363bc626ef5351e4b82793e786cdd788e99da6458\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"cddd992ba1b29e4e4a9621f1e73acc00637e5231870b5c1cb6a98ed91db799c7\"" Dec 16 12:58:25.323514 containerd[1589]: time="2025-12-16T12:58:25.323024476Z" level=info msg="StartContainer for \"cddd992ba1b29e4e4a9621f1e73acc00637e5231870b5c1cb6a98ed91db799c7\"" Dec 16 12:58:25.324033 containerd[1589]: time="2025-12-16T12:58:25.324013590Z" level=info msg="connecting to shim cddd992ba1b29e4e4a9621f1e73acc00637e5231870b5c1cb6a98ed91db799c7" address="unix:///run/containerd/s/a49d3a006551d9d026440d1bc8692b6f11f447714ca360876278c5ab46d7334b" protocol=ttrpc version=3 Dec 16 12:58:25.336721 systemd[1]: Started cri-containerd-72007114fa4134113363e19544ee0a3743c8fc5544479547ea28bed2652752df.scope - libcontainer container 72007114fa4134113363e19544ee0a3743c8fc5544479547ea28bed2652752df. Dec 16 12:58:25.341417 systemd[1]: Started cri-containerd-cddd992ba1b29e4e4a9621f1e73acc00637e5231870b5c1cb6a98ed91db799c7.scope - libcontainer container cddd992ba1b29e4e4a9621f1e73acc00637e5231870b5c1cb6a98ed91db799c7. Dec 16 12:58:25.344401 containerd[1589]: time="2025-12-16T12:58:25.344361583Z" level=info msg="connecting to shim 19fa080dab83914005bcd3f07b6763105a9fd2e4e04b9f2ef5d7a9590232b85d" address="unix:///run/containerd/s/381ecc90ec0ab3d7efb0a8449a8d9160aee16b9dff1859cdf8d4621b81ff5395" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:58:25.376655 systemd[1]: Started cri-containerd-19fa080dab83914005bcd3f07b6763105a9fd2e4e04b9f2ef5d7a9590232b85d.scope - libcontainer container 19fa080dab83914005bcd3f07b6763105a9fd2e4e04b9f2ef5d7a9590232b85d. 
Dec 16 12:58:25.398468 containerd[1589]: time="2025-12-16T12:58:25.398430211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"72007114fa4134113363e19544ee0a3743c8fc5544479547ea28bed2652752df\"" Dec 16 12:58:25.399427 containerd[1589]: time="2025-12-16T12:58:25.399411862Z" level=info msg="StartContainer for \"cddd992ba1b29e4e4a9621f1e73acc00637e5231870b5c1cb6a98ed91db799c7\" returns successfully" Dec 16 12:58:25.404514 containerd[1589]: time="2025-12-16T12:58:25.404215028Z" level=info msg="CreateContainer within sandbox \"72007114fa4134113363e19544ee0a3743c8fc5544479547ea28bed2652752df\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 16 12:58:25.414479 containerd[1589]: time="2025-12-16T12:58:25.414453264Z" level=info msg="Container 2816e5c25ca658c1fb83ad5f9c43d23a4ece6971a32623ae4aea9ef645c4ecfd: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:58:25.423238 containerd[1589]: time="2025-12-16T12:58:25.423180708Z" level=info msg="CreateContainer within sandbox \"72007114fa4134113363e19544ee0a3743c8fc5544479547ea28bed2652752df\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2816e5c25ca658c1fb83ad5f9c43d23a4ece6971a32623ae4aea9ef645c4ecfd\"" Dec 16 12:58:25.424133 containerd[1589]: time="2025-12-16T12:58:25.424097738Z" level=info msg="StartContainer for \"2816e5c25ca658c1fb83ad5f9c43d23a4ece6971a32623ae4aea9ef645c4ecfd\"" Dec 16 12:58:25.428939 containerd[1589]: time="2025-12-16T12:58:25.428874925Z" level=info msg="connecting to shim 2816e5c25ca658c1fb83ad5f9c43d23a4ece6971a32623ae4aea9ef645c4ecfd" address="unix:///run/containerd/s/651e41d319b2367162c13b1431a3a3d9435d7a1a6a89320f902e77ff3e959506" protocol=ttrpc version=3 Dec 16 12:58:25.437091 containerd[1589]: time="2025-12-16T12:58:25.436944916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"19fa080dab83914005bcd3f07b6763105a9fd2e4e04b9f2ef5d7a9590232b85d\"" Dec 16 12:58:25.442876 containerd[1589]: time="2025-12-16T12:58:25.442839739Z" level=info msg="CreateContainer within sandbox \"19fa080dab83914005bcd3f07b6763105a9fd2e4e04b9f2ef5d7a9590232b85d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 16 12:58:25.449647 systemd[1]: Started cri-containerd-2816e5c25ca658c1fb83ad5f9c43d23a4ece6971a32623ae4aea9ef645c4ecfd.scope - libcontainer container 2816e5c25ca658c1fb83ad5f9c43d23a4ece6971a32623ae4aea9ef645c4ecfd. 
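The containerd lines here trace the CRI call sequence for each static pod: RunPodSandbox returns a sandbox id, CreateContainer registers a container inside it, and StartContainer runs it, with each step dispatched to a per-pod shim over ttrpc. A hedged sketch of inspecting the resulting sandboxes and containers with crictl from Python; the socket path is containerd's default and assumed for this host:

    import subprocess

    # Default containerd CRI socket; adjust if this host is configured differently.
    ENDPOINT = "unix:///run/containerd/containerd.sock"

    def crictl(*args: str) -> str:
        """Run a crictl subcommand against the containerd CRI endpoint."""
        result = subprocess.run(
            ["crictl", "--runtime-endpoint", ENDPOINT, *args],
            capture_output=True, text=True, check=True,
        )
        return result.stdout

    print(crictl("pods"))       # pod sandboxes, e.g. kube-apiserver-localhost
    print(crictl("ps", "-a"))   # containers created inside those sandboxes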
Dec 16 12:58:25.453978 containerd[1589]: time="2025-12-16T12:58:25.453935654Z" level=info msg="Container ec0cacaa8e70d7fc51d8194c2851bdaeb1978a242cf1793204573ea241194e9d: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:58:25.462352 containerd[1589]: time="2025-12-16T12:58:25.462315015Z" level=info msg="CreateContainer within sandbox \"19fa080dab83914005bcd3f07b6763105a9fd2e4e04b9f2ef5d7a9590232b85d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ec0cacaa8e70d7fc51d8194c2851bdaeb1978a242cf1793204573ea241194e9d\"" Dec 16 12:58:25.464519 containerd[1589]: time="2025-12-16T12:58:25.463922730Z" level=info msg="StartContainer for \"ec0cacaa8e70d7fc51d8194c2851bdaeb1978a242cf1793204573ea241194e9d\"" Dec 16 12:58:25.464998 containerd[1589]: time="2025-12-16T12:58:25.464980102Z" level=info msg="connecting to shim ec0cacaa8e70d7fc51d8194c2851bdaeb1978a242cf1793204573ea241194e9d" address="unix:///run/containerd/s/381ecc90ec0ab3d7efb0a8449a8d9160aee16b9dff1859cdf8d4621b81ff5395" protocol=ttrpc version=3 Dec 16 12:58:25.485634 systemd[1]: Started cri-containerd-ec0cacaa8e70d7fc51d8194c2851bdaeb1978a242cf1793204573ea241194e9d.scope - libcontainer container ec0cacaa8e70d7fc51d8194c2851bdaeb1978a242cf1793204573ea241194e9d. Dec 16 12:58:25.517531 containerd[1589]: time="2025-12-16T12:58:25.517484598Z" level=info msg="StartContainer for \"2816e5c25ca658c1fb83ad5f9c43d23a4ece6971a32623ae4aea9ef645c4ecfd\" returns successfully" Dec 16 12:58:25.541287 containerd[1589]: time="2025-12-16T12:58:25.541237715Z" level=info msg="StartContainer for \"ec0cacaa8e70d7fc51d8194c2851bdaeb1978a242cf1793204573ea241194e9d\" returns successfully" Dec 16 12:58:25.917753 kubelet[2363]: I1216 12:58:25.917667 2363 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 12:58:26.254108 kubelet[2363]: E1216 12:58:26.253983 2363 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:58:26.257117 kubelet[2363]: E1216 12:58:26.257091 2363 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:58:26.257366 kubelet[2363]: E1216 12:58:26.257345 2363 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:58:27.146945 kubelet[2363]: E1216 12:58:27.146898 2363 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Dec 16 12:58:27.194599 kubelet[2363]: E1216 12:58:27.194304 2363 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1881b382c9836295 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-12-16 12:58:23.213896341 +0000 UTC m=+0.426651803,LastTimestamp:2025-12-16 12:58:23.213896341 +0000 UTC m=+0.426651803,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 16 12:58:27.208708 kubelet[2363]: I1216 12:58:27.208650 2363 apiserver.go:52] "Watching apiserver" Dec 16 
12:58:27.222240 kubelet[2363]: I1216 12:58:27.222211 2363 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 12:58:27.249457 kubelet[2363]: I1216 12:58:27.248959 2363 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 16 12:58:27.258518 kubelet[2363]: I1216 12:58:27.258458 2363 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 16 12:58:27.259258 kubelet[2363]: I1216 12:58:27.259222 2363 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 16 12:58:27.267267 kubelet[2363]: E1216 12:58:27.267229 2363 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Dec 16 12:58:27.268461 kubelet[2363]: E1216 12:58:27.267580 2363 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Dec 16 12:58:27.322790 kubelet[2363]: I1216 12:58:27.322673 2363 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 16 12:58:27.324721 kubelet[2363]: E1216 12:58:27.324683 2363 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Dec 16 12:58:27.324721 kubelet[2363]: I1216 12:58:27.324709 2363 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 16 12:58:27.327609 kubelet[2363]: E1216 12:58:27.327570 2363 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Dec 16 12:58:27.327609 kubelet[2363]: I1216 12:58:27.327596 2363 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 16 12:58:27.329521 kubelet[2363]: E1216 12:58:27.329447 2363 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Dec 16 12:58:29.218958 systemd[1]: Reload requested from client PID 2645 ('systemctl') (unit session-7.scope)... Dec 16 12:58:29.218972 systemd[1]: Reloading... Dec 16 12:58:29.303540 zram_generator::config[2694]: No configuration found. Dec 16 12:58:29.519808 systemd[1]: Reloading finished in 300 ms. Dec 16 12:58:29.551058 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:58:29.567737 systemd[1]: kubelet.service: Deactivated successfully. Dec 16 12:58:29.568074 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:58:29.568123 systemd[1]: kubelet.service: Consumed 957ms CPU time, 132.6M memory peak. Dec 16 12:58:29.569940 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:58:29.819640 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 16 12:58:29.830053 (kubelet)[2733]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 12:58:29.872995 kubelet[2733]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 12:58:29.872995 kubelet[2733]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 12:58:29.872995 kubelet[2733]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 12:58:29.873397 kubelet[2733]: I1216 12:58:29.873043 2733 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 12:58:29.880820 kubelet[2733]: I1216 12:58:29.880757 2733 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 16 12:58:29.880820 kubelet[2733]: I1216 12:58:29.880788 2733 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 12:58:29.881078 kubelet[2733]: I1216 12:58:29.881058 2733 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 12:58:29.882362 kubelet[2733]: I1216 12:58:29.882323 2733 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 16 12:58:29.886803 kubelet[2733]: I1216 12:58:29.886748 2733 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 12:58:29.890692 kubelet[2733]: I1216 12:58:29.890656 2733 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 12:58:29.896327 kubelet[2733]: I1216 12:58:29.896299 2733 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 16 12:58:29.896574 kubelet[2733]: I1216 12:58:29.896539 2733 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 12:58:29.896740 kubelet[2733]: I1216 12:58:29.896568 2733 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 12:58:29.896740 kubelet[2733]: I1216 12:58:29.896739 2733 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 12:58:29.896875 kubelet[2733]: I1216 12:58:29.896748 2733 container_manager_linux.go:303] "Creating device plugin manager" Dec 16 12:58:29.896875 kubelet[2733]: I1216 12:58:29.896796 2733 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:58:29.896980 kubelet[2733]: I1216 12:58:29.896962 2733 kubelet.go:480] "Attempting to sync node with API server" Dec 16 12:58:29.896980 kubelet[2733]: I1216 12:58:29.896976 2733 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 12:58:29.897059 kubelet[2733]: I1216 12:58:29.896998 2733 kubelet.go:386] "Adding apiserver pod source" Dec 16 12:58:29.897059 kubelet[2733]: I1216 12:58:29.897013 2733 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 12:58:29.898262 kubelet[2733]: I1216 12:58:29.898233 2733 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 16 12:58:29.899527 kubelet[2733]: I1216 12:58:29.899143 2733 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 12:58:29.904062 kubelet[2733]: I1216 12:58:29.904007 2733 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 12:58:29.904225 kubelet[2733]: I1216 12:58:29.904191 2733 server.go:1289] "Started kubelet" Dec 16 12:58:29.904536 kubelet[2733]: I1216 12:58:29.904444 2733 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 12:58:29.904856 kubelet[2733]: I1216 
12:58:29.904793 2733 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 12:58:29.905662 kubelet[2733]: I1216 12:58:29.905643 2733 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 12:58:29.906550 kubelet[2733]: I1216 12:58:29.906474 2733 server.go:317] "Adding debug handlers to kubelet server" Dec 16 12:58:29.908039 kubelet[2733]: I1216 12:58:29.908014 2733 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 12:58:29.910234 kubelet[2733]: I1216 12:58:29.910191 2733 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 12:58:29.915440 kubelet[2733]: E1216 12:58:29.915417 2733 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 12:58:29.915615 kubelet[2733]: I1216 12:58:29.915603 2733 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 12:58:29.916009 kubelet[2733]: I1216 12:58:29.915996 2733 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 12:58:29.916242 kubelet[2733]: I1216 12:58:29.916229 2733 reconciler.go:26] "Reconciler: start to sync state" Dec 16 12:58:29.917019 kubelet[2733]: E1216 12:58:29.916992 2733 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 12:58:29.920550 kubelet[2733]: I1216 12:58:29.920532 2733 factory.go:223] Registration of the containerd container factory successfully Dec 16 12:58:29.920651 kubelet[2733]: I1216 12:58:29.920639 2733 factory.go:223] Registration of the systemd container factory successfully Dec 16 12:58:29.920950 kubelet[2733]: I1216 12:58:29.920903 2733 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 12:58:29.924334 kubelet[2733]: I1216 12:58:29.924266 2733 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 16 12:58:29.925961 kubelet[2733]: I1216 12:58:29.925939 2733 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 16 12:58:29.925961 kubelet[2733]: I1216 12:58:29.925959 2733 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 16 12:58:29.926184 kubelet[2733]: I1216 12:58:29.925979 2733 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
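The restarted kubelet (PID 2733) again reports "Starting to listen" on 0.0.0.0 port 10250 and serves the podresources API on a local Unix socket. A small way to confirm the TCP listener from the node itself; this is only a connect test, since real requests to the secure port need authentication:

    import socket

    def port_open(host: str = "127.0.0.1", port: int = 10250, timeout: float = 1.0) -> bool:
        """Return True if something accepts TCP connections on host:port."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    print("kubelet listening on 10250:", port_open())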
Dec 16 12:58:29.926184 kubelet[2733]: I1216 12:58:29.925987 2733 kubelet.go:2436] "Starting kubelet main sync loop" Dec 16 12:58:29.926184 kubelet[2733]: E1216 12:58:29.926028 2733 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 12:58:29.965131 kubelet[2733]: I1216 12:58:29.965096 2733 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 12:58:29.965131 kubelet[2733]: I1216 12:58:29.965117 2733 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 12:58:29.965131 kubelet[2733]: I1216 12:58:29.965137 2733 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:58:29.965340 kubelet[2733]: I1216 12:58:29.965261 2733 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 16 12:58:29.965340 kubelet[2733]: I1216 12:58:29.965271 2733 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 16 12:58:29.965340 kubelet[2733]: I1216 12:58:29.965287 2733 policy_none.go:49] "None policy: Start" Dec 16 12:58:29.965340 kubelet[2733]: I1216 12:58:29.965296 2733 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 12:58:29.965340 kubelet[2733]: I1216 12:58:29.965306 2733 state_mem.go:35] "Initializing new in-memory state store" Dec 16 12:58:29.965486 kubelet[2733]: I1216 12:58:29.965387 2733 state_mem.go:75] "Updated machine memory state" Dec 16 12:58:29.970302 kubelet[2733]: E1216 12:58:29.970247 2733 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 12:58:29.970439 kubelet[2733]: I1216 12:58:29.970420 2733 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 12:58:29.970512 kubelet[2733]: I1216 12:58:29.970435 2733 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 12:58:29.970667 kubelet[2733]: I1216 12:58:29.970635 2733 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 12:58:29.973159 kubelet[2733]: E1216 12:58:29.973127 2733 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 16 12:58:30.027796 kubelet[2733]: I1216 12:58:30.027726 2733 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 16 12:58:30.027974 kubelet[2733]: I1216 12:58:30.027935 2733 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 16 12:58:30.030324 kubelet[2733]: I1216 12:58:30.028005 2733 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 16 12:58:30.072855 kubelet[2733]: I1216 12:58:30.072727 2733 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 12:58:30.080346 kubelet[2733]: I1216 12:58:30.080303 2733 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Dec 16 12:58:30.080535 kubelet[2733]: I1216 12:58:30.080381 2733 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 16 12:58:30.217141 kubelet[2733]: I1216 12:58:30.217071 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c36b1fcebf9dcbe952e9ed56c3e2b6f7-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c36b1fcebf9dcbe952e9ed56c3e2b6f7\") " pod="kube-system/kube-apiserver-localhost" Dec 16 12:58:30.217141 kubelet[2733]: I1216 12:58:30.217125 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c36b1fcebf9dcbe952e9ed56c3e2b6f7-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c36b1fcebf9dcbe952e9ed56c3e2b6f7\") " pod="kube-system/kube-apiserver-localhost" Dec 16 12:58:30.217141 kubelet[2733]: I1216 12:58:30.217144 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:58:30.217368 kubelet[2733]: I1216 12:58:30.217164 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c36b1fcebf9dcbe952e9ed56c3e2b6f7-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c36b1fcebf9dcbe952e9ed56c3e2b6f7\") " pod="kube-system/kube-apiserver-localhost" Dec 16 12:58:30.217368 kubelet[2733]: I1216 12:58:30.217201 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:58:30.217368 kubelet[2733]: I1216 12:58:30.217251 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:58:30.217368 kubelet[2733]: I1216 12:58:30.217304 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:58:30.217368 kubelet[2733]: I1216 12:58:30.217362 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:58:30.217579 kubelet[2733]: I1216 12:58:30.217384 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Dec 16 12:58:31.236472 kubelet[2733]: I1216 12:58:31.235254 2733 apiserver.go:52] "Watching apiserver" Dec 16 12:58:31.241457 kubelet[2733]: I1216 12:58:31.241395 2733 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 16 12:58:31.261565 kubelet[2733]: E1216 12:58:31.261489 2733 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 16 12:58:31.275279 kubelet[2733]: I1216 12:58:31.275211 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.275195388 podStartE2EDuration="1.275195388s" podCreationTimestamp="2025-12-16 12:58:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:58:31.274881288 +0000 UTC m=+1.440584058" watchObservedRunningTime="2025-12-16 12:58:31.275195388 +0000 UTC m=+1.440898159" Dec 16 12:58:31.292038 kubelet[2733]: I1216 12:58:31.291837 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.2918188640000001 podStartE2EDuration="1.291818864s" podCreationTimestamp="2025-12-16 12:58:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:58:31.290104375 +0000 UTC m=+1.455807145" watchObservedRunningTime="2025-12-16 12:58:31.291818864 +0000 UTC m=+1.457521634" Dec 16 12:58:31.292038 kubelet[2733]: I1216 12:58:31.291929 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.291922643 podStartE2EDuration="1.291922643s" podCreationTimestamp="2025-12-16 12:58:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:58:31.283039772 +0000 UTC m=+1.448742542" watchObservedRunningTime="2025-12-16 12:58:31.291922643 +0000 UTC m=+1.457625423" Dec 16 12:58:31.316418 kubelet[2733]: I1216 12:58:31.316367 2733 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 12:58:34.209342 kubelet[2733]: I1216 12:58:34.209293 2733 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 16 12:58:34.209918 
kubelet[2733]: I1216 12:58:34.209803 2733 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 16 12:58:34.209958 containerd[1589]: time="2025-12-16T12:58:34.209668517Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 16 12:58:34.762697 systemd[1]: Created slice kubepods-besteffort-pode96415d8_cc65_4108_9468_6b1ac69d6916.slice - libcontainer container kubepods-besteffort-pode96415d8_cc65_4108_9468_6b1ac69d6916.slice. Dec 16 12:58:34.857664 kubelet[2733]: I1216 12:58:34.857611 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e96415d8-cc65-4108-9468-6b1ac69d6916-kube-proxy\") pod \"kube-proxy-8nsp9\" (UID: \"e96415d8-cc65-4108-9468-6b1ac69d6916\") " pod="kube-system/kube-proxy-8nsp9" Dec 16 12:58:34.857664 kubelet[2733]: I1216 12:58:34.857674 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e96415d8-cc65-4108-9468-6b1ac69d6916-xtables-lock\") pod \"kube-proxy-8nsp9\" (UID: \"e96415d8-cc65-4108-9468-6b1ac69d6916\") " pod="kube-system/kube-proxy-8nsp9" Dec 16 12:58:34.857867 kubelet[2733]: I1216 12:58:34.857692 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e96415d8-cc65-4108-9468-6b1ac69d6916-lib-modules\") pod \"kube-proxy-8nsp9\" (UID: \"e96415d8-cc65-4108-9468-6b1ac69d6916\") " pod="kube-system/kube-proxy-8nsp9" Dec 16 12:58:34.857867 kubelet[2733]: I1216 12:58:34.857710 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvxk6\" (UniqueName: \"kubernetes.io/projected/e96415d8-cc65-4108-9468-6b1ac69d6916-kube-api-access-tvxk6\") pod \"kube-proxy-8nsp9\" (UID: \"e96415d8-cc65-4108-9468-6b1ac69d6916\") " pod="kube-system/kube-proxy-8nsp9" Dec 16 12:58:35.077367 containerd[1589]: time="2025-12-16T12:58:35.077215060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8nsp9,Uid:e96415d8-cc65-4108-9468-6b1ac69d6916,Namespace:kube-system,Attempt:0,}" Dec 16 12:58:35.103353 containerd[1589]: time="2025-12-16T12:58:35.103295049Z" level=info msg="connecting to shim 842187043f18b48ba778c79114e0174112680efd662b7086b2f9a7feb47bbb81" address="unix:///run/containerd/s/f62a3fe4dcd19f36d7ebc84362effe7cefc870a2314a1689c7ca18e0307b2abd" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:58:35.131710 systemd[1]: Started cri-containerd-842187043f18b48ba778c79114e0174112680efd662b7086b2f9a7feb47bbb81.scope - libcontainer container 842187043f18b48ba778c79114e0174112680efd662b7086b2f9a7feb47bbb81. 
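[Editor note] The reconciler entries just above list the volumes the kubelet attaches for kube-proxy-8nsp9: the kube-proxy ConfigMap, the xtables-lock and lib-modules host paths, and a projected service-account token. As a cross-check, a minimal client-go sketch like the one below could print the same volume names straight from the pod spec. The kubeconfig path and cluster access are assumptions for illustration, not part of the log.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location on this node; adjust as needed.
	config, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Pod name and namespace taken from the RunPodSandbox entry above.
	pod, err := clientset.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-proxy-8nsp9", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Each name should correspond to one VerifyControllerAttachedVolume line in the log.
	for _, v := range pod.Spec.Volumes {
		fmt.Println(v.Name)
	}
}
```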
Dec 16 12:58:35.161268 containerd[1589]: time="2025-12-16T12:58:35.161216479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8nsp9,Uid:e96415d8-cc65-4108-9468-6b1ac69d6916,Namespace:kube-system,Attempt:0,} returns sandbox id \"842187043f18b48ba778c79114e0174112680efd662b7086b2f9a7feb47bbb81\"" Dec 16 12:58:35.167393 containerd[1589]: time="2025-12-16T12:58:35.167342924Z" level=info msg="CreateContainer within sandbox \"842187043f18b48ba778c79114e0174112680efd662b7086b2f9a7feb47bbb81\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 16 12:58:35.181177 containerd[1589]: time="2025-12-16T12:58:35.181103195Z" level=info msg="Container 8ed6ca231c3fcc47cb54fd0bce32a7e10d9c162da8546a20eb8b5acd1da6484b: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:58:35.190828 containerd[1589]: time="2025-12-16T12:58:35.190770924Z" level=info msg="CreateContainer within sandbox \"842187043f18b48ba778c79114e0174112680efd662b7086b2f9a7feb47bbb81\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8ed6ca231c3fcc47cb54fd0bce32a7e10d9c162da8546a20eb8b5acd1da6484b\"" Dec 16 12:58:35.191435 containerd[1589]: time="2025-12-16T12:58:35.191329207Z" level=info msg="StartContainer for \"8ed6ca231c3fcc47cb54fd0bce32a7e10d9c162da8546a20eb8b5acd1da6484b\"" Dec 16 12:58:35.192841 containerd[1589]: time="2025-12-16T12:58:35.192777224Z" level=info msg="connecting to shim 8ed6ca231c3fcc47cb54fd0bce32a7e10d9c162da8546a20eb8b5acd1da6484b" address="unix:///run/containerd/s/f62a3fe4dcd19f36d7ebc84362effe7cefc870a2314a1689c7ca18e0307b2abd" protocol=ttrpc version=3 Dec 16 12:58:35.224802 systemd[1]: Started cri-containerd-8ed6ca231c3fcc47cb54fd0bce32a7e10d9c162da8546a20eb8b5acd1da6484b.scope - libcontainer container 8ed6ca231c3fcc47cb54fd0bce32a7e10d9c162da8546a20eb8b5acd1da6484b. Dec 16 12:58:35.311228 containerd[1589]: time="2025-12-16T12:58:35.311179435Z" level=info msg="StartContainer for \"8ed6ca231c3fcc47cb54fd0bce32a7e10d9c162da8546a20eb8b5acd1da6484b\" returns successfully" Dec 16 12:58:35.728789 systemd[1]: Created slice kubepods-besteffort-podd65a31da_e74a_490b_9b61_05e603a4bc60.slice - libcontainer container kubepods-besteffort-podd65a31da_e74a_490b_9b61_05e603a4bc60.slice. 
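[Editor note] The "Created slice kubepods-besteffort-pod…" units above follow the kubelet's systemd cgroup naming: the QoS-class slice prefix plus the pod UID with dashes turned into underscores (compare the tigera-operator UID d65a31da-e74a-490b-9b61-05e603a4bc60 with its slice name). A tiny sketch of that mapping, derived only from the log lines themselves rather than from kubelet source:

```go
package main

import (
	"fmt"
	"strings"
)

// besteffortSlice reproduces the slice name pattern visible in the systemd entries:
// dashes in the pod UID become underscores, prefixed with the BestEffort QoS slice.
func besteffortSlice(podUID string) string {
	return "kubepods-besteffort-pod" + strings.ReplaceAll(podUID, "-", "_") + ".slice"
}

func main() {
	// UID taken from the tigera-operator volume entries below.
	fmt.Println(besteffortSlice("d65a31da-e74a-490b-9b61-05e603a4bc60"))
	// Expected: kubepods-besteffort-podd65a31da_e74a_490b_9b61_05e603a4bc60.slice
}
```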
Dec 16 12:58:35.764934 kubelet[2733]: I1216 12:58:35.764880 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxhbd\" (UniqueName: \"kubernetes.io/projected/d65a31da-e74a-490b-9b61-05e603a4bc60-kube-api-access-pxhbd\") pod \"tigera-operator-7dcd859c48-gmzhd\" (UID: \"d65a31da-e74a-490b-9b61-05e603a4bc60\") " pod="tigera-operator/tigera-operator-7dcd859c48-gmzhd" Dec 16 12:58:35.764934 kubelet[2733]: I1216 12:58:35.764934 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d65a31da-e74a-490b-9b61-05e603a4bc60-var-lib-calico\") pod \"tigera-operator-7dcd859c48-gmzhd\" (UID: \"d65a31da-e74a-490b-9b61-05e603a4bc60\") " pod="tigera-operator/tigera-operator-7dcd859c48-gmzhd" Dec 16 12:58:36.034602 containerd[1589]: time="2025-12-16T12:58:36.034461265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-gmzhd,Uid:d65a31da-e74a-490b-9b61-05e603a4bc60,Namespace:tigera-operator,Attempt:0,}" Dec 16 12:58:36.076272 containerd[1589]: time="2025-12-16T12:58:36.076207187Z" level=info msg="connecting to shim cdbc6be2fee21ac6362244c33d5300e652a17deb11d92cd08b88eb0fa4eb8e5a" address="unix:///run/containerd/s/0d58e7e241c18eb22bbfdb6af31c914312d235bbf2be600732d93cdc770a9c1c" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:58:36.098635 systemd[1]: Started cri-containerd-cdbc6be2fee21ac6362244c33d5300e652a17deb11d92cd08b88eb0fa4eb8e5a.scope - libcontainer container cdbc6be2fee21ac6362244c33d5300e652a17deb11d92cd08b88eb0fa4eb8e5a. Dec 16 12:58:36.145935 containerd[1589]: time="2025-12-16T12:58:36.145884013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-gmzhd,Uid:d65a31da-e74a-490b-9b61-05e603a4bc60,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"cdbc6be2fee21ac6362244c33d5300e652a17deb11d92cd08b88eb0fa4eb8e5a\"" Dec 16 12:58:36.149311 containerd[1589]: time="2025-12-16T12:58:36.149267430Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 16 12:58:37.830341 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1744894919.mount: Deactivated successfully. 
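[Editor note] The PullImage entry above is served by containerd in the k8s.io namespace (the same namespace the shim connections report). To reproduce the pull out of band, a minimal containerd Go client sketch could look like the following; the socket path is the distribution default and this is an illustration, not the kubelet's own code path.

```go
package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Default containerd socket on this host; adjust if the runtime endpoint differs.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// CRI-managed images live in the "k8s.io" namespace, as the log entries show.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Same reference the kubelet requested above.
	image, err := client.Pull(ctx, "quay.io/tigera/operator:v1.38.7", containerd.WithPullUnpack)
	if err != nil {
		panic(err)
	}
	size, err := image.Size(ctx)
	if err != nil {
		panic(err)
	}
	fmt.Println(image.Name(), image.Target().Digest, size)
}
```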
Dec 16 12:58:38.162157 containerd[1589]: time="2025-12-16T12:58:38.162103188Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:58:38.163140 containerd[1589]: time="2025-12-16T12:58:38.163104389Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Dec 16 12:58:38.164386 containerd[1589]: time="2025-12-16T12:58:38.164322834Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:58:38.166427 containerd[1589]: time="2025-12-16T12:58:38.166390640Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:58:38.167064 containerd[1589]: time="2025-12-16T12:58:38.167004436Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.017651053s" Dec 16 12:58:38.167064 containerd[1589]: time="2025-12-16T12:58:38.167048469Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Dec 16 12:58:38.172273 containerd[1589]: time="2025-12-16T12:58:38.172246310Z" level=info msg="CreateContainer within sandbox \"cdbc6be2fee21ac6362244c33d5300e652a17deb11d92cd08b88eb0fa4eb8e5a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 16 12:58:38.179077 containerd[1589]: time="2025-12-16T12:58:38.179037426Z" level=info msg="Container 83c5d0584cde6542e786d13c1f5aafc6708836cce976085f14777e98085e50c7: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:58:38.186717 containerd[1589]: time="2025-12-16T12:58:38.186681172Z" level=info msg="CreateContainer within sandbox \"cdbc6be2fee21ac6362244c33d5300e652a17deb11d92cd08b88eb0fa4eb8e5a\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"83c5d0584cde6542e786d13c1f5aafc6708836cce976085f14777e98085e50c7\"" Dec 16 12:58:38.188311 containerd[1589]: time="2025-12-16T12:58:38.188257165Z" level=info msg="StartContainer for \"83c5d0584cde6542e786d13c1f5aafc6708836cce976085f14777e98085e50c7\"" Dec 16 12:58:38.189586 containerd[1589]: time="2025-12-16T12:58:38.189481740Z" level=info msg="connecting to shim 83c5d0584cde6542e786d13c1f5aafc6708836cce976085f14777e98085e50c7" address="unix:///run/containerd/s/0d58e7e241c18eb22bbfdb6af31c914312d235bbf2be600732d93cdc770a9c1c" protocol=ttrpc version=3 Dec 16 12:58:38.230626 systemd[1]: Started cri-containerd-83c5d0584cde6542e786d13c1f5aafc6708836cce976085f14777e98085e50c7.scope - libcontainer container 83c5d0584cde6542e786d13c1f5aafc6708836cce976085f14777e98085e50c7. 
Dec 16 12:58:38.263717 containerd[1589]: time="2025-12-16T12:58:38.263678119Z" level=info msg="StartContainer for \"83c5d0584cde6542e786d13c1f5aafc6708836cce976085f14777e98085e50c7\" returns successfully" Dec 16 12:58:39.276090 kubelet[2733]: I1216 12:58:39.275987 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8nsp9" podStartSLOduration=5.275971611 podStartE2EDuration="5.275971611s" podCreationTimestamp="2025-12-16 12:58:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:58:36.265634203 +0000 UTC m=+6.431336993" watchObservedRunningTime="2025-12-16 12:58:39.275971611 +0000 UTC m=+9.441674381" Dec 16 12:58:39.276651 kubelet[2733]: I1216 12:58:39.276102 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-gmzhd" podStartSLOduration=2.255387762 podStartE2EDuration="4.276097931s" podCreationTimestamp="2025-12-16 12:58:35 +0000 UTC" firstStartedPulling="2025-12-16 12:58:36.147056915 +0000 UTC m=+6.312759685" lastFinishedPulling="2025-12-16 12:58:38.167767094 +0000 UTC m=+8.333469854" observedRunningTime="2025-12-16 12:58:39.275753116 +0000 UTC m=+9.441455896" watchObservedRunningTime="2025-12-16 12:58:39.276097931 +0000 UTC m=+9.441800701" Dec 16 12:58:43.629641 update_engine[1563]: I20251216 12:58:43.629552 1563 update_attempter.cc:509] Updating boot flags... Dec 16 12:58:44.947049 sudo[1785]: pam_unix(sudo:session): session closed for user root Dec 16 12:58:44.948626 sshd[1784]: Connection closed by 10.0.0.1 port 55530 Dec 16 12:58:44.949069 sshd-session[1781]: pam_unix(sshd:session): session closed for user core Dec 16 12:58:44.952551 systemd[1]: sshd@6-10.0.0.34:22-10.0.0.1:55530.service: Deactivated successfully. Dec 16 12:58:44.955244 systemd[1]: session-7.scope: Deactivated successfully. Dec 16 12:58:44.955557 systemd[1]: session-7.scope: Consumed 5.517s CPU time, 221.6M memory peak. Dec 16 12:58:44.959101 systemd-logind[1561]: Session 7 logged out. Waiting for processes to exit. Dec 16 12:58:44.960157 systemd-logind[1561]: Removed session 7. Dec 16 12:58:49.892625 systemd[1]: Created slice kubepods-besteffort-pod54a2bc22_fbe9_464a_b0a7_df45872b3817.slice - libcontainer container kubepods-besteffort-pod54a2bc22_fbe9_464a_b0a7_df45872b3817.slice. 
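[Editor note] For the tigera-operator pod, the startup tracker above reports podStartE2EDuration=4.276097931s but podStartSLOduration=2.255387762s; the gap appears to be the image-pull window bounded by firstStartedPulling and lastFinishedPulling. That interpretation of the SLO field is an assumption (the log does not state it), but the arithmetic can be checked from the printed timestamps with a short sketch:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching the kubelet's printed timestamps (Go's default time.Time format).
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	first, err := time.Parse(layout, "2025-12-16 12:58:36.147056915 +0000 UTC")
	if err != nil {
		panic(err)
	}
	last, err := time.Parse(layout, "2025-12-16 12:58:38.167767094 +0000 UTC")
	if err != nil {
		panic(err)
	}

	pull := last.Sub(first) // image-pull window for tigera-operator

	e2e, _ := time.ParseDuration("4.276097931s")
	slo, _ := time.ParseDuration("2.255387762s")

	// pull and (e2e - slo) agree to within a few hundred nanoseconds for these values.
	fmt.Println(pull, e2e-slo)
}
```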
Dec 16 12:58:49.951151 kubelet[2733]: I1216 12:58:49.951049 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/54a2bc22-fbe9-464a-b0a7-df45872b3817-tigera-ca-bundle\") pod \"calico-typha-6f498f57fb-b8mh6\" (UID: \"54a2bc22-fbe9-464a-b0a7-df45872b3817\") " pod="calico-system/calico-typha-6f498f57fb-b8mh6" Dec 16 12:58:49.951151 kubelet[2733]: I1216 12:58:49.951086 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/54a2bc22-fbe9-464a-b0a7-df45872b3817-typha-certs\") pod \"calico-typha-6f498f57fb-b8mh6\" (UID: \"54a2bc22-fbe9-464a-b0a7-df45872b3817\") " pod="calico-system/calico-typha-6f498f57fb-b8mh6" Dec 16 12:58:49.951151 kubelet[2733]: I1216 12:58:49.951102 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwx48\" (UniqueName: \"kubernetes.io/projected/54a2bc22-fbe9-464a-b0a7-df45872b3817-kube-api-access-qwx48\") pod \"calico-typha-6f498f57fb-b8mh6\" (UID: \"54a2bc22-fbe9-464a-b0a7-df45872b3817\") " pod="calico-system/calico-typha-6f498f57fb-b8mh6" Dec 16 12:58:49.955223 systemd[1]: Created slice kubepods-besteffort-pod9ff73383_32c4_4fe5_819d_18bedbf139be.slice - libcontainer container kubepods-besteffort-pod9ff73383_32c4_4fe5_819d_18bedbf139be.slice. Dec 16 12:58:50.051883 kubelet[2733]: I1216 12:58:50.051815 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9ff73383-32c4-4fe5-819d-18bedbf139be-cni-net-dir\") pod \"calico-node-mvc4d\" (UID: \"9ff73383-32c4-4fe5-819d-18bedbf139be\") " pod="calico-system/calico-node-mvc4d" Dec 16 12:58:50.051883 kubelet[2733]: I1216 12:58:50.051878 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9ff73383-32c4-4fe5-819d-18bedbf139be-lib-modules\") pod \"calico-node-mvc4d\" (UID: \"9ff73383-32c4-4fe5-819d-18bedbf139be\") " pod="calico-system/calico-node-mvc4d" Dec 16 12:58:50.051883 kubelet[2733]: I1216 12:58:50.051899 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9ff73383-32c4-4fe5-819d-18bedbf139be-cni-log-dir\") pod \"calico-node-mvc4d\" (UID: \"9ff73383-32c4-4fe5-819d-18bedbf139be\") " pod="calico-system/calico-node-mvc4d" Dec 16 12:58:50.052744 kubelet[2733]: I1216 12:58:50.051919 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpzx4\" (UniqueName: \"kubernetes.io/projected/9ff73383-32c4-4fe5-819d-18bedbf139be-kube-api-access-bpzx4\") pod \"calico-node-mvc4d\" (UID: \"9ff73383-32c4-4fe5-819d-18bedbf139be\") " pod="calico-system/calico-node-mvc4d" Dec 16 12:58:50.052744 kubelet[2733]: I1216 12:58:50.051942 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9ff73383-32c4-4fe5-819d-18bedbf139be-flexvol-driver-host\") pod \"calico-node-mvc4d\" (UID: \"9ff73383-32c4-4fe5-819d-18bedbf139be\") " pod="calico-system/calico-node-mvc4d" Dec 16 12:58:50.052744 kubelet[2733]: I1216 12:58:50.051961 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ff73383-32c4-4fe5-819d-18bedbf139be-tigera-ca-bundle\") pod \"calico-node-mvc4d\" (UID: \"9ff73383-32c4-4fe5-819d-18bedbf139be\") " pod="calico-system/calico-node-mvc4d" Dec 16 12:58:50.052744 kubelet[2733]: I1216 12:58:50.051979 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9ff73383-32c4-4fe5-819d-18bedbf139be-var-run-calico\") pod \"calico-node-mvc4d\" (UID: \"9ff73383-32c4-4fe5-819d-18bedbf139be\") " pod="calico-system/calico-node-mvc4d" Dec 16 12:58:50.052744 kubelet[2733]: I1216 12:58:50.052017 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9ff73383-32c4-4fe5-819d-18bedbf139be-cni-bin-dir\") pod \"calico-node-mvc4d\" (UID: \"9ff73383-32c4-4fe5-819d-18bedbf139be\") " pod="calico-system/calico-node-mvc4d" Dec 16 12:58:50.052983 kubelet[2733]: I1216 12:58:50.052175 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9ff73383-32c4-4fe5-819d-18bedbf139be-node-certs\") pod \"calico-node-mvc4d\" (UID: \"9ff73383-32c4-4fe5-819d-18bedbf139be\") " pod="calico-system/calico-node-mvc4d" Dec 16 12:58:50.052983 kubelet[2733]: I1216 12:58:50.052252 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9ff73383-32c4-4fe5-819d-18bedbf139be-xtables-lock\") pod \"calico-node-mvc4d\" (UID: \"9ff73383-32c4-4fe5-819d-18bedbf139be\") " pod="calico-system/calico-node-mvc4d" Dec 16 12:58:50.052983 kubelet[2733]: I1216 12:58:50.052356 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9ff73383-32c4-4fe5-819d-18bedbf139be-policysync\") pod \"calico-node-mvc4d\" (UID: \"9ff73383-32c4-4fe5-819d-18bedbf139be\") " pod="calico-system/calico-node-mvc4d" Dec 16 12:58:50.052983 kubelet[2733]: I1216 12:58:50.052422 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9ff73383-32c4-4fe5-819d-18bedbf139be-var-lib-calico\") pod \"calico-node-mvc4d\" (UID: \"9ff73383-32c4-4fe5-819d-18bedbf139be\") " pod="calico-system/calico-node-mvc4d" Dec 16 12:58:50.067382 kubelet[2733]: E1216 12:58:50.067232 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jv88p" podUID="326be917-499c-401a-aae1-40840ab247ef" Dec 16 12:58:50.153320 kubelet[2733]: I1216 12:58:50.152703 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/326be917-499c-401a-aae1-40840ab247ef-varrun\") pod \"csi-node-driver-jv88p\" (UID: \"326be917-499c-401a-aae1-40840ab247ef\") " pod="calico-system/csi-node-driver-jv88p" Dec 16 12:58:50.153320 kubelet[2733]: I1216 12:58:50.152823 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/326be917-499c-401a-aae1-40840ab247ef-socket-dir\") pod 
\"csi-node-driver-jv88p\" (UID: \"326be917-499c-401a-aae1-40840ab247ef\") " pod="calico-system/csi-node-driver-jv88p" Dec 16 12:58:50.153320 kubelet[2733]: I1216 12:58:50.152848 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlz62\" (UniqueName: \"kubernetes.io/projected/326be917-499c-401a-aae1-40840ab247ef-kube-api-access-hlz62\") pod \"csi-node-driver-jv88p\" (UID: \"326be917-499c-401a-aae1-40840ab247ef\") " pod="calico-system/csi-node-driver-jv88p" Dec 16 12:58:50.153320 kubelet[2733]: I1216 12:58:50.152886 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/326be917-499c-401a-aae1-40840ab247ef-kubelet-dir\") pod \"csi-node-driver-jv88p\" (UID: \"326be917-499c-401a-aae1-40840ab247ef\") " pod="calico-system/csi-node-driver-jv88p" Dec 16 12:58:50.153320 kubelet[2733]: I1216 12:58:50.152919 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/326be917-499c-401a-aae1-40840ab247ef-registration-dir\") pod \"csi-node-driver-jv88p\" (UID: \"326be917-499c-401a-aae1-40840ab247ef\") " pod="calico-system/csi-node-driver-jv88p" Dec 16 12:58:50.154213 kubelet[2733]: E1216 12:58:50.154140 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:50.154213 kubelet[2733]: W1216 12:58:50.154162 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:50.154213 kubelet[2733]: E1216 12:58:50.154180 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:50.154765 kubelet[2733]: E1216 12:58:50.154438 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:50.154765 kubelet[2733]: W1216 12:58:50.154488 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:50.154765 kubelet[2733]: E1216 12:58:50.154532 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:50.155007 kubelet[2733]: E1216 12:58:50.154898 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:50.155007 kubelet[2733]: W1216 12:58:50.154912 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:50.155007 kubelet[2733]: E1216 12:58:50.154938 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:58:50.155207 kubelet[2733]: E1216 12:58:50.155189 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:50.155207 kubelet[2733]: W1216 12:58:50.155204 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:50.155207 kubelet[2733]: E1216 12:58:50.155215 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:50.155521 kubelet[2733]: E1216 12:58:50.155424 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:50.155521 kubelet[2733]: W1216 12:58:50.155442 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:50.155521 kubelet[2733]: E1216 12:58:50.155453 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:50.155888 kubelet[2733]: E1216 12:58:50.155842 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:50.155888 kubelet[2733]: W1216 12:58:50.155863 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:50.155888 kubelet[2733]: E1216 12:58:50.155877 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:50.156137 kubelet[2733]: E1216 12:58:50.156114 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:50.156137 kubelet[2733]: W1216 12:58:50.156133 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:50.156250 kubelet[2733]: E1216 12:58:50.156144 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:50.156569 kubelet[2733]: E1216 12:58:50.156553 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:50.156569 kubelet[2733]: W1216 12:58:50.156565 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:50.156670 kubelet[2733]: E1216 12:58:50.156577 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:58:50.157731 kubelet[2733]: E1216 12:58:50.157627 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:50.157731 kubelet[2733]: W1216 12:58:50.157641 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:50.157731 kubelet[2733]: E1216 12:58:50.157652 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:50.158067 kubelet[2733]: E1216 12:58:50.157961 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:50.158067 kubelet[2733]: W1216 12:58:50.157977 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:50.158067 kubelet[2733]: E1216 12:58:50.157988 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:50.158517 kubelet[2733]: E1216 12:58:50.158227 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:50.158517 kubelet[2733]: W1216 12:58:50.158240 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:50.158517 kubelet[2733]: E1216 12:58:50.158250 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:50.159213 kubelet[2733]: E1216 12:58:50.158821 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:50.159213 kubelet[2733]: W1216 12:58:50.158837 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:50.159213 kubelet[2733]: E1216 12:58:50.159148 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:50.162612 kubelet[2733]: E1216 12:58:50.162568 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:50.162612 kubelet[2733]: W1216 12:58:50.162604 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:50.162780 kubelet[2733]: E1216 12:58:50.162625 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:58:50.162921 kubelet[2733]: E1216 12:58:50.162900 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:50.162921 kubelet[2733]: W1216 12:58:50.162918 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:50.163091 kubelet[2733]: E1216 12:58:50.162932 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:50.163185 kubelet[2733]: E1216 12:58:50.163162 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:50.163185 kubelet[2733]: W1216 12:58:50.163181 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:50.163586 kubelet[2733]: E1216 12:58:50.163191 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:50.164600 kubelet[2733]: E1216 12:58:50.164574 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:50.164600 kubelet[2733]: W1216 12:58:50.164598 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:50.164778 kubelet[2733]: E1216 12:58:50.164613 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:50.164871 kubelet[2733]: E1216 12:58:50.164844 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:50.164871 kubelet[2733]: W1216 12:58:50.164863 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:50.165011 kubelet[2733]: E1216 12:58:50.164875 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:50.169357 kubelet[2733]: E1216 12:58:50.165074 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:50.169357 kubelet[2733]: W1216 12:58:50.165087 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:50.169357 kubelet[2733]: E1216 12:58:50.165096 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:58:50.169357 kubelet[2733]: E1216 12:58:50.165337 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:50.169357 kubelet[2733]: W1216 12:58:50.165346 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:50.169357 kubelet[2733]: E1216 12:58:50.165356 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:50.169357 kubelet[2733]: E1216 12:58:50.165652 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:50.169357 kubelet[2733]: W1216 12:58:50.165661 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:50.169357 kubelet[2733]: E1216 12:58:50.165670 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:50.169357 kubelet[2733]: E1216 12:58:50.166046 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:50.169688 kubelet[2733]: W1216 12:58:50.166065 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:50.169688 kubelet[2733]: E1216 12:58:50.166079 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:50.169688 kubelet[2733]: E1216 12:58:50.166262 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:50.169688 kubelet[2733]: W1216 12:58:50.166271 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:50.169688 kubelet[2733]: E1216 12:58:50.166281 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:50.169688 kubelet[2733]: E1216 12:58:50.166478 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:50.169688 kubelet[2733]: W1216 12:58:50.166548 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:50.169688 kubelet[2733]: E1216 12:58:50.166561 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:58:50.169688 kubelet[2733]: E1216 12:58:50.166776 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:50.169688 kubelet[2733]: W1216 12:58:50.166785 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:50.170193 kubelet[2733]: E1216 12:58:50.166795 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:50.170193 kubelet[2733]: E1216 12:58:50.167001 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:50.170193 kubelet[2733]: W1216 12:58:50.167010 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:50.170193 kubelet[2733]: E1216 12:58:50.167019 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:50.170193 kubelet[2733]: E1216 12:58:50.167221 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:50.170193 kubelet[2733]: W1216 12:58:50.167230 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:50.170193 kubelet[2733]: E1216 12:58:50.167240 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:50.170193 kubelet[2733]: E1216 12:58:50.167479 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:50.170193 kubelet[2733]: W1216 12:58:50.167505 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:50.170193 kubelet[2733]: E1216 12:58:50.167516 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:50.170412 kubelet[2733]: E1216 12:58:50.167734 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:50.170412 kubelet[2733]: W1216 12:58:50.167743 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:50.170412 kubelet[2733]: E1216 12:58:50.167753 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:58:50.170412 kubelet[2733]: E1216 12:58:50.168012 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:50.170412 kubelet[2733]: W1216 12:58:50.168021 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:50.170412 kubelet[2733]: E1216 12:58:50.168031 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:50.170412 kubelet[2733]: E1216 12:58:50.168340 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:50.170412 kubelet[2733]: W1216 12:58:50.168350 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:50.170412 kubelet[2733]: E1216 12:58:50.168360 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:50.170412 kubelet[2733]: E1216 12:58:50.168615 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:50.170644 kubelet[2733]: W1216 12:58:50.168624 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:50.170644 kubelet[2733]: E1216 12:58:50.168634 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:50.170644 kubelet[2733]: E1216 12:58:50.168874 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:50.170644 kubelet[2733]: W1216 12:58:50.168887 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:50.170644 kubelet[2733]: E1216 12:58:50.168897 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:50.170644 kubelet[2733]: E1216 12:58:50.169200 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:50.170644 kubelet[2733]: W1216 12:58:50.169210 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:50.170644 kubelet[2733]: E1216 12:58:50.169221 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:58:50.170644 kubelet[2733]: E1216 12:58:50.169427 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:50.170644 kubelet[2733]: W1216 12:58:50.169437 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:50.170889 kubelet[2733]: E1216 12:58:50.169447 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:50.170889 kubelet[2733]: E1216 12:58:50.169728 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:50.170889 kubelet[2733]: W1216 12:58:50.169739 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:50.170889 kubelet[2733]: E1216 12:58:50.169750 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:50.170889 kubelet[2733]: E1216 12:58:50.169995 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:50.170889 kubelet[2733]: W1216 12:58:50.170004 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:50.170889 kubelet[2733]: E1216 12:58:50.170046 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:50.170889 kubelet[2733]: E1216 12:58:50.170303 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:50.170889 kubelet[2733]: W1216 12:58:50.170315 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:50.170889 kubelet[2733]: E1216 12:58:50.170326 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:50.171119 kubelet[2733]: E1216 12:58:50.170598 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:50.171119 kubelet[2733]: W1216 12:58:50.170610 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:50.171119 kubelet[2733]: E1216 12:58:50.170620 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:58:50.171119 kubelet[2733]: E1216 12:58:50.170845 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:50.171119 kubelet[2733]: W1216 12:58:50.170855 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:50.171119 kubelet[2733]: E1216 12:58:50.170869 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:50.171119 kubelet[2733]: E1216 12:58:50.171113 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:50.171119 kubelet[2733]: W1216 12:58:50.171123 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:50.171299 kubelet[2733]: E1216 12:58:50.171134 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:50.172578 kubelet[2733]: E1216 12:58:50.172549 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:50.172578 kubelet[2733]: W1216 12:58:50.172565 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:50.172578 kubelet[2733]: E1216 12:58:50.172586 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:50.173097 kubelet[2733]: E1216 12:58:50.172830 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:50.173097 kubelet[2733]: W1216 12:58:50.172840 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:50.173097 kubelet[2733]: E1216 12:58:50.172875 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:50.173262 kubelet[2733]: E1216 12:58:50.173142 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:50.173262 kubelet[2733]: W1216 12:58:50.173152 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:50.173262 kubelet[2733]: E1216 12:58:50.173162 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:58:50.180597 kubelet[2733]: E1216 12:58:50.180562 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:50.180597 kubelet[2733]: W1216 12:58:50.180587 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:50.180765 kubelet[2733]: E1216 12:58:50.180608 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:50.200859 containerd[1589]: time="2025-12-16T12:58:50.200792750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6f498f57fb-b8mh6,Uid:54a2bc22-fbe9-464a-b0a7-df45872b3817,Namespace:calico-system,Attempt:0,}" Dec 16 12:58:50.245754 containerd[1589]: time="2025-12-16T12:58:50.245696982Z" level=info msg="connecting to shim f90743285766b2439e10f042fd8e42139b4cf021a3330a4ad4e5e134d93e954f" address="unix:///run/containerd/s/920d9ac3fc3d159973d8a1d07403db4dd67ec3caf65f678d865da830877d42bb" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:58:50.253481 kubelet[2733]: E1216 12:58:50.253439 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:50.253481 kubelet[2733]: W1216 12:58:50.253470 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:50.253481 kubelet[2733]: E1216 12:58:50.253506 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:50.253730 kubelet[2733]: E1216 12:58:50.253711 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:50.253730 kubelet[2733]: W1216 12:58:50.253724 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:50.253788 kubelet[2733]: E1216 12:58:50.253732 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:50.254000 kubelet[2733]: E1216 12:58:50.253984 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:50.254000 kubelet[2733]: W1216 12:58:50.253995 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:50.254056 kubelet[2733]: E1216 12:58:50.254003 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:58:50.257991 containerd[1589]: time="2025-12-16T12:58:50.257895879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mvc4d,Uid:9ff73383-32c4-4fe5-819d-18bedbf139be,Namespace:calico-system,Attempt:0,}" Dec 16 12:58:50.258149 kubelet[2733]: E1216 12:58:50.258132 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:50.258149 kubelet[2733]: W1216 12:58:50.258144 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:50.258195 kubelet[2733]: E1216 12:58:50.258153 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:50.258375 kubelet[2733]: E1216 12:58:50.258359 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:50.258375 kubelet[2733]: W1216 12:58:50.258370 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:50.258431 kubelet[2733]: E1216 12:58:50.258377 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:50.258664 kubelet[2733]: E1216 12:58:50.258648 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:50.258664 kubelet[2733]: W1216 12:58:50.258660 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:50.258715 kubelet[2733]: E1216 12:58:50.258668 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:50.258885 kubelet[2733]: E1216 12:58:50.258869 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:50.258885 kubelet[2733]: W1216 12:58:50.258880 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:50.258940 kubelet[2733]: E1216 12:58:50.258888 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:58:50.259268 kubelet[2733]: E1216 12:58:50.259246 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:50.259268 kubelet[2733]: W1216 12:58:50.259260 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:50.259268 kubelet[2733]: E1216 12:58:50.259268 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:50.260597 kubelet[2733]: E1216 12:58:50.260571 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:50.260597 kubelet[2733]: W1216 12:58:50.260588 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:50.260597 kubelet[2733]: E1216 12:58:50.260598 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:50.261156 kubelet[2733]: E1216 12:58:50.261135 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:50.261156 kubelet[2733]: W1216 12:58:50.261149 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:50.261231 kubelet[2733]: E1216 12:58:50.261160 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:50.270249 kubelet[2733]: E1216 12:58:50.270229 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:50.270249 kubelet[2733]: W1216 12:58:50.270246 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:50.270350 kubelet[2733]: E1216 12:58:50.270267 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:50.271642 systemd[1]: Started cri-containerd-f90743285766b2439e10f042fd8e42139b4cf021a3330a4ad4e5e134d93e954f.scope - libcontainer container f90743285766b2439e10f042fd8e42139b4cf021a3330a4ad4e5e134d93e954f. Dec 16 12:58:50.280747 containerd[1589]: time="2025-12-16T12:58:50.280705450Z" level=info msg="connecting to shim 4ea381b02a13e946046440a7b8ae3c3025ea65b9051edcb521be0012e0e1137c" address="unix:///run/containerd/s/94a9945282ddac822c2af9d75dd6bb61fad2e084c25f3731e9343d8d19f5f7e5" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:58:50.313841 systemd[1]: Started cri-containerd-4ea381b02a13e946046440a7b8ae3c3025ea65b9051edcb521be0012e0e1137c.scope - libcontainer container 4ea381b02a13e946046440a7b8ae3c3025ea65b9051edcb521be0012e0e1137c. 
Dec 16 12:58:50.327717 containerd[1589]: time="2025-12-16T12:58:50.327676379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6f498f57fb-b8mh6,Uid:54a2bc22-fbe9-464a-b0a7-df45872b3817,Namespace:calico-system,Attempt:0,} returns sandbox id \"f90743285766b2439e10f042fd8e42139b4cf021a3330a4ad4e5e134d93e954f\"" Dec 16 12:58:50.329602 containerd[1589]: time="2025-12-16T12:58:50.329570131Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 16 12:58:50.351521 containerd[1589]: time="2025-12-16T12:58:50.351418820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mvc4d,Uid:9ff73383-32c4-4fe5-819d-18bedbf139be,Namespace:calico-system,Attempt:0,} returns sandbox id \"4ea381b02a13e946046440a7b8ae3c3025ea65b9051edcb521be0012e0e1137c\"" Dec 16 12:58:51.926749 kubelet[2733]: E1216 12:58:51.926653 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jv88p" podUID="326be917-499c-401a-aae1-40840ab247ef" Dec 16 12:58:51.981184 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1052808735.mount: Deactivated successfully. Dec 16 12:58:52.320036 containerd[1589]: time="2025-12-16T12:58:52.319926410Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:58:52.320908 containerd[1589]: time="2025-12-16T12:58:52.320862364Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Dec 16 12:58:52.322191 containerd[1589]: time="2025-12-16T12:58:52.322157305Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:58:52.323950 containerd[1589]: time="2025-12-16T12:58:52.323903587Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:58:52.324463 containerd[1589]: time="2025-12-16T12:58:52.324416774Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 1.994813189s" Dec 16 12:58:52.324463 containerd[1589]: time="2025-12-16T12:58:52.324455557Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Dec 16 12:58:52.325406 containerd[1589]: time="2025-12-16T12:58:52.325373887Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 16 12:58:52.340260 containerd[1589]: time="2025-12-16T12:58:52.340217172Z" level=info msg="CreateContainer within sandbox \"f90743285766b2439e10f042fd8e42139b4cf021a3330a4ad4e5e134d93e954f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 16 12:58:52.348923 containerd[1589]: time="2025-12-16T12:58:52.348855424Z" level=info msg="Container 09dbae9a7c092bfab65a196cd4de4d785c471a93f7cc40c3a7541176b779780e: CDI devices 
from CRI Config.CDIDevices: []" Dec 16 12:58:52.357827 containerd[1589]: time="2025-12-16T12:58:52.357785784Z" level=info msg="CreateContainer within sandbox \"f90743285766b2439e10f042fd8e42139b4cf021a3330a4ad4e5e134d93e954f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"09dbae9a7c092bfab65a196cd4de4d785c471a93f7cc40c3a7541176b779780e\"" Dec 16 12:58:52.358307 containerd[1589]: time="2025-12-16T12:58:52.358256692Z" level=info msg="StartContainer for \"09dbae9a7c092bfab65a196cd4de4d785c471a93f7cc40c3a7541176b779780e\"" Dec 16 12:58:52.359545 containerd[1589]: time="2025-12-16T12:58:52.359518991Z" level=info msg="connecting to shim 09dbae9a7c092bfab65a196cd4de4d785c471a93f7cc40c3a7541176b779780e" address="unix:///run/containerd/s/920d9ac3fc3d159973d8a1d07403db4dd67ec3caf65f678d865da830877d42bb" protocol=ttrpc version=3 Dec 16 12:58:52.391665 systemd[1]: Started cri-containerd-09dbae9a7c092bfab65a196cd4de4d785c471a93f7cc40c3a7541176b779780e.scope - libcontainer container 09dbae9a7c092bfab65a196cd4de4d785c471a93f7cc40c3a7541176b779780e. Dec 16 12:58:52.454802 containerd[1589]: time="2025-12-16T12:58:52.454738534Z" level=info msg="StartContainer for \"09dbae9a7c092bfab65a196cd4de4d785c471a93f7cc40c3a7541176b779780e\" returns successfully" Dec 16 12:58:53.308138 kubelet[2733]: I1216 12:58:53.308063 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6f498f57fb-b8mh6" podStartSLOduration=2.311989257 podStartE2EDuration="4.308044758s" podCreationTimestamp="2025-12-16 12:58:49 +0000 UTC" firstStartedPulling="2025-12-16 12:58:50.329201436 +0000 UTC m=+20.494904206" lastFinishedPulling="2025-12-16 12:58:52.325256927 +0000 UTC m=+22.490959707" observedRunningTime="2025-12-16 12:58:53.307771284 +0000 UTC m=+23.473474064" watchObservedRunningTime="2025-12-16 12:58:53.308044758 +0000 UTC m=+23.473747528" Dec 16 12:58:53.350618 kubelet[2733]: E1216 12:58:53.350581 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:53.350618 kubelet[2733]: W1216 12:58:53.350603 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:53.350618 kubelet[2733]: E1216 12:58:53.350624 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:53.350819 kubelet[2733]: E1216 12:58:53.350777 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:53.350819 kubelet[2733]: W1216 12:58:53.350784 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:53.350819 kubelet[2733]: E1216 12:58:53.350791 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:58:53.383637 kubelet[2733]: E1216 12:58:53.383622 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:53.383637 kubelet[2733]: W1216 12:58:53.383634 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:53.383722 kubelet[2733]: E1216 12:58:53.383644 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:53.758605 containerd[1589]: time="2025-12-16T12:58:53.758550432Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:58:53.759787 containerd[1589]: time="2025-12-16T12:58:53.759753929Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Dec 16 12:58:53.760953 containerd[1589]: time="2025-12-16T12:58:53.760908294Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:58:53.763332 containerd[1589]: time="2025-12-16T12:58:53.763288699Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:58:53.763835 containerd[1589]: time="2025-12-16T12:58:53.763800113Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.438392502s" Dec 16 12:58:53.763835 containerd[1589]: time="2025-12-16T12:58:53.763826743Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Dec 16 12:58:53.767903 containerd[1589]: time="2025-12-16T12:58:53.767875421Z" level=info msg="CreateContainer within sandbox \"4ea381b02a13e946046440a7b8ae3c3025ea65b9051edcb521be0012e0e1137c\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 16 12:58:53.777867 containerd[1589]: time="2025-12-16T12:58:53.777833082Z" level=info msg="Container 3c10d73377399e1c0fafa6ac550f856b3221f0e619eca70f53689e71a772bea1: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:58:53.786886 containerd[1589]: time="2025-12-16T12:58:53.786847967Z" level=info msg="CreateContainer within sandbox \"4ea381b02a13e946046440a7b8ae3c3025ea65b9051edcb521be0012e0e1137c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"3c10d73377399e1c0fafa6ac550f856b3221f0e619eca70f53689e71a772bea1\"" Dec 16 12:58:53.787482 containerd[1589]: time="2025-12-16T12:58:53.787455001Z" level=info msg="StartContainer for \"3c10d73377399e1c0fafa6ac550f856b3221f0e619eca70f53689e71a772bea1\"" Dec 16 12:58:53.788886 containerd[1589]: time="2025-12-16T12:58:53.788856292Z" level=info msg="connecting to 
shim 3c10d73377399e1c0fafa6ac550f856b3221f0e619eca70f53689e71a772bea1" address="unix:///run/containerd/s/94a9945282ddac822c2af9d75dd6bb61fad2e084c25f3731e9343d8d19f5f7e5" protocol=ttrpc version=3 Dec 16 12:58:53.811670 systemd[1]: Started cri-containerd-3c10d73377399e1c0fafa6ac550f856b3221f0e619eca70f53689e71a772bea1.scope - libcontainer container 3c10d73377399e1c0fafa6ac550f856b3221f0e619eca70f53689e71a772bea1. Dec 16 12:58:53.898897 containerd[1589]: time="2025-12-16T12:58:53.898842979Z" level=info msg="StartContainer for \"3c10d73377399e1c0fafa6ac550f856b3221f0e619eca70f53689e71a772bea1\" returns successfully" Dec 16 12:58:53.908136 systemd[1]: cri-containerd-3c10d73377399e1c0fafa6ac550f856b3221f0e619eca70f53689e71a772bea1.scope: Deactivated successfully. Dec 16 12:58:53.912884 containerd[1589]: time="2025-12-16T12:58:53.912834790Z" level=info msg="received container exit event container_id:\"3c10d73377399e1c0fafa6ac550f856b3221f0e619eca70f53689e71a772bea1\" id:\"3c10d73377399e1c0fafa6ac550f856b3221f0e619eca70f53689e71a772bea1\" pid:3445 exited_at:{seconds:1765889933 nanos:912104285}" Dec 16 12:58:53.926939 kubelet[2733]: E1216 12:58:53.926886 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jv88p" podUID="326be917-499c-401a-aae1-40840ab247ef" Dec 16 12:58:53.940546 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3c10d73377399e1c0fafa6ac550f856b3221f0e619eca70f53689e71a772bea1-rootfs.mount: Deactivated successfully. Dec 16 12:58:55.390975 containerd[1589]: time="2025-12-16T12:58:55.390917724Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 16 12:58:55.926992 kubelet[2733]: E1216 12:58:55.926920 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jv88p" podUID="326be917-499c-401a-aae1-40840ab247ef" Dec 16 12:58:57.771358 containerd[1589]: time="2025-12-16T12:58:57.771296138Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:58:57.772219 containerd[1589]: time="2025-12-16T12:58:57.772200671Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Dec 16 12:58:57.773468 containerd[1589]: time="2025-12-16T12:58:57.773442950Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:58:57.775855 containerd[1589]: time="2025-12-16T12:58:57.775810044Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:58:57.776306 containerd[1589]: time="2025-12-16T12:58:57.776281051Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", 
size \"71941459\" in 2.385322159s" Dec 16 12:58:57.776306 containerd[1589]: time="2025-12-16T12:58:57.776304946Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Dec 16 12:58:57.781519 containerd[1589]: time="2025-12-16T12:58:57.781460268Z" level=info msg="CreateContainer within sandbox \"4ea381b02a13e946046440a7b8ae3c3025ea65b9051edcb521be0012e0e1137c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 16 12:58:57.794109 containerd[1589]: time="2025-12-16T12:58:57.794078356Z" level=info msg="Container 16c454877c65e5606af8881afe2caddbd71b23638bace163d26cf450e8e1e1f8: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:58:57.804139 containerd[1589]: time="2025-12-16T12:58:57.804103884Z" level=info msg="CreateContainer within sandbox \"4ea381b02a13e946046440a7b8ae3c3025ea65b9051edcb521be0012e0e1137c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"16c454877c65e5606af8881afe2caddbd71b23638bace163d26cf450e8e1e1f8\"" Dec 16 12:58:57.804449 containerd[1589]: time="2025-12-16T12:58:57.804433014Z" level=info msg="StartContainer for \"16c454877c65e5606af8881afe2caddbd71b23638bace163d26cf450e8e1e1f8\"" Dec 16 12:58:57.805750 containerd[1589]: time="2025-12-16T12:58:57.805706140Z" level=info msg="connecting to shim 16c454877c65e5606af8881afe2caddbd71b23638bace163d26cf450e8e1e1f8" address="unix:///run/containerd/s/94a9945282ddac822c2af9d75dd6bb61fad2e084c25f3731e9343d8d19f5f7e5" protocol=ttrpc version=3 Dec 16 12:58:57.832635 systemd[1]: Started cri-containerd-16c454877c65e5606af8881afe2caddbd71b23638bace163d26cf450e8e1e1f8.scope - libcontainer container 16c454877c65e5606af8881afe2caddbd71b23638bace163d26cf450e8e1e1f8. Dec 16 12:58:57.927078 kubelet[2733]: E1216 12:58:57.927017 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jv88p" podUID="326be917-499c-401a-aae1-40840ab247ef" Dec 16 12:58:57.942687 containerd[1589]: time="2025-12-16T12:58:57.942643652Z" level=info msg="StartContainer for \"16c454877c65e5606af8881afe2caddbd71b23638bace163d26cf450e8e1e1f8\" returns successfully" Dec 16 12:58:59.898722 systemd[1]: cri-containerd-16c454877c65e5606af8881afe2caddbd71b23638bace163d26cf450e8e1e1f8.scope: Deactivated successfully. Dec 16 12:58:59.899048 systemd[1]: cri-containerd-16c454877c65e5606af8881afe2caddbd71b23638bace163d26cf450e8e1e1f8.scope: Consumed 687ms CPU time, 180.1M memory peak, 3.5M read from disk, 171.3M written to disk. 
Dec 16 12:58:59.915610 containerd[1589]: time="2025-12-16T12:58:59.915557401Z" level=info msg="received container exit event container_id:\"16c454877c65e5606af8881afe2caddbd71b23638bace163d26cf450e8e1e1f8\" id:\"16c454877c65e5606af8881afe2caddbd71b23638bace163d26cf450e8e1e1f8\" pid:3506 exited_at:{seconds:1765889939 nanos:900800310}" Dec 16 12:58:59.927422 kubelet[2733]: E1216 12:58:59.927379 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jv88p" podUID="326be917-499c-401a-aae1-40840ab247ef" Dec 16 12:58:59.942186 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-16c454877c65e5606af8881afe2caddbd71b23638bace163d26cf450e8e1e1f8-rootfs.mount: Deactivated successfully. Dec 16 12:59:00.224645 kubelet[2733]: I1216 12:59:00.224214 2733 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 16 12:59:00.555784 systemd[1]: Created slice kubepods-besteffort-pod7b1aa4d7_170e_4aba_a4ce_0095dc4bfb6c.slice - libcontainer container kubepods-besteffort-pod7b1aa4d7_170e_4aba_a4ce_0095dc4bfb6c.slice. Dec 16 12:59:00.566995 systemd[1]: Created slice kubepods-burstable-podae19310b_b8cf_4899_a55a_6c349ce6f20f.slice - libcontainer container kubepods-burstable-podae19310b_b8cf_4899_a55a_6c349ce6f20f.slice. Dec 16 12:59:00.575932 systemd[1]: Created slice kubepods-burstable-pod4d76e126_0792_4c26_bff3_5168fb16b8b9.slice - libcontainer container kubepods-burstable-pod4d76e126_0792_4c26_bff3_5168fb16b8b9.slice. Dec 16 12:59:00.584013 systemd[1]: Created slice kubepods-besteffort-pod14fa36c4_6467_4e58_88b1_8675a1ddf3eb.slice - libcontainer container kubepods-besteffort-pod14fa36c4_6467_4e58_88b1_8675a1ddf3eb.slice. Dec 16 12:59:00.590443 systemd[1]: Created slice kubepods-besteffort-pod4557e1c6_8a81_439a_bd50_8cd81381c4a7.slice - libcontainer container kubepods-besteffort-pod4557e1c6_8a81_439a_bd50_8cd81381c4a7.slice. Dec 16 12:59:00.597989 systemd[1]: Created slice kubepods-besteffort-pod5bb3cad0_1f8c_4e17_94fc_cb2abb08a3e9.slice - libcontainer container kubepods-besteffort-pod5bb3cad0_1f8c_4e17_94fc_cb2abb08a3e9.slice. Dec 16 12:59:00.606572 systemd[1]: Created slice kubepods-besteffort-pod63bfc57c_01f9_4eda_a8ed_3f2fe04dfe53.slice - libcontainer container kubepods-besteffort-pod63bfc57c_01f9_4eda_a8ed_3f2fe04dfe53.slice. 
Dec 16 12:59:00.632123 kubelet[2733]: I1216 12:59:00.632075 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6b25q\" (UniqueName: \"kubernetes.io/projected/63bfc57c-01f9-4eda-a8ed-3f2fe04dfe53-kube-api-access-6b25q\") pod \"calico-kube-controllers-8b9b958b-znds7\" (UID: \"63bfc57c-01f9-4eda-a8ed-3f2fe04dfe53\") " pod="calico-system/calico-kube-controllers-8b9b958b-znds7" Dec 16 12:59:00.632283 kubelet[2733]: I1216 12:59:00.632177 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7b1aa4d7-170e-4aba-a4ce-0095dc4bfb6c-whisker-backend-key-pair\") pod \"whisker-6d4b486f75-dpkxz\" (UID: \"7b1aa4d7-170e-4aba-a4ce-0095dc4bfb6c\") " pod="calico-system/whisker-6d4b486f75-dpkxz" Dec 16 12:59:00.632283 kubelet[2733]: I1216 12:59:00.632239 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d76e126-0792-4c26-bff3-5168fb16b8b9-config-volume\") pod \"coredns-674b8bbfcf-pmgkj\" (UID: \"4d76e126-0792-4c26-bff3-5168fb16b8b9\") " pod="kube-system/coredns-674b8bbfcf-pmgkj" Dec 16 12:59:00.632283 kubelet[2733]: I1216 12:59:00.632269 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/14fa36c4-6467-4e58-88b1-8675a1ddf3eb-calico-apiserver-certs\") pod \"calico-apiserver-766db845cb-vmh89\" (UID: \"14fa36c4-6467-4e58-88b1-8675a1ddf3eb\") " pod="calico-apiserver/calico-apiserver-766db845cb-vmh89" Dec 16 12:59:00.632378 kubelet[2733]: I1216 12:59:00.632325 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5bb3cad0-1f8c-4e17-94fc-cb2abb08a3e9-calico-apiserver-certs\") pod \"calico-apiserver-766db845cb-wm7wl\" (UID: \"5bb3cad0-1f8c-4e17-94fc-cb2abb08a3e9\") " pod="calico-apiserver/calico-apiserver-766db845cb-wm7wl" Dec 16 12:59:00.632378 kubelet[2733]: I1216 12:59:00.632354 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92c2m\" (UniqueName: \"kubernetes.io/projected/5bb3cad0-1f8c-4e17-94fc-cb2abb08a3e9-kube-api-access-92c2m\") pod \"calico-apiserver-766db845cb-wm7wl\" (UID: \"5bb3cad0-1f8c-4e17-94fc-cb2abb08a3e9\") " pod="calico-apiserver/calico-apiserver-766db845cb-wm7wl" Dec 16 12:59:00.632428 kubelet[2733]: I1216 12:59:00.632377 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/63bfc57c-01f9-4eda-a8ed-3f2fe04dfe53-tigera-ca-bundle\") pod \"calico-kube-controllers-8b9b958b-znds7\" (UID: \"63bfc57c-01f9-4eda-a8ed-3f2fe04dfe53\") " pod="calico-system/calico-kube-controllers-8b9b958b-znds7" Dec 16 12:59:00.632428 kubelet[2733]: I1216 12:59:00.632400 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b1aa4d7-170e-4aba-a4ce-0095dc4bfb6c-whisker-ca-bundle\") pod \"whisker-6d4b486f75-dpkxz\" (UID: \"7b1aa4d7-170e-4aba-a4ce-0095dc4bfb6c\") " pod="calico-system/whisker-6d4b486f75-dpkxz" Dec 16 12:59:00.632428 kubelet[2733]: I1216 12:59:00.632423 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/4557e1c6-8a81-439a-bd50-8cd81381c4a7-goldmane-key-pair\") pod \"goldmane-666569f655-44tcf\" (UID: \"4557e1c6-8a81-439a-bd50-8cd81381c4a7\") " pod="calico-system/goldmane-666569f655-44tcf" Dec 16 12:59:00.632524 kubelet[2733]: I1216 12:59:00.632446 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw4x8\" (UniqueName: \"kubernetes.io/projected/4d76e126-0792-4c26-bff3-5168fb16b8b9-kube-api-access-fw4x8\") pod \"coredns-674b8bbfcf-pmgkj\" (UID: \"4d76e126-0792-4c26-bff3-5168fb16b8b9\") " pod="kube-system/coredns-674b8bbfcf-pmgkj" Dec 16 12:59:00.632524 kubelet[2733]: I1216 12:59:00.632472 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae19310b-b8cf-4899-a55a-6c349ce6f20f-config-volume\") pod \"coredns-674b8bbfcf-nqc85\" (UID: \"ae19310b-b8cf-4899-a55a-6c349ce6f20f\") " pod="kube-system/coredns-674b8bbfcf-nqc85" Dec 16 12:59:00.632524 kubelet[2733]: I1216 12:59:00.632515 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4557e1c6-8a81-439a-bd50-8cd81381c4a7-goldmane-ca-bundle\") pod \"goldmane-666569f655-44tcf\" (UID: \"4557e1c6-8a81-439a-bd50-8cd81381c4a7\") " pod="calico-system/goldmane-666569f655-44tcf" Dec 16 12:59:00.632594 kubelet[2733]: I1216 12:59:00.632542 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvnhx\" (UniqueName: \"kubernetes.io/projected/4557e1c6-8a81-439a-bd50-8cd81381c4a7-kube-api-access-cvnhx\") pod \"goldmane-666569f655-44tcf\" (UID: \"4557e1c6-8a81-439a-bd50-8cd81381c4a7\") " pod="calico-system/goldmane-666569f655-44tcf" Dec 16 12:59:00.632624 kubelet[2733]: I1216 12:59:00.632603 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmr7c\" (UniqueName: \"kubernetes.io/projected/14fa36c4-6467-4e58-88b1-8675a1ddf3eb-kube-api-access-bmr7c\") pod \"calico-apiserver-766db845cb-vmh89\" (UID: \"14fa36c4-6467-4e58-88b1-8675a1ddf3eb\") " pod="calico-apiserver/calico-apiserver-766db845cb-vmh89" Dec 16 12:59:00.632672 kubelet[2733]: I1216 12:59:00.632647 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6ssj\" (UniqueName: \"kubernetes.io/projected/7b1aa4d7-170e-4aba-a4ce-0095dc4bfb6c-kube-api-access-x6ssj\") pod \"whisker-6d4b486f75-dpkxz\" (UID: \"7b1aa4d7-170e-4aba-a4ce-0095dc4bfb6c\") " pod="calico-system/whisker-6d4b486f75-dpkxz" Dec 16 12:59:00.632701 kubelet[2733]: I1216 12:59:00.632681 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw5jz\" (UniqueName: \"kubernetes.io/projected/ae19310b-b8cf-4899-a55a-6c349ce6f20f-kube-api-access-fw5jz\") pod \"coredns-674b8bbfcf-nqc85\" (UID: \"ae19310b-b8cf-4899-a55a-6c349ce6f20f\") " pod="kube-system/coredns-674b8bbfcf-nqc85" Dec 16 12:59:00.632728 kubelet[2733]: I1216 12:59:00.632712 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4557e1c6-8a81-439a-bd50-8cd81381c4a7-config\") pod \"goldmane-666569f655-44tcf\" (UID: \"4557e1c6-8a81-439a-bd50-8cd81381c4a7\") " pod="calico-system/goldmane-666569f655-44tcf" Dec 16 
12:59:00.860989 containerd[1589]: time="2025-12-16T12:59:00.860939748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d4b486f75-dpkxz,Uid:7b1aa4d7-170e-4aba-a4ce-0095dc4bfb6c,Namespace:calico-system,Attempt:0,}" Dec 16 12:59:00.871620 containerd[1589]: time="2025-12-16T12:59:00.871574831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nqc85,Uid:ae19310b-b8cf-4899-a55a-6c349ce6f20f,Namespace:kube-system,Attempt:0,}" Dec 16 12:59:00.886557 containerd[1589]: time="2025-12-16T12:59:00.886058221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pmgkj,Uid:4d76e126-0792-4c26-bff3-5168fb16b8b9,Namespace:kube-system,Attempt:0,}" Dec 16 12:59:00.887971 containerd[1589]: time="2025-12-16T12:59:00.887933107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-766db845cb-vmh89,Uid:14fa36c4-6467-4e58-88b1-8675a1ddf3eb,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:59:00.897568 containerd[1589]: time="2025-12-16T12:59:00.897518976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-44tcf,Uid:4557e1c6-8a81-439a-bd50-8cd81381c4a7,Namespace:calico-system,Attempt:0,}" Dec 16 12:59:00.905136 containerd[1589]: time="2025-12-16T12:59:00.905105535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-766db845cb-wm7wl,Uid:5bb3cad0-1f8c-4e17-94fc-cb2abb08a3e9,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:59:00.910144 containerd[1589]: time="2025-12-16T12:59:00.910090872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8b9b958b-znds7,Uid:63bfc57c-01f9-4eda-a8ed-3f2fe04dfe53,Namespace:calico-system,Attempt:0,}" Dec 16 12:59:01.066421 containerd[1589]: time="2025-12-16T12:59:01.066269989Z" level=error msg="Failed to destroy network for sandbox \"3fea79728e26c79601f694eadcbf330aa33fea3f031f47cf962125901cfe9667\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:59:01.070753 systemd[1]: run-netns-cni\x2d3254a0af\x2d8f0b\x2d7112\x2d82ca\x2d634a9a285e69.mount: Deactivated successfully. Dec 16 12:59:01.076087 containerd[1589]: time="2025-12-16T12:59:01.073108088Z" level=error msg="Failed to destroy network for sandbox \"f636b188369a3d3aeb55167f0b0e505bb38f9bc15dd3fd96c2d39188064992ef\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:59:01.076653 containerd[1589]: time="2025-12-16T12:59:01.076625472Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d4b486f75-dpkxz,Uid:7b1aa4d7-170e-4aba-a4ce-0095dc4bfb6c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fea79728e26c79601f694eadcbf330aa33fea3f031f47cf962125901cfe9667\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:59:01.079233 systemd[1]: run-netns-cni\x2d9fdc8944\x2da32a\x2d64a5\x2dd36a\x2daa8fe5d34abc.mount: Deactivated successfully. 
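Every sandbox create and delete in this stretch fails with the same stat error: the Calico CNI plugin will not set up or tear down a pod network until /var/lib/calico/nodename exists, and that file is only written once the calico/node container has started and mounted /var/lib/calico/. Until then each RunPodSandbox attempt is rejected and kubelet leaves the affected pods waiting, retrying on its sync loop. A minimal Go sketch of that kind of guard, assuming only the path quoted in the error messages (illustrative, not the actual plugin source):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // Path quoted verbatim in the errors above; calico/node writes it at startup.
    const nodenameFile = "/var/lib/calico/nodename"

    // loadNodename sketches the pre-flight check the plugin performs before any
    // CNI ADD or DEL: no nodename file, no networking.
    func loadNodename() (string, error) {
        data, err := os.ReadFile(nodenameFile)
        if err != nil {
            return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
        }
        return strings.TrimSpace(string(data)), nil
    }

    func main() {
        name, err := loadNodename()
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("node name:", name)
    }

The run-netns-cni-*.mount units deactivating right after each failure are systemd cleaning up the network namespaces containerd had already created for the failed sandboxes.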
Dec 16 12:59:01.081798 containerd[1589]: time="2025-12-16T12:59:01.081769816Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-766db845cb-vmh89,Uid:14fa36c4-6467-4e58-88b1-8675a1ddf3eb,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f636b188369a3d3aeb55167f0b0e505bb38f9bc15dd3fd96c2d39188064992ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:59:01.081976 containerd[1589]: time="2025-12-16T12:59:01.081958120Z" level=error msg="Failed to destroy network for sandbox \"54ae25a5c93b38570d2a130c1f4c00bd86be3ae467637b9ec6d35f81e7fc35b5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:59:01.083974 systemd[1]: run-netns-cni\x2d604365ce\x2dcb05\x2d708e\x2de2d4\x2d3b467fa15123.mount: Deactivated successfully. Dec 16 12:59:01.086885 containerd[1589]: time="2025-12-16T12:59:01.084975634Z" level=error msg="Failed to destroy network for sandbox \"b19fda2c0aa303ad379ed40159b5b6ab83cd3c9dd9c3660b9def195f1951745a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:59:01.089376 containerd[1589]: time="2025-12-16T12:59:01.088929930Z" level=error msg="Failed to destroy network for sandbox \"3a1e005aba40b92b6b535f27749befc1d6b673992f6be99218c7d1669f1ee9fa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:59:01.089376 containerd[1589]: time="2025-12-16T12:59:01.089347996Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-766db845cb-wm7wl,Uid:5bb3cad0-1f8c-4e17-94fc-cb2abb08a3e9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b19fda2c0aa303ad379ed40159b5b6ab83cd3c9dd9c3660b9def195f1951745a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:59:01.091044 containerd[1589]: time="2025-12-16T12:59:01.091011835Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nqc85,Uid:ae19310b-b8cf-4899-a55a-6c349ce6f20f,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"54ae25a5c93b38570d2a130c1f4c00bd86be3ae467637b9ec6d35f81e7fc35b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:59:01.091146 kubelet[2733]: E1216 12:59:01.091098 2733 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fea79728e26c79601f694eadcbf330aa33fea3f031f47cf962125901cfe9667\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:59:01.091437 kubelet[2733]: E1216 
12:59:01.091148 2733 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b19fda2c0aa303ad379ed40159b5b6ab83cd3c9dd9c3660b9def195f1951745a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:59:01.091437 kubelet[2733]: E1216 12:59:01.091153 2733 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54ae25a5c93b38570d2a130c1f4c00bd86be3ae467637b9ec6d35f81e7fc35b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:59:01.091437 kubelet[2733]: E1216 12:59:01.091184 2733 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fea79728e26c79601f694eadcbf330aa33fea3f031f47cf962125901cfe9667\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6d4b486f75-dpkxz" Dec 16 12:59:01.091437 kubelet[2733]: E1216 12:59:01.091256 2733 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b19fda2c0aa303ad379ed40159b5b6ab83cd3c9dd9c3660b9def195f1951745a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-766db845cb-wm7wl" Dec 16 12:59:01.091584 kubelet[2733]: E1216 12:59:01.091276 2733 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fea79728e26c79601f694eadcbf330aa33fea3f031f47cf962125901cfe9667\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6d4b486f75-dpkxz" Dec 16 12:59:01.091584 kubelet[2733]: E1216 12:59:01.091318 2733 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b19fda2c0aa303ad379ed40159b5b6ab83cd3c9dd9c3660b9def195f1951745a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-766db845cb-wm7wl" Dec 16 12:59:01.091584 kubelet[2733]: E1216 12:59:01.091341 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6d4b486f75-dpkxz_calico-system(7b1aa4d7-170e-4aba-a4ce-0095dc4bfb6c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6d4b486f75-dpkxz_calico-system(7b1aa4d7-170e-4aba-a4ce-0095dc4bfb6c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3fea79728e26c79601f694eadcbf330aa33fea3f031f47cf962125901cfe9667\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/whisker-6d4b486f75-dpkxz" podUID="7b1aa4d7-170e-4aba-a4ce-0095dc4bfb6c" Dec 16 12:59:01.091672 kubelet[2733]: E1216 12:59:01.091256 2733 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54ae25a5c93b38570d2a130c1f4c00bd86be3ae467637b9ec6d35f81e7fc35b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-nqc85" Dec 16 12:59:01.091672 kubelet[2733]: E1216 12:59:01.091381 2733 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54ae25a5c93b38570d2a130c1f4c00bd86be3ae467637b9ec6d35f81e7fc35b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-nqc85" Dec 16 12:59:01.091672 kubelet[2733]: E1216 12:59:01.091391 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-766db845cb-wm7wl_calico-apiserver(5bb3cad0-1f8c-4e17-94fc-cb2abb08a3e9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-766db845cb-wm7wl_calico-apiserver(5bb3cad0-1f8c-4e17-94fc-cb2abb08a3e9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b19fda2c0aa303ad379ed40159b5b6ab83cd3c9dd9c3660b9def195f1951745a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-766db845cb-wm7wl" podUID="5bb3cad0-1f8c-4e17-94fc-cb2abb08a3e9" Dec 16 12:59:01.091760 kubelet[2733]: E1216 12:59:01.091438 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-nqc85_kube-system(ae19310b-b8cf-4899-a55a-6c349ce6f20f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-nqc85_kube-system(ae19310b-b8cf-4899-a55a-6c349ce6f20f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"54ae25a5c93b38570d2a130c1f4c00bd86be3ae467637b9ec6d35f81e7fc35b5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-nqc85" podUID="ae19310b-b8cf-4899-a55a-6c349ce6f20f" Dec 16 12:59:01.091760 kubelet[2733]: E1216 12:59:01.091212 2733 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f636b188369a3d3aeb55167f0b0e505bb38f9bc15dd3fd96c2d39188064992ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:59:01.091760 kubelet[2733]: E1216 12:59:01.091636 2733 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f636b188369a3d3aeb55167f0b0e505bb38f9bc15dd3fd96c2d39188064992ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-766db845cb-vmh89" Dec 16 12:59:01.091847 kubelet[2733]: E1216 12:59:01.091650 2733 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f636b188369a3d3aeb55167f0b0e505bb38f9bc15dd3fd96c2d39188064992ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-766db845cb-vmh89" Dec 16 12:59:01.091847 kubelet[2733]: E1216 12:59:01.091683 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-766db845cb-vmh89_calico-apiserver(14fa36c4-6467-4e58-88b1-8675a1ddf3eb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-766db845cb-vmh89_calico-apiserver(14fa36c4-6467-4e58-88b1-8675a1ddf3eb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f636b188369a3d3aeb55167f0b0e505bb38f9bc15dd3fd96c2d39188064992ef\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-766db845cb-vmh89" podUID="14fa36c4-6467-4e58-88b1-8675a1ddf3eb" Dec 16 12:59:01.093062 containerd[1589]: time="2025-12-16T12:59:01.093012066Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8b9b958b-znds7,Uid:63bfc57c-01f9-4eda-a8ed-3f2fe04dfe53,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a1e005aba40b92b6b535f27749befc1d6b673992f6be99218c7d1669f1ee9fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:59:01.093724 kubelet[2733]: E1216 12:59:01.093694 2733 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a1e005aba40b92b6b535f27749befc1d6b673992f6be99218c7d1669f1ee9fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:59:01.093794 kubelet[2733]: E1216 12:59:01.093728 2733 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a1e005aba40b92b6b535f27749befc1d6b673992f6be99218c7d1669f1ee9fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8b9b958b-znds7" Dec 16 12:59:01.093794 kubelet[2733]: E1216 12:59:01.093747 2733 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a1e005aba40b92b6b535f27749befc1d6b673992f6be99218c7d1669f1ee9fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8b9b958b-znds7" Dec 16 12:59:01.093843 kubelet[2733]: E1216 
12:59:01.093787 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-8b9b958b-znds7_calico-system(63bfc57c-01f9-4eda-a8ed-3f2fe04dfe53)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-8b9b958b-znds7_calico-system(63bfc57c-01f9-4eda-a8ed-3f2fe04dfe53)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3a1e005aba40b92b6b535f27749befc1d6b673992f6be99218c7d1669f1ee9fa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-8b9b958b-znds7" podUID="63bfc57c-01f9-4eda-a8ed-3f2fe04dfe53" Dec 16 12:59:01.112262 containerd[1589]: time="2025-12-16T12:59:01.112116871Z" level=error msg="Failed to destroy network for sandbox \"789a0caba14785f9e4a1e0d98f00b1295e91fc9a2923d29edb3172437fd4a52a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:59:01.113662 containerd[1589]: time="2025-12-16T12:59:01.113600251Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pmgkj,Uid:4d76e126-0792-4c26-bff3-5168fb16b8b9,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"789a0caba14785f9e4a1e0d98f00b1295e91fc9a2923d29edb3172437fd4a52a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:59:01.113983 kubelet[2733]: E1216 12:59:01.113933 2733 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"789a0caba14785f9e4a1e0d98f00b1295e91fc9a2923d29edb3172437fd4a52a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:59:01.114044 kubelet[2733]: E1216 12:59:01.114008 2733 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"789a0caba14785f9e4a1e0d98f00b1295e91fc9a2923d29edb3172437fd4a52a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-pmgkj" Dec 16 12:59:01.114069 kubelet[2733]: E1216 12:59:01.114034 2733 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"789a0caba14785f9e4a1e0d98f00b1295e91fc9a2923d29edb3172437fd4a52a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-pmgkj" Dec 16 12:59:01.114165 kubelet[2733]: E1216 12:59:01.114131 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-pmgkj_kube-system(4d76e126-0792-4c26-bff3-5168fb16b8b9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-674b8bbfcf-pmgkj_kube-system(4d76e126-0792-4c26-bff3-5168fb16b8b9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"789a0caba14785f9e4a1e0d98f00b1295e91fc9a2923d29edb3172437fd4a52a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-pmgkj" podUID="4d76e126-0792-4c26-bff3-5168fb16b8b9" Dec 16 12:59:01.119315 containerd[1589]: time="2025-12-16T12:59:01.119256828Z" level=error msg="Failed to destroy network for sandbox \"a4c72f5a7ef3b32506dd98cf907278cb73096ffae998999ca6d9d9f6e6edca69\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:59:01.120506 containerd[1589]: time="2025-12-16T12:59:01.120458608Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-44tcf,Uid:4557e1c6-8a81-439a-bd50-8cd81381c4a7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4c72f5a7ef3b32506dd98cf907278cb73096ffae998999ca6d9d9f6e6edca69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:59:01.120725 kubelet[2733]: E1216 12:59:01.120684 2733 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4c72f5a7ef3b32506dd98cf907278cb73096ffae998999ca6d9d9f6e6edca69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:59:01.120786 kubelet[2733]: E1216 12:59:01.120760 2733 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4c72f5a7ef3b32506dd98cf907278cb73096ffae998999ca6d9d9f6e6edca69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-44tcf" Dec 16 12:59:01.120786 kubelet[2733]: E1216 12:59:01.120783 2733 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4c72f5a7ef3b32506dd98cf907278cb73096ffae998999ca6d9d9f6e6edca69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-44tcf" Dec 16 12:59:01.120865 kubelet[2733]: E1216 12:59:01.120838 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-44tcf_calico-system(4557e1c6-8a81-439a-bd50-8cd81381c4a7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-44tcf_calico-system(4557e1c6-8a81-439a-bd50-8cd81381c4a7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a4c72f5a7ef3b32506dd98cf907278cb73096ffae998999ca6d9d9f6e6edca69\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-44tcf" podUID="4557e1c6-8a81-439a-bd50-8cd81381c4a7" Dec 16 12:59:01.408454 containerd[1589]: time="2025-12-16T12:59:01.408333138Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 16 12:59:01.933325 systemd[1]: Created slice kubepods-besteffort-pod326be917_499c_401a_aae1_40840ab247ef.slice - libcontainer container kubepods-besteffort-pod326be917_499c_401a_aae1_40840ab247ef.slice. Dec 16 12:59:01.935430 containerd[1589]: time="2025-12-16T12:59:01.935393641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jv88p,Uid:326be917-499c-401a-aae1-40840ab247ef,Namespace:calico-system,Attempt:0,}" Dec 16 12:59:01.944066 systemd[1]: run-netns-cni\x2d49657559\x2db54f\x2d8520\x2df0c1\x2dcb78b88e0854.mount: Deactivated successfully. Dec 16 12:59:01.944181 systemd[1]: run-netns-cni\x2dc213b903\x2db92d\x2d07eb\x2d5c63\x2dba7ce5f60425.mount: Deactivated successfully. Dec 16 12:59:01.944261 systemd[1]: run-netns-cni\x2d188abc59\x2defaa\x2d5f03\x2d7639\x2d3a23a462d3fd.mount: Deactivated successfully. Dec 16 12:59:01.944355 systemd[1]: run-netns-cni\x2da004af81\x2d59df\x2db3c1\x2da2cd\x2dc068ce7c3f44.mount: Deactivated successfully. Dec 16 12:59:01.982461 containerd[1589]: time="2025-12-16T12:59:01.982411174Z" level=error msg="Failed to destroy network for sandbox \"0231731939637bf4a58b9cc1cd1c3757db57cac22137c3cd959202e012d84bd3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:59:01.983886 containerd[1589]: time="2025-12-16T12:59:01.983857784Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jv88p,Uid:326be917-499c-401a-aae1-40840ab247ef,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0231731939637bf4a58b9cc1cd1c3757db57cac22137c3cd959202e012d84bd3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:59:01.984529 kubelet[2733]: E1216 12:59:01.984098 2733 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0231731939637bf4a58b9cc1cd1c3757db57cac22137c3cd959202e012d84bd3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:59:01.984529 kubelet[2733]: E1216 12:59:01.984174 2733 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0231731939637bf4a58b9cc1cd1c3757db57cac22137c3cd959202e012d84bd3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jv88p" Dec 16 12:59:01.984529 kubelet[2733]: E1216 12:59:01.984205 2733 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0231731939637bf4a58b9cc1cd1c3757db57cac22137c3cd959202e012d84bd3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jv88p" Dec 16 12:59:01.984671 kubelet[2733]: E1216 12:59:01.984258 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jv88p_calico-system(326be917-499c-401a-aae1-40840ab247ef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jv88p_calico-system(326be917-499c-401a-aae1-40840ab247ef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0231731939637bf4a58b9cc1cd1c3757db57cac22137c3cd959202e012d84bd3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jv88p" podUID="326be917-499c-401a-aae1-40840ab247ef" Dec 16 12:59:01.984736 systemd[1]: run-netns-cni\x2d63045083\x2def10\x2d6eb7\x2d828c\x2de583bf9063cd.mount: Deactivated successfully. Dec 16 12:59:07.543659 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2890611515.mount: Deactivated successfully. Dec 16 12:59:08.440700 containerd[1589]: time="2025-12-16T12:59:08.440623227Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:59:08.441870 containerd[1589]: time="2025-12-16T12:59:08.441826817Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Dec 16 12:59:08.443214 containerd[1589]: time="2025-12-16T12:59:08.443161906Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:59:08.445388 containerd[1589]: time="2025-12-16T12:59:08.445326642Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:59:08.445813 containerd[1589]: time="2025-12-16T12:59:08.445763833Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 7.037391382s" Dec 16 12:59:08.445813 containerd[1589]: time="2025-12-16T12:59:08.445796605Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Dec 16 12:59:08.460713 containerd[1589]: time="2025-12-16T12:59:08.460660546Z" level=info msg="CreateContainer within sandbox \"4ea381b02a13e946046440a7b8ae3c3025ea65b9051edcb521be0012e0e1137c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 16 12:59:08.472475 containerd[1589]: time="2025-12-16T12:59:08.472418272Z" level=info msg="Container cf53a1555ea840e256cfdc56cc96f261234e51e5c8a5a1c8f22b8b735e54ab19: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:59:08.486847 containerd[1589]: time="2025-12-16T12:59:08.486792203Z" level=info msg="CreateContainer within sandbox \"4ea381b02a13e946046440a7b8ae3c3025ea65b9051edcb521be0012e0e1137c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id 
\"cf53a1555ea840e256cfdc56cc96f261234e51e5c8a5a1c8f22b8b735e54ab19\"" Dec 16 12:59:08.487395 containerd[1589]: time="2025-12-16T12:59:08.487365941Z" level=info msg="StartContainer for \"cf53a1555ea840e256cfdc56cc96f261234e51e5c8a5a1c8f22b8b735e54ab19\"" Dec 16 12:59:08.489107 containerd[1589]: time="2025-12-16T12:59:08.489075602Z" level=info msg="connecting to shim cf53a1555ea840e256cfdc56cc96f261234e51e5c8a5a1c8f22b8b735e54ab19" address="unix:///run/containerd/s/94a9945282ddac822c2af9d75dd6bb61fad2e084c25f3731e9343d8d19f5f7e5" protocol=ttrpc version=3 Dec 16 12:59:08.517684 systemd[1]: Started cri-containerd-cf53a1555ea840e256cfdc56cc96f261234e51e5c8a5a1c8f22b8b735e54ab19.scope - libcontainer container cf53a1555ea840e256cfdc56cc96f261234e51e5c8a5a1c8f22b8b735e54ab19. Dec 16 12:59:08.638124 containerd[1589]: time="2025-12-16T12:59:08.638083087Z" level=info msg="StartContainer for \"cf53a1555ea840e256cfdc56cc96f261234e51e5c8a5a1c8f22b8b735e54ab19\" returns successfully" Dec 16 12:59:08.713799 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 16 12:59:08.714063 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Dec 16 12:59:08.826098 kubelet[2733]: I1216 12:59:08.825476 2733 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7b1aa4d7-170e-4aba-a4ce-0095dc4bfb6c-whisker-backend-key-pair\") pod \"7b1aa4d7-170e-4aba-a4ce-0095dc4bfb6c\" (UID: \"7b1aa4d7-170e-4aba-a4ce-0095dc4bfb6c\") " Dec 16 12:59:08.826098 kubelet[2733]: I1216 12:59:08.825706 2733 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b1aa4d7-170e-4aba-a4ce-0095dc4bfb6c-whisker-ca-bundle\") pod \"7b1aa4d7-170e-4aba-a4ce-0095dc4bfb6c\" (UID: \"7b1aa4d7-170e-4aba-a4ce-0095dc4bfb6c\") " Dec 16 12:59:08.826098 kubelet[2733]: I1216 12:59:08.825731 2733 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6ssj\" (UniqueName: \"kubernetes.io/projected/7b1aa4d7-170e-4aba-a4ce-0095dc4bfb6c-kube-api-access-x6ssj\") pod \"7b1aa4d7-170e-4aba-a4ce-0095dc4bfb6c\" (UID: \"7b1aa4d7-170e-4aba-a4ce-0095dc4bfb6c\") " Dec 16 12:59:08.826961 kubelet[2733]: I1216 12:59:08.826935 2733 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b1aa4d7-170e-4aba-a4ce-0095dc4bfb6c-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "7b1aa4d7-170e-4aba-a4ce-0095dc4bfb6c" (UID: "7b1aa4d7-170e-4aba-a4ce-0095dc4bfb6c"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 16 12:59:08.832532 kubelet[2733]: I1216 12:59:08.832457 2733 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b1aa4d7-170e-4aba-a4ce-0095dc4bfb6c-kube-api-access-x6ssj" (OuterVolumeSpecName: "kube-api-access-x6ssj") pod "7b1aa4d7-170e-4aba-a4ce-0095dc4bfb6c" (UID: "7b1aa4d7-170e-4aba-a4ce-0095dc4bfb6c"). InnerVolumeSpecName "kube-api-access-x6ssj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 12:59:08.834275 systemd[1]: var-lib-kubelet-pods-7b1aa4d7\x2d170e\x2d4aba\x2da4ce\x2d0095dc4bfb6c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dx6ssj.mount: Deactivated successfully. 
Dec 16 12:59:08.834395 systemd[1]: var-lib-kubelet-pods-7b1aa4d7\x2d170e\x2d4aba\x2da4ce\x2d0095dc4bfb6c-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Dec 16 12:59:08.835591 kubelet[2733]: I1216 12:59:08.835571 2733 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b1aa4d7-170e-4aba-a4ce-0095dc4bfb6c-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "7b1aa4d7-170e-4aba-a4ce-0095dc4bfb6c" (UID: "7b1aa4d7-170e-4aba-a4ce-0095dc4bfb6c"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 16 12:59:08.926839 kubelet[2733]: I1216 12:59:08.926797 2733 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7b1aa4d7-170e-4aba-a4ce-0095dc4bfb6c-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Dec 16 12:59:08.926839 kubelet[2733]: I1216 12:59:08.926831 2733 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b1aa4d7-170e-4aba-a4ce-0095dc4bfb6c-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Dec 16 12:59:08.926839 kubelet[2733]: I1216 12:59:08.926840 2733 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-x6ssj\" (UniqueName: \"kubernetes.io/projected/7b1aa4d7-170e-4aba-a4ce-0095dc4bfb6c-kube-api-access-x6ssj\") on node \"localhost\" DevicePath \"\"" Dec 16 12:59:09.439801 systemd[1]: Removed slice kubepods-besteffort-pod7b1aa4d7_170e_4aba_a4ce_0095dc4bfb6c.slice - libcontainer container kubepods-besteffort-pod7b1aa4d7_170e_4aba_a4ce_0095dc4bfb6c.slice. Dec 16 12:59:09.451758 kubelet[2733]: I1216 12:59:09.451279 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-mvc4d" podStartSLOduration=2.358282383 podStartE2EDuration="20.451262075s" podCreationTimestamp="2025-12-16 12:58:49 +0000 UTC" firstStartedPulling="2025-12-16 12:58:50.353591568 +0000 UTC m=+20.519294338" lastFinishedPulling="2025-12-16 12:59:08.44657126 +0000 UTC m=+38.612274030" observedRunningTime="2025-12-16 12:59:09.450793634 +0000 UTC m=+39.616496405" watchObservedRunningTime="2025-12-16 12:59:09.451262075 +0000 UTC m=+39.616964845" Dec 16 12:59:09.513437 systemd[1]: Created slice kubepods-besteffort-pod65a51f87_62b9_4a77_b80b_0e907e9d663e.slice - libcontainer container kubepods-besteffort-pod65a51f87_62b9_4a77_b80b_0e907e9d663e.slice. 
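The startup-latency figures for calico-node-mvc4d add up from the timestamps in that same entry: the pod was created at 12:58:49 and observed running at 12:59:09.45, giving podStartE2EDuration of about 20.451 s; the image pull ran from 12:58:50.354 to 12:59:08.447, i.e. 18.092979692 s; and podStartSLOduration = 20.451262075 - 18.092979692 = 2.358282383 s, which is the end-to-end time with the image pull excluded, exactly the value the tracker reports.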
Dec 16 12:59:09.533520 kubelet[2733]: I1216 12:59:09.532758 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65a51f87-62b9-4a77-b80b-0e907e9d663e-whisker-ca-bundle\") pod \"whisker-9c6786f75-2gcf6\" (UID: \"65a51f87-62b9-4a77-b80b-0e907e9d663e\") " pod="calico-system/whisker-9c6786f75-2gcf6" Dec 16 12:59:09.533520 kubelet[2733]: I1216 12:59:09.532840 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/65a51f87-62b9-4a77-b80b-0e907e9d663e-whisker-backend-key-pair\") pod \"whisker-9c6786f75-2gcf6\" (UID: \"65a51f87-62b9-4a77-b80b-0e907e9d663e\") " pod="calico-system/whisker-9c6786f75-2gcf6" Dec 16 12:59:09.533520 kubelet[2733]: I1216 12:59:09.532867 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69gf9\" (UniqueName: \"kubernetes.io/projected/65a51f87-62b9-4a77-b80b-0e907e9d663e-kube-api-access-69gf9\") pod \"whisker-9c6786f75-2gcf6\" (UID: \"65a51f87-62b9-4a77-b80b-0e907e9d663e\") " pod="calico-system/whisker-9c6786f75-2gcf6" Dec 16 12:59:09.818977 containerd[1589]: time="2025-12-16T12:59:09.818830577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-9c6786f75-2gcf6,Uid:65a51f87-62b9-4a77-b80b-0e907e9d663e,Namespace:calico-system,Attempt:0,}" Dec 16 12:59:09.948873 kubelet[2733]: I1216 12:59:09.948829 2733 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b1aa4d7-170e-4aba-a4ce-0095dc4bfb6c" path="/var/lib/kubelet/pods/7b1aa4d7-170e-4aba-a4ce-0095dc4bfb6c/volumes" Dec 16 12:59:09.992011 systemd-networkd[1477]: cali261bd03ef89: Link UP Dec 16 12:59:09.993254 systemd-networkd[1477]: cali261bd03ef89: Gained carrier Dec 16 12:59:10.032573 containerd[1589]: 2025-12-16 12:59:09.845 [INFO][3885] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 12:59:10.032573 containerd[1589]: 2025-12-16 12:59:09.866 [INFO][3885] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--9c6786f75--2gcf6-eth0 whisker-9c6786f75- calico-system 65a51f87-62b9-4a77-b80b-0e907e9d663e 923 0 2025-12-16 12:59:09 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:9c6786f75 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-9c6786f75-2gcf6 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali261bd03ef89 [] [] }} ContainerID="923a99d7d79afa5d3be78d958c83d0129b44060b65df8ae4e4a650757c5efb65" Namespace="calico-system" Pod="whisker-9c6786f75-2gcf6" WorkloadEndpoint="localhost-k8s-whisker--9c6786f75--2gcf6-" Dec 16 12:59:10.032573 containerd[1589]: 2025-12-16 12:59:09.866 [INFO][3885] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="923a99d7d79afa5d3be78d958c83d0129b44060b65df8ae4e4a650757c5efb65" Namespace="calico-system" Pod="whisker-9c6786f75-2gcf6" WorkloadEndpoint="localhost-k8s-whisker--9c6786f75--2gcf6-eth0" Dec 16 12:59:10.032573 containerd[1589]: 2025-12-16 12:59:09.934 [INFO][3899] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="923a99d7d79afa5d3be78d958c83d0129b44060b65df8ae4e4a650757c5efb65" HandleID="k8s-pod-network.923a99d7d79afa5d3be78d958c83d0129b44060b65df8ae4e4a650757c5efb65" 
Workload="localhost-k8s-whisker--9c6786f75--2gcf6-eth0" Dec 16 12:59:10.032924 containerd[1589]: 2025-12-16 12:59:09.936 [INFO][3899] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="923a99d7d79afa5d3be78d958c83d0129b44060b65df8ae4e4a650757c5efb65" HandleID="k8s-pod-network.923a99d7d79afa5d3be78d958c83d0129b44060b65df8ae4e4a650757c5efb65" Workload="localhost-k8s-whisker--9c6786f75--2gcf6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00044b750), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-9c6786f75-2gcf6", "timestamp":"2025-12-16 12:59:09.934836726 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:59:10.032924 containerd[1589]: 2025-12-16 12:59:09.936 [INFO][3899] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:59:10.032924 containerd[1589]: 2025-12-16 12:59:09.936 [INFO][3899] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:59:10.032924 containerd[1589]: 2025-12-16 12:59:09.936 [INFO][3899] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 12:59:10.032924 containerd[1589]: 2025-12-16 12:59:09.945 [INFO][3899] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.923a99d7d79afa5d3be78d958c83d0129b44060b65df8ae4e4a650757c5efb65" host="localhost" Dec 16 12:59:10.032924 containerd[1589]: 2025-12-16 12:59:09.951 [INFO][3899] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 12:59:10.032924 containerd[1589]: 2025-12-16 12:59:09.955 [INFO][3899] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 12:59:10.032924 containerd[1589]: 2025-12-16 12:59:09.957 [INFO][3899] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 12:59:10.032924 containerd[1589]: 2025-12-16 12:59:09.959 [INFO][3899] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 12:59:10.032924 containerd[1589]: 2025-12-16 12:59:09.959 [INFO][3899] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.923a99d7d79afa5d3be78d958c83d0129b44060b65df8ae4e4a650757c5efb65" host="localhost" Dec 16 12:59:10.033228 containerd[1589]: 2025-12-16 12:59:09.960 [INFO][3899] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.923a99d7d79afa5d3be78d958c83d0129b44060b65df8ae4e4a650757c5efb65 Dec 16 12:59:10.033228 containerd[1589]: 2025-12-16 12:59:09.963 [INFO][3899] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.923a99d7d79afa5d3be78d958c83d0129b44060b65df8ae4e4a650757c5efb65" host="localhost" Dec 16 12:59:10.033228 containerd[1589]: 2025-12-16 12:59:09.969 [INFO][3899] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.923a99d7d79afa5d3be78d958c83d0129b44060b65df8ae4e4a650757c5efb65" host="localhost" Dec 16 12:59:10.033228 containerd[1589]: 2025-12-16 12:59:09.969 [INFO][3899] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.923a99d7d79afa5d3be78d958c83d0129b44060b65df8ae4e4a650757c5efb65" host="localhost" Dec 16 12:59:10.033228 containerd[1589]: 2025-12-16 12:59:09.969 [INFO][3899] ipam/ipam_plugin.go 398: 
Released host-wide IPAM lock. Dec 16 12:59:10.033228 containerd[1589]: 2025-12-16 12:59:09.969 [INFO][3899] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="923a99d7d79afa5d3be78d958c83d0129b44060b65df8ae4e4a650757c5efb65" HandleID="k8s-pod-network.923a99d7d79afa5d3be78d958c83d0129b44060b65df8ae4e4a650757c5efb65" Workload="localhost-k8s-whisker--9c6786f75--2gcf6-eth0" Dec 16 12:59:10.033402 containerd[1589]: 2025-12-16 12:59:09.978 [INFO][3885] cni-plugin/k8s.go 418: Populated endpoint ContainerID="923a99d7d79afa5d3be78d958c83d0129b44060b65df8ae4e4a650757c5efb65" Namespace="calico-system" Pod="whisker-9c6786f75-2gcf6" WorkloadEndpoint="localhost-k8s-whisker--9c6786f75--2gcf6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--9c6786f75--2gcf6-eth0", GenerateName:"whisker-9c6786f75-", Namespace:"calico-system", SelfLink:"", UID:"65a51f87-62b9-4a77-b80b-0e907e9d663e", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 59, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"9c6786f75", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-9c6786f75-2gcf6", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali261bd03ef89", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:59:10.033402 containerd[1589]: 2025-12-16 12:59:09.978 [INFO][3885] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="923a99d7d79afa5d3be78d958c83d0129b44060b65df8ae4e4a650757c5efb65" Namespace="calico-system" Pod="whisker-9c6786f75-2gcf6" WorkloadEndpoint="localhost-k8s-whisker--9c6786f75--2gcf6-eth0" Dec 16 12:59:10.033535 containerd[1589]: 2025-12-16 12:59:09.978 [INFO][3885] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali261bd03ef89 ContainerID="923a99d7d79afa5d3be78d958c83d0129b44060b65df8ae4e4a650757c5efb65" Namespace="calico-system" Pod="whisker-9c6786f75-2gcf6" WorkloadEndpoint="localhost-k8s-whisker--9c6786f75--2gcf6-eth0" Dec 16 12:59:10.033535 containerd[1589]: 2025-12-16 12:59:09.997 [INFO][3885] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="923a99d7d79afa5d3be78d958c83d0129b44060b65df8ae4e4a650757c5efb65" Namespace="calico-system" Pod="whisker-9c6786f75-2gcf6" WorkloadEndpoint="localhost-k8s-whisker--9c6786f75--2gcf6-eth0" Dec 16 12:59:10.033598 containerd[1589]: 2025-12-16 12:59:09.997 [INFO][3885] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="923a99d7d79afa5d3be78d958c83d0129b44060b65df8ae4e4a650757c5efb65" Namespace="calico-system" Pod="whisker-9c6786f75-2gcf6" WorkloadEndpoint="localhost-k8s-whisker--9c6786f75--2gcf6-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--9c6786f75--2gcf6-eth0", GenerateName:"whisker-9c6786f75-", Namespace:"calico-system", SelfLink:"", UID:"65a51f87-62b9-4a77-b80b-0e907e9d663e", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 59, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"9c6786f75", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"923a99d7d79afa5d3be78d958c83d0129b44060b65df8ae4e4a650757c5efb65", Pod:"whisker-9c6786f75-2gcf6", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali261bd03ef89", MAC:"e2:f8:0c:5a:1f:3e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:59:10.033662 containerd[1589]: 2025-12-16 12:59:10.023 [INFO][3885] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="923a99d7d79afa5d3be78d958c83d0129b44060b65df8ae4e4a650757c5efb65" Namespace="calico-system" Pod="whisker-9c6786f75-2gcf6" WorkloadEndpoint="localhost-k8s-whisker--9c6786f75--2gcf6-eth0" Dec 16 12:59:10.555699 systemd-networkd[1477]: vxlan.calico: Link UP Dec 16 12:59:10.555710 systemd-networkd[1477]: vxlan.calico: Gained carrier Dec 16 12:59:10.712072 containerd[1589]: time="2025-12-16T12:59:10.711970889Z" level=info msg="connecting to shim 923a99d7d79afa5d3be78d958c83d0129b44060b65df8ae4e4a650757c5efb65" address="unix:///run/containerd/s/b01772666640de2b6dbd7b6cb7a6d3197607a878c1fb918934757dbdaa7a916b" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:59:10.746850 systemd[1]: Started cri-containerd-923a99d7d79afa5d3be78d958c83d0129b44060b65df8ae4e4a650757c5efb65.scope - libcontainer container 923a99d7d79afa5d3be78d958c83d0129b44060b65df8ae4e4a650757c5efb65. 
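The IPAM trace above shows the order of operations for the whisker pod's address: the plugin takes the host-wide IPAM lock, finds the host's existing affinity for the block 192.168.88.128/26, assigns the first free address 192.168.88.129/32, and writes the WorkloadEndpoint (interface cali261bd03ef89) back to the datastore before releasing the lock. A /26 block holds 2^(32-26) = 64 addresses. A small standard-library Go check of those block bounds (illustrative only):

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // Block the host holds an affinity for, per the ipam log lines above.
        block := netip.MustParsePrefix("192.168.88.128/26")

        // 2^(32-26) = 64 addresses in the block.
        fmt.Println("addresses in block:", 1<<(32-block.Bits()))

        // First address handed out in this log is the one after the block base.
        fmt.Println("first assignment:", block.Addr().Next()) // 192.168.88.129

        // Both pod IPs assigned in this section fall inside the block:
        // .129 for whisker here, .130 for csi-node-driver further down.
        fmt.Println(block.Contains(netip.MustParseAddr("192.168.88.129"))) // true
        fmt.Println(block.Contains(netip.MustParseAddr("192.168.88.130"))) // true
    }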
Dec 16 12:59:10.769772 systemd-resolved[1388]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 12:59:10.990756 containerd[1589]: time="2025-12-16T12:59:10.990678954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-9c6786f75-2gcf6,Uid:65a51f87-62b9-4a77-b80b-0e907e9d663e,Namespace:calico-system,Attempt:0,} returns sandbox id \"923a99d7d79afa5d3be78d958c83d0129b44060b65df8ae4e4a650757c5efb65\"" Dec 16 12:59:10.999549 containerd[1589]: time="2025-12-16T12:59:10.999481486Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:59:11.374282 containerd[1589]: time="2025-12-16T12:59:11.374220781Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:59:11.375593 containerd[1589]: time="2025-12-16T12:59:11.375532513Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 16 12:59:11.382715 systemd-networkd[1477]: cali261bd03ef89: Gained IPv6LL Dec 16 12:59:11.384884 containerd[1589]: time="2025-12-16T12:59:11.384833670Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:59:11.385124 kubelet[2733]: E1216 12:59:11.385063 2733 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:59:11.385467 kubelet[2733]: E1216 12:59:11.385135 2733 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:59:11.388223 kubelet[2733]: E1216 12:59:11.388162 2733 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:10f79adb9b2f48408b83dddbfc7d047e,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-69gf9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-9c6786f75-2gcf6_calico-system(65a51f87-62b9-4a77-b80b-0e907e9d663e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:59:11.390156 containerd[1589]: time="2025-12-16T12:59:11.390113615Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 12:59:11.714898 containerd[1589]: time="2025-12-16T12:59:11.714717744Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:59:11.715997 containerd[1589]: time="2025-12-16T12:59:11.715953333Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:59:11.716064 containerd[1589]: time="2025-12-16T12:59:11.716045787Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 16 12:59:11.716275 kubelet[2733]: E1216 12:59:11.716222 2733 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:59:11.716342 kubelet[2733]: E1216 12:59:11.716289 2733 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:59:11.716550 kubelet[2733]: E1216 12:59:11.716452 2733 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-69gf9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-9c6786f75-2gcf6_calico-system(65a51f87-62b9-4a77-b80b-0e907e9d663e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:59:11.718632 kubelet[2733]: E1216 12:59:11.718563 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9c6786f75-2gcf6" podUID="65a51f87-62b9-4a77-b80b-0e907e9d663e" Dec 16 12:59:12.150699 systemd-networkd[1477]: vxlan.calico: Gained IPv6LL Dec 16 12:59:12.450953 kubelet[2733]: E1216 12:59:12.450781 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9c6786f75-2gcf6" podUID="65a51f87-62b9-4a77-b80b-0e907e9d663e" Dec 16 12:59:12.927452 containerd[1589]: time="2025-12-16T12:59:12.927405926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jv88p,Uid:326be917-499c-401a-aae1-40840ab247ef,Namespace:calico-system,Attempt:0,}" Dec 16 12:59:13.059331 systemd-networkd[1477]: cali4a691fbe643: Link UP Dec 16 12:59:13.060222 systemd-networkd[1477]: cali4a691fbe643: Gained carrier Dec 16 12:59:13.077285 containerd[1589]: 2025-12-16 12:59:12.967 [INFO][4162] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--jv88p-eth0 csi-node-driver- calico-system 326be917-499c-401a-aae1-40840ab247ef 739 0 2025-12-16 12:58:50 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-jv88p eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali4a691fbe643 [] [] }} ContainerID="13f96aaca68bb8c6fd53b9a1186acc6b842b9ddb1669607c415f762c5dcdd52f" Namespace="calico-system" Pod="csi-node-driver-jv88p" WorkloadEndpoint="localhost-k8s-csi--node--driver--jv88p-" Dec 16 12:59:13.077285 containerd[1589]: 2025-12-16 12:59:12.967 [INFO][4162] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="13f96aaca68bb8c6fd53b9a1186acc6b842b9ddb1669607c415f762c5dcdd52f" Namespace="calico-system" Pod="csi-node-driver-jv88p" WorkloadEndpoint="localhost-k8s-csi--node--driver--jv88p-eth0" Dec 16 12:59:13.077285 containerd[1589]: 2025-12-16 12:59:12.995 [INFO][4178] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="13f96aaca68bb8c6fd53b9a1186acc6b842b9ddb1669607c415f762c5dcdd52f" HandleID="k8s-pod-network.13f96aaca68bb8c6fd53b9a1186acc6b842b9ddb1669607c415f762c5dcdd52f" Workload="localhost-k8s-csi--node--driver--jv88p-eth0" Dec 16 12:59:13.077601 containerd[1589]: 2025-12-16 12:59:12.996 [INFO][4178] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="13f96aaca68bb8c6fd53b9a1186acc6b842b9ddb1669607c415f762c5dcdd52f" HandleID="k8s-pod-network.13f96aaca68bb8c6fd53b9a1186acc6b842b9ddb1669607c415f762c5dcdd52f" Workload="localhost-k8s-csi--node--driver--jv88p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004941a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-jv88p", "timestamp":"2025-12-16 12:59:12.995952535 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:59:13.077601 containerd[1589]: 2025-12-16 12:59:12.996 [INFO][4178] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:59:13.077601 containerd[1589]: 2025-12-16 12:59:12.996 [INFO][4178] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:59:13.077601 containerd[1589]: 2025-12-16 12:59:12.996 [INFO][4178] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 12:59:13.077601 containerd[1589]: 2025-12-16 12:59:13.008 [INFO][4178] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.13f96aaca68bb8c6fd53b9a1186acc6b842b9ddb1669607c415f762c5dcdd52f" host="localhost" Dec 16 12:59:13.077601 containerd[1589]: 2025-12-16 12:59:13.021 [INFO][4178] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 12:59:13.077601 containerd[1589]: 2025-12-16 12:59:13.031 [INFO][4178] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 12:59:13.077601 containerd[1589]: 2025-12-16 12:59:13.038 [INFO][4178] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 12:59:13.077601 containerd[1589]: 2025-12-16 12:59:13.041 [INFO][4178] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 12:59:13.077601 containerd[1589]: 2025-12-16 12:59:13.041 [INFO][4178] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.13f96aaca68bb8c6fd53b9a1186acc6b842b9ddb1669607c415f762c5dcdd52f" host="localhost" Dec 16 12:59:13.077822 containerd[1589]: 2025-12-16 12:59:13.042 [INFO][4178] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.13f96aaca68bb8c6fd53b9a1186acc6b842b9ddb1669607c415f762c5dcdd52f Dec 16 12:59:13.077822 containerd[1589]: 2025-12-16 12:59:13.047 [INFO][4178] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.13f96aaca68bb8c6fd53b9a1186acc6b842b9ddb1669607c415f762c5dcdd52f" host="localhost" Dec 16 12:59:13.077822 containerd[1589]: 2025-12-16 12:59:13.053 [INFO][4178] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.13f96aaca68bb8c6fd53b9a1186acc6b842b9ddb1669607c415f762c5dcdd52f" host="localhost" Dec 16 12:59:13.077822 containerd[1589]: 2025-12-16 12:59:13.053 [INFO][4178] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.13f96aaca68bb8c6fd53b9a1186acc6b842b9ddb1669607c415f762c5dcdd52f" host="localhost" Dec 16 12:59:13.077822 containerd[1589]: 2025-12-16 12:59:13.053 [INFO][4178] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:59:13.077822 containerd[1589]: 2025-12-16 12:59:13.053 [INFO][4178] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="13f96aaca68bb8c6fd53b9a1186acc6b842b9ddb1669607c415f762c5dcdd52f" HandleID="k8s-pod-network.13f96aaca68bb8c6fd53b9a1186acc6b842b9ddb1669607c415f762c5dcdd52f" Workload="localhost-k8s-csi--node--driver--jv88p-eth0" Dec 16 12:59:13.077941 containerd[1589]: 2025-12-16 12:59:13.057 [INFO][4162] cni-plugin/k8s.go 418: Populated endpoint ContainerID="13f96aaca68bb8c6fd53b9a1186acc6b842b9ddb1669607c415f762c5dcdd52f" Namespace="calico-system" Pod="csi-node-driver-jv88p" WorkloadEndpoint="localhost-k8s-csi--node--driver--jv88p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--jv88p-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"326be917-499c-401a-aae1-40840ab247ef", ResourceVersion:"739", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 58, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-jv88p", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4a691fbe643", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:59:13.077992 containerd[1589]: 2025-12-16 12:59:13.057 [INFO][4162] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="13f96aaca68bb8c6fd53b9a1186acc6b842b9ddb1669607c415f762c5dcdd52f" Namespace="calico-system" Pod="csi-node-driver-jv88p" WorkloadEndpoint="localhost-k8s-csi--node--driver--jv88p-eth0" Dec 16 12:59:13.077992 containerd[1589]: 2025-12-16 12:59:13.057 [INFO][4162] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4a691fbe643 ContainerID="13f96aaca68bb8c6fd53b9a1186acc6b842b9ddb1669607c415f762c5dcdd52f" Namespace="calico-system" Pod="csi-node-driver-jv88p" WorkloadEndpoint="localhost-k8s-csi--node--driver--jv88p-eth0" Dec 16 12:59:13.077992 containerd[1589]: 2025-12-16 12:59:13.060 [INFO][4162] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="13f96aaca68bb8c6fd53b9a1186acc6b842b9ddb1669607c415f762c5dcdd52f" Namespace="calico-system" Pod="csi-node-driver-jv88p" WorkloadEndpoint="localhost-k8s-csi--node--driver--jv88p-eth0" Dec 16 12:59:13.078072 containerd[1589]: 2025-12-16 12:59:13.060 [INFO][4162] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="13f96aaca68bb8c6fd53b9a1186acc6b842b9ddb1669607c415f762c5dcdd52f" Namespace="calico-system" Pod="csi-node-driver-jv88p" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--jv88p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--jv88p-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"326be917-499c-401a-aae1-40840ab247ef", ResourceVersion:"739", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 58, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"13f96aaca68bb8c6fd53b9a1186acc6b842b9ddb1669607c415f762c5dcdd52f", Pod:"csi-node-driver-jv88p", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4a691fbe643", MAC:"76:ef:6d:7a:ee:b3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:59:13.078124 containerd[1589]: 2025-12-16 12:59:13.071 [INFO][4162] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="13f96aaca68bb8c6fd53b9a1186acc6b842b9ddb1669607c415f762c5dcdd52f" Namespace="calico-system" Pod="csi-node-driver-jv88p" WorkloadEndpoint="localhost-k8s-csi--node--driver--jv88p-eth0" Dec 16 12:59:13.107634 containerd[1589]: time="2025-12-16T12:59:13.107589324Z" level=info msg="connecting to shim 13f96aaca68bb8c6fd53b9a1186acc6b842b9ddb1669607c415f762c5dcdd52f" address="unix:///run/containerd/s/fbb56de16b2d456a496e17bd235835af9a463ea7b237315f9cd8531af76afdfe" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:59:13.145639 systemd[1]: Started cri-containerd-13f96aaca68bb8c6fd53b9a1186acc6b842b9ddb1669607c415f762c5dcdd52f.scope - libcontainer container 13f96aaca68bb8c6fd53b9a1186acc6b842b9ddb1669607c415f762c5dcdd52f. Dec 16 12:59:13.158301 systemd-resolved[1388]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 12:59:13.177800 containerd[1589]: time="2025-12-16T12:59:13.177533855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jv88p,Uid:326be917-499c-401a-aae1-40840ab247ef,Namespace:calico-system,Attempt:0,} returns sandbox id \"13f96aaca68bb8c6fd53b9a1186acc6b842b9ddb1669607c415f762c5dcdd52f\"" Dec 16 12:59:13.181877 containerd[1589]: time="2025-12-16T12:59:13.181845621Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 12:59:13.305189 systemd[1]: Started sshd@7-10.0.0.34:22-10.0.0.1:46666.service - OpenSSH per-connection server daemon (10.0.0.1:46666). 
Dec 16 12:59:13.378667 sshd[4246]: Accepted publickey for core from 10.0.0.1 port 46666 ssh2: RSA SHA256:U5R1V2YL8grSrRz9PVaqQqCOxjm1DLwZWE3rSGcR9eI Dec 16 12:59:13.381171 sshd-session[4246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:59:13.389296 systemd-logind[1561]: New session 8 of user core. Dec 16 12:59:13.395888 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 16 12:59:13.515988 containerd[1589]: time="2025-12-16T12:59:13.515833926Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:59:13.517628 containerd[1589]: time="2025-12-16T12:59:13.517092709Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:59:13.517628 containerd[1589]: time="2025-12-16T12:59:13.517135389Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 16 12:59:13.517714 kubelet[2733]: E1216 12:59:13.517556 2733 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:59:13.517714 kubelet[2733]: E1216 12:59:13.517621 2733 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:59:13.518099 kubelet[2733]: E1216 12:59:13.517921 2733 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hlz62,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jv88p_calico-system(326be917-499c-401a-aae1-40840ab247ef): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 12:59:13.520833 containerd[1589]: time="2025-12-16T12:59:13.520797194Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 12:59:13.539198 sshd[4249]: Connection closed by 10.0.0.1 port 46666 Dec 16 12:59:13.539549 sshd-session[4246]: pam_unix(sshd:session): session closed for user core Dec 16 12:59:13.543201 systemd[1]: sshd@7-10.0.0.34:22-10.0.0.1:46666.service: Deactivated successfully. Dec 16 12:59:13.545111 systemd[1]: session-8.scope: Deactivated successfully. Dec 16 12:59:13.545896 systemd-logind[1561]: Session 8 logged out. Waiting for processes to exit. Dec 16 12:59:13.547091 systemd-logind[1561]: Removed session 8. 
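Each of the failed pulls recorded above is the kubelet asking containerd, via CRI, to resolve a ghcr.io/flatcar/calico tag that the registry answers with 404 Not Found, which then surfaces as ErrImagePull and later ImagePullBackOff. As a rough illustration only (not part of this log, and assuming the containerd Go client and the default socket path shown here), the following sketch performs the same pull directly against the node's containerd in the k8s.io namespace, so the "not found" error can be reproduced outside the kubelet; the image reference is one of the tags that failed above.

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Talk to the same containerd instance the kubelet uses on this node
	// (socket path is the usual default and may differ per setup).
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Kubernetes-managed images live in containerd's "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// One of the references that failed above; a tag the registry does not
	// serve comes back as the same "not found" error the kubelet reports.
	ref := "ghcr.io/flatcar/calico/csi:v3.30.4"

	img, err := client.Pull(ctx, ref, containerd.WithPullUnpack)
	if err != nil {
		log.Fatalf("pull %s: %v", ref, err)
	}
	fmt.Println("pulled", img.Name())
}

Running a sketch like this from the node makes it easy to tell a missing tag (registry-side 404, as here) apart from node-side problems such as credentials or proxy configuration.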
Dec 16 12:59:13.844053 containerd[1589]: time="2025-12-16T12:59:13.843916353Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:59:13.898068 containerd[1589]: time="2025-12-16T12:59:13.897984426Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 16 12:59:13.898068 containerd[1589]: time="2025-12-16T12:59:13.898052023Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:59:13.898400 kubelet[2733]: E1216 12:59:13.898337 2733 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:59:13.898473 kubelet[2733]: E1216 12:59:13.898415 2733 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:59:13.898642 kubelet[2733]: E1216 12:59:13.898593 2733 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hlz62,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jv88p_calico-system(326be917-499c-401a-aae1-40840ab247ef): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 12:59:13.900645 kubelet[2733]: E1216 12:59:13.900569 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jv88p" podUID="326be917-499c-401a-aae1-40840ab247ef" Dec 16 12:59:13.928813 containerd[1589]: time="2025-12-16T12:59:13.928759817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pmgkj,Uid:4d76e126-0792-4c26-bff3-5168fb16b8b9,Namespace:kube-system,Attempt:0,}" Dec 16 12:59:13.929313 containerd[1589]: time="2025-12-16T12:59:13.928758525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8b9b958b-znds7,Uid:63bfc57c-01f9-4eda-a8ed-3f2fe04dfe53,Namespace:calico-system,Attempt:0,}" Dec 16 12:59:14.042873 systemd-networkd[1477]: cali5032fce735d: Link 
UP Dec 16 12:59:14.044007 systemd-networkd[1477]: cali5032fce735d: Gained carrier Dec 16 12:59:14.056519 containerd[1589]: 2025-12-16 12:59:13.976 [INFO][4282] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--8b9b958b--znds7-eth0 calico-kube-controllers-8b9b958b- calico-system 63bfc57c-01f9-4eda-a8ed-3f2fe04dfe53 857 0 2025-12-16 12:58:50 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:8b9b958b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-8b9b958b-znds7 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali5032fce735d [] [] }} ContainerID="6323797014aad748c6a270c9788628e3576e0084a419dc8cf837d7ecbc9829eb" Namespace="calico-system" Pod="calico-kube-controllers-8b9b958b-znds7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8b9b958b--znds7-" Dec 16 12:59:14.056519 containerd[1589]: 2025-12-16 12:59:13.976 [INFO][4282] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6323797014aad748c6a270c9788628e3576e0084a419dc8cf837d7ecbc9829eb" Namespace="calico-system" Pod="calico-kube-controllers-8b9b958b-znds7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8b9b958b--znds7-eth0" Dec 16 12:59:14.056519 containerd[1589]: 2025-12-16 12:59:14.004 [INFO][4301] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6323797014aad748c6a270c9788628e3576e0084a419dc8cf837d7ecbc9829eb" HandleID="k8s-pod-network.6323797014aad748c6a270c9788628e3576e0084a419dc8cf837d7ecbc9829eb" Workload="localhost-k8s-calico--kube--controllers--8b9b958b--znds7-eth0" Dec 16 12:59:14.056689 containerd[1589]: 2025-12-16 12:59:14.004 [INFO][4301] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6323797014aad748c6a270c9788628e3576e0084a419dc8cf837d7ecbc9829eb" HandleID="k8s-pod-network.6323797014aad748c6a270c9788628e3576e0084a419dc8cf837d7ecbc9829eb" Workload="localhost-k8s-calico--kube--controllers--8b9b958b--znds7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000325390), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-8b9b958b-znds7", "timestamp":"2025-12-16 12:59:14.004597542 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:59:14.056689 containerd[1589]: 2025-12-16 12:59:14.004 [INFO][4301] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:59:14.056689 containerd[1589]: 2025-12-16 12:59:14.004 [INFO][4301] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:59:14.056689 containerd[1589]: 2025-12-16 12:59:14.004 [INFO][4301] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 12:59:14.056689 containerd[1589]: 2025-12-16 12:59:14.011 [INFO][4301] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6323797014aad748c6a270c9788628e3576e0084a419dc8cf837d7ecbc9829eb" host="localhost" Dec 16 12:59:14.056689 containerd[1589]: 2025-12-16 12:59:14.015 [INFO][4301] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 12:59:14.056689 containerd[1589]: 2025-12-16 12:59:14.019 [INFO][4301] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 12:59:14.056689 containerd[1589]: 2025-12-16 12:59:14.021 [INFO][4301] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 12:59:14.056689 containerd[1589]: 2025-12-16 12:59:14.023 [INFO][4301] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 12:59:14.056689 containerd[1589]: 2025-12-16 12:59:14.023 [INFO][4301] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6323797014aad748c6a270c9788628e3576e0084a419dc8cf837d7ecbc9829eb" host="localhost" Dec 16 12:59:14.057008 containerd[1589]: 2025-12-16 12:59:14.025 [INFO][4301] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6323797014aad748c6a270c9788628e3576e0084a419dc8cf837d7ecbc9829eb Dec 16 12:59:14.057008 containerd[1589]: 2025-12-16 12:59:14.028 [INFO][4301] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6323797014aad748c6a270c9788628e3576e0084a419dc8cf837d7ecbc9829eb" host="localhost" Dec 16 12:59:14.057008 containerd[1589]: 2025-12-16 12:59:14.035 [INFO][4301] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.6323797014aad748c6a270c9788628e3576e0084a419dc8cf837d7ecbc9829eb" host="localhost" Dec 16 12:59:14.057008 containerd[1589]: 2025-12-16 12:59:14.035 [INFO][4301] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.6323797014aad748c6a270c9788628e3576e0084a419dc8cf837d7ecbc9829eb" host="localhost" Dec 16 12:59:14.057008 containerd[1589]: 2025-12-16 12:59:14.035 [INFO][4301] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:59:14.057008 containerd[1589]: 2025-12-16 12:59:14.035 [INFO][4301] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="6323797014aad748c6a270c9788628e3576e0084a419dc8cf837d7ecbc9829eb" HandleID="k8s-pod-network.6323797014aad748c6a270c9788628e3576e0084a419dc8cf837d7ecbc9829eb" Workload="localhost-k8s-calico--kube--controllers--8b9b958b--znds7-eth0" Dec 16 12:59:14.057158 containerd[1589]: 2025-12-16 12:59:14.039 [INFO][4282] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6323797014aad748c6a270c9788628e3576e0084a419dc8cf837d7ecbc9829eb" Namespace="calico-system" Pod="calico-kube-controllers-8b9b958b-znds7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8b9b958b--znds7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--8b9b958b--znds7-eth0", GenerateName:"calico-kube-controllers-8b9b958b-", Namespace:"calico-system", SelfLink:"", UID:"63bfc57c-01f9-4eda-a8ed-3f2fe04dfe53", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 58, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8b9b958b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-8b9b958b-znds7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5032fce735d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:59:14.057215 containerd[1589]: 2025-12-16 12:59:14.039 [INFO][4282] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="6323797014aad748c6a270c9788628e3576e0084a419dc8cf837d7ecbc9829eb" Namespace="calico-system" Pod="calico-kube-controllers-8b9b958b-znds7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8b9b958b--znds7-eth0" Dec 16 12:59:14.057215 containerd[1589]: 2025-12-16 12:59:14.039 [INFO][4282] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5032fce735d ContainerID="6323797014aad748c6a270c9788628e3576e0084a419dc8cf837d7ecbc9829eb" Namespace="calico-system" Pod="calico-kube-controllers-8b9b958b-znds7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8b9b958b--znds7-eth0" Dec 16 12:59:14.057215 containerd[1589]: 2025-12-16 12:59:14.043 [INFO][4282] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6323797014aad748c6a270c9788628e3576e0084a419dc8cf837d7ecbc9829eb" Namespace="calico-system" Pod="calico-kube-controllers-8b9b958b-znds7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8b9b958b--znds7-eth0" Dec 16 12:59:14.057280 containerd[1589]: 2025-12-16 12:59:14.043 [INFO][4282] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="6323797014aad748c6a270c9788628e3576e0084a419dc8cf837d7ecbc9829eb" Namespace="calico-system" Pod="calico-kube-controllers-8b9b958b-znds7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8b9b958b--znds7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--8b9b958b--znds7-eth0", GenerateName:"calico-kube-controllers-8b9b958b-", Namespace:"calico-system", SelfLink:"", UID:"63bfc57c-01f9-4eda-a8ed-3f2fe04dfe53", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 58, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8b9b958b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6323797014aad748c6a270c9788628e3576e0084a419dc8cf837d7ecbc9829eb", Pod:"calico-kube-controllers-8b9b958b-znds7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5032fce735d", MAC:"82:d8:df:d8:0a:13", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:59:14.057331 containerd[1589]: 2025-12-16 12:59:14.051 [INFO][4282] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6323797014aad748c6a270c9788628e3576e0084a419dc8cf837d7ecbc9829eb" Namespace="calico-system" Pod="calico-kube-controllers-8b9b958b-znds7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8b9b958b--znds7-eth0" Dec 16 12:59:14.080446 containerd[1589]: time="2025-12-16T12:59:14.080406772Z" level=info msg="connecting to shim 6323797014aad748c6a270c9788628e3576e0084a419dc8cf837d7ecbc9829eb" address="unix:///run/containerd/s/f445c091ec8bf23ee570d6898fa6b12f675379cd17368cbf02a8df749aeaa385" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:59:14.112646 systemd[1]: Started cri-containerd-6323797014aad748c6a270c9788628e3576e0084a419dc8cf837d7ecbc9829eb.scope - libcontainer container 6323797014aad748c6a270c9788628e3576e0084a419dc8cf837d7ecbc9829eb. 
Dec 16 12:59:14.133592 systemd-resolved[1388]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 12:59:14.156061 systemd-networkd[1477]: cali5f58650021f: Link UP Dec 16 12:59:14.156350 systemd-networkd[1477]: cali5f58650021f: Gained carrier Dec 16 12:59:14.171118 containerd[1589]: 2025-12-16 12:59:13.970 [INFO][4270] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--pmgkj-eth0 coredns-674b8bbfcf- kube-system 4d76e126-0792-4c26-bff3-5168fb16b8b9 853 0 2025-12-16 12:58:35 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-pmgkj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5f58650021f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="51d6ae19f0906a7fa1946eee16ef166481ff1e11bdd649387ad8d097686d0695" Namespace="kube-system" Pod="coredns-674b8bbfcf-pmgkj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--pmgkj-" Dec 16 12:59:14.171118 containerd[1589]: 2025-12-16 12:59:13.972 [INFO][4270] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="51d6ae19f0906a7fa1946eee16ef166481ff1e11bdd649387ad8d097686d0695" Namespace="kube-system" Pod="coredns-674b8bbfcf-pmgkj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--pmgkj-eth0" Dec 16 12:59:14.171118 containerd[1589]: 2025-12-16 12:59:14.008 [INFO][4299] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="51d6ae19f0906a7fa1946eee16ef166481ff1e11bdd649387ad8d097686d0695" HandleID="k8s-pod-network.51d6ae19f0906a7fa1946eee16ef166481ff1e11bdd649387ad8d097686d0695" Workload="localhost-k8s-coredns--674b8bbfcf--pmgkj-eth0" Dec 16 12:59:14.171561 containerd[1589]: 2025-12-16 12:59:14.008 [INFO][4299] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="51d6ae19f0906a7fa1946eee16ef166481ff1e11bdd649387ad8d097686d0695" HandleID="k8s-pod-network.51d6ae19f0906a7fa1946eee16ef166481ff1e11bdd649387ad8d097686d0695" Workload="localhost-k8s-coredns--674b8bbfcf--pmgkj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7250), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-pmgkj", "timestamp":"2025-12-16 12:59:14.008040867 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:59:14.171561 containerd[1589]: 2025-12-16 12:59:14.008 [INFO][4299] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:59:14.171561 containerd[1589]: 2025-12-16 12:59:14.035 [INFO][4299] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:59:14.171561 containerd[1589]: 2025-12-16 12:59:14.035 [INFO][4299] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 12:59:14.171561 containerd[1589]: 2025-12-16 12:59:14.113 [INFO][4299] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.51d6ae19f0906a7fa1946eee16ef166481ff1e11bdd649387ad8d097686d0695" host="localhost" Dec 16 12:59:14.171561 containerd[1589]: 2025-12-16 12:59:14.119 [INFO][4299] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 12:59:14.171561 containerd[1589]: 2025-12-16 12:59:14.123 [INFO][4299] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 12:59:14.171561 containerd[1589]: 2025-12-16 12:59:14.125 [INFO][4299] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 12:59:14.171561 containerd[1589]: 2025-12-16 12:59:14.127 [INFO][4299] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 12:59:14.171561 containerd[1589]: 2025-12-16 12:59:14.127 [INFO][4299] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.51d6ae19f0906a7fa1946eee16ef166481ff1e11bdd649387ad8d097686d0695" host="localhost" Dec 16 12:59:14.172535 containerd[1589]: 2025-12-16 12:59:14.129 [INFO][4299] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.51d6ae19f0906a7fa1946eee16ef166481ff1e11bdd649387ad8d097686d0695 Dec 16 12:59:14.172535 containerd[1589]: 2025-12-16 12:59:14.136 [INFO][4299] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.51d6ae19f0906a7fa1946eee16ef166481ff1e11bdd649387ad8d097686d0695" host="localhost" Dec 16 12:59:14.172535 containerd[1589]: 2025-12-16 12:59:14.146 [INFO][4299] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.51d6ae19f0906a7fa1946eee16ef166481ff1e11bdd649387ad8d097686d0695" host="localhost" Dec 16 12:59:14.172535 containerd[1589]: 2025-12-16 12:59:14.146 [INFO][4299] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.51d6ae19f0906a7fa1946eee16ef166481ff1e11bdd649387ad8d097686d0695" host="localhost" Dec 16 12:59:14.172535 containerd[1589]: 2025-12-16 12:59:14.146 [INFO][4299] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:59:14.172535 containerd[1589]: 2025-12-16 12:59:14.146 [INFO][4299] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="51d6ae19f0906a7fa1946eee16ef166481ff1e11bdd649387ad8d097686d0695" HandleID="k8s-pod-network.51d6ae19f0906a7fa1946eee16ef166481ff1e11bdd649387ad8d097686d0695" Workload="localhost-k8s-coredns--674b8bbfcf--pmgkj-eth0" Dec 16 12:59:14.172728 containerd[1589]: 2025-12-16 12:59:14.152 [INFO][4270] cni-plugin/k8s.go 418: Populated endpoint ContainerID="51d6ae19f0906a7fa1946eee16ef166481ff1e11bdd649387ad8d097686d0695" Namespace="kube-system" Pod="coredns-674b8bbfcf-pmgkj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--pmgkj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--pmgkj-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4d76e126-0792-4c26-bff3-5168fb16b8b9", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 58, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-pmgkj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5f58650021f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:59:14.172812 containerd[1589]: 2025-12-16 12:59:14.152 [INFO][4270] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="51d6ae19f0906a7fa1946eee16ef166481ff1e11bdd649387ad8d097686d0695" Namespace="kube-system" Pod="coredns-674b8bbfcf-pmgkj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--pmgkj-eth0" Dec 16 12:59:14.172812 containerd[1589]: 2025-12-16 12:59:14.152 [INFO][4270] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5f58650021f ContainerID="51d6ae19f0906a7fa1946eee16ef166481ff1e11bdd649387ad8d097686d0695" Namespace="kube-system" Pod="coredns-674b8bbfcf-pmgkj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--pmgkj-eth0" Dec 16 12:59:14.172812 containerd[1589]: 2025-12-16 12:59:14.155 [INFO][4270] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="51d6ae19f0906a7fa1946eee16ef166481ff1e11bdd649387ad8d097686d0695" Namespace="kube-system" Pod="coredns-674b8bbfcf-pmgkj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--pmgkj-eth0" Dec 16 12:59:14.173030 
containerd[1589]: 2025-12-16 12:59:14.156 [INFO][4270] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="51d6ae19f0906a7fa1946eee16ef166481ff1e11bdd649387ad8d097686d0695" Namespace="kube-system" Pod="coredns-674b8bbfcf-pmgkj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--pmgkj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--pmgkj-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4d76e126-0792-4c26-bff3-5168fb16b8b9", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 58, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"51d6ae19f0906a7fa1946eee16ef166481ff1e11bdd649387ad8d097686d0695", Pod:"coredns-674b8bbfcf-pmgkj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5f58650021f", MAC:"e6:97:33:4a:38:03", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:59:14.173030 containerd[1589]: 2025-12-16 12:59:14.166 [INFO][4270] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="51d6ae19f0906a7fa1946eee16ef166481ff1e11bdd649387ad8d097686d0695" Namespace="kube-system" Pod="coredns-674b8bbfcf-pmgkj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--pmgkj-eth0" Dec 16 12:59:14.194665 containerd[1589]: time="2025-12-16T12:59:14.194591998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8b9b958b-znds7,Uid:63bfc57c-01f9-4eda-a8ed-3f2fe04dfe53,Namespace:calico-system,Attempt:0,} returns sandbox id \"6323797014aad748c6a270c9788628e3576e0084a419dc8cf837d7ecbc9829eb\"" Dec 16 12:59:14.197438 containerd[1589]: time="2025-12-16T12:59:14.197396983Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:59:14.215266 containerd[1589]: time="2025-12-16T12:59:14.215209995Z" level=info msg="connecting to shim 51d6ae19f0906a7fa1946eee16ef166481ff1e11bdd649387ad8d097686d0695" address="unix:///run/containerd/s/ce700ac26a9173a487cf2f1a652bcfa4b06be6c5c068abfc94ba20cca3a02825" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:59:14.245792 systemd[1]: Started cri-containerd-51d6ae19f0906a7fa1946eee16ef166481ff1e11bdd649387ad8d097686d0695.scope - libcontainer container 
51d6ae19f0906a7fa1946eee16ef166481ff1e11bdd649387ad8d097686d0695. Dec 16 12:59:14.266474 systemd-resolved[1388]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 12:59:14.297800 containerd[1589]: time="2025-12-16T12:59:14.297756765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pmgkj,Uid:4d76e126-0792-4c26-bff3-5168fb16b8b9,Namespace:kube-system,Attempt:0,} returns sandbox id \"51d6ae19f0906a7fa1946eee16ef166481ff1e11bdd649387ad8d097686d0695\"" Dec 16 12:59:14.304241 containerd[1589]: time="2025-12-16T12:59:14.304186195Z" level=info msg="CreateContainer within sandbox \"51d6ae19f0906a7fa1946eee16ef166481ff1e11bdd649387ad8d097686d0695\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 12:59:14.344857 containerd[1589]: time="2025-12-16T12:59:14.344797876Z" level=info msg="Container 6064033555253d478ad04cd77d3620d52311ecf1437d1d8831a88983d05b47a4: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:59:14.353937 containerd[1589]: time="2025-12-16T12:59:14.353891960Z" level=info msg="CreateContainer within sandbox \"51d6ae19f0906a7fa1946eee16ef166481ff1e11bdd649387ad8d097686d0695\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6064033555253d478ad04cd77d3620d52311ecf1437d1d8831a88983d05b47a4\"" Dec 16 12:59:14.354451 containerd[1589]: time="2025-12-16T12:59:14.354414050Z" level=info msg="StartContainer for \"6064033555253d478ad04cd77d3620d52311ecf1437d1d8831a88983d05b47a4\"" Dec 16 12:59:14.355509 containerd[1589]: time="2025-12-16T12:59:14.355470833Z" level=info msg="connecting to shim 6064033555253d478ad04cd77d3620d52311ecf1437d1d8831a88983d05b47a4" address="unix:///run/containerd/s/ce700ac26a9173a487cf2f1a652bcfa4b06be6c5c068abfc94ba20cca3a02825" protocol=ttrpc version=3 Dec 16 12:59:14.379772 systemd[1]: Started cri-containerd-6064033555253d478ad04cd77d3620d52311ecf1437d1d8831a88983d05b47a4.scope - libcontainer container 6064033555253d478ad04cd77d3620d52311ecf1437d1d8831a88983d05b47a4. 
Dec 16 12:59:14.424116 containerd[1589]: time="2025-12-16T12:59:14.423940966Z" level=info msg="StartContainer for \"6064033555253d478ad04cd77d3620d52311ecf1437d1d8831a88983d05b47a4\" returns successfully" Dec 16 12:59:14.464806 kubelet[2733]: E1216 12:59:14.464745 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jv88p" podUID="326be917-499c-401a-aae1-40840ab247ef" Dec 16 12:59:14.474746 kubelet[2733]: I1216 12:59:14.474664 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-pmgkj" podStartSLOduration=39.474637531 podStartE2EDuration="39.474637531s" podCreationTimestamp="2025-12-16 12:58:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:59:14.473215462 +0000 UTC m=+44.638918232" watchObservedRunningTime="2025-12-16 12:59:14.474637531 +0000 UTC m=+44.640340301" Dec 16 12:59:14.554603 containerd[1589]: time="2025-12-16T12:59:14.554537471Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:59:14.555929 containerd[1589]: time="2025-12-16T12:59:14.555896442Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:59:14.556018 containerd[1589]: time="2025-12-16T12:59:14.555979067Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 16 12:59:14.556225 kubelet[2733]: E1216 12:59:14.556174 2733 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:59:14.556671 kubelet[2733]: E1216 12:59:14.556227 2733 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:59:14.556671 kubelet[2733]: E1216 12:59:14.556396 2733 
kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6b25q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-8b9b958b-znds7_calico-system(63bfc57c-01f9-4eda-a8ed-3f2fe04dfe53): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:59:14.557851 kubelet[2733]: E1216 12:59:14.557788 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8b9b958b-znds7" podUID="63bfc57c-01f9-4eda-a8ed-3f2fe04dfe53" Dec 16 12:59:14.927202 containerd[1589]: 
time="2025-12-16T12:59:14.927140938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nqc85,Uid:ae19310b-b8cf-4899-a55a-6c349ce6f20f,Namespace:kube-system,Attempt:0,}" Dec 16 12:59:15.020200 systemd-networkd[1477]: cali14929ac3018: Link UP Dec 16 12:59:15.020874 systemd-networkd[1477]: cali14929ac3018: Gained carrier Dec 16 12:59:15.031630 systemd-networkd[1477]: cali4a691fbe643: Gained IPv6LL Dec 16 12:59:15.039731 containerd[1589]: 2025-12-16 12:59:14.963 [INFO][4459] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--nqc85-eth0 coredns-674b8bbfcf- kube-system ae19310b-b8cf-4899-a55a-6c349ce6f20f 852 0 2025-12-16 12:58:35 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-nqc85 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali14929ac3018 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="21bee76c969f021973a754bd8ad011c0c50b9602903c967aa521c5febc495518" Namespace="kube-system" Pod="coredns-674b8bbfcf-nqc85" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nqc85-" Dec 16 12:59:15.039731 containerd[1589]: 2025-12-16 12:59:14.963 [INFO][4459] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="21bee76c969f021973a754bd8ad011c0c50b9602903c967aa521c5febc495518" Namespace="kube-system" Pod="coredns-674b8bbfcf-nqc85" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nqc85-eth0" Dec 16 12:59:15.039731 containerd[1589]: 2025-12-16 12:59:14.986 [INFO][4474] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="21bee76c969f021973a754bd8ad011c0c50b9602903c967aa521c5febc495518" HandleID="k8s-pod-network.21bee76c969f021973a754bd8ad011c0c50b9602903c967aa521c5febc495518" Workload="localhost-k8s-coredns--674b8bbfcf--nqc85-eth0" Dec 16 12:59:15.039731 containerd[1589]: 2025-12-16 12:59:14.986 [INFO][4474] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="21bee76c969f021973a754bd8ad011c0c50b9602903c967aa521c5febc495518" HandleID="k8s-pod-network.21bee76c969f021973a754bd8ad011c0c50b9602903c967aa521c5febc495518" Workload="localhost-k8s-coredns--674b8bbfcf--nqc85-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139700), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-nqc85", "timestamp":"2025-12-16 12:59:14.986390259 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:59:15.039731 containerd[1589]: 2025-12-16 12:59:14.986 [INFO][4474] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:59:15.039731 containerd[1589]: 2025-12-16 12:59:14.986 [INFO][4474] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:59:15.039731 containerd[1589]: 2025-12-16 12:59:14.986 [INFO][4474] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 12:59:15.039731 containerd[1589]: 2025-12-16 12:59:14.992 [INFO][4474] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.21bee76c969f021973a754bd8ad011c0c50b9602903c967aa521c5febc495518" host="localhost" Dec 16 12:59:15.039731 containerd[1589]: 2025-12-16 12:59:14.996 [INFO][4474] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 12:59:15.039731 containerd[1589]: 2025-12-16 12:59:15.002 [INFO][4474] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 12:59:15.039731 containerd[1589]: 2025-12-16 12:59:15.003 [INFO][4474] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 12:59:15.039731 containerd[1589]: 2025-12-16 12:59:15.005 [INFO][4474] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 12:59:15.039731 containerd[1589]: 2025-12-16 12:59:15.005 [INFO][4474] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.21bee76c969f021973a754bd8ad011c0c50b9602903c967aa521c5febc495518" host="localhost" Dec 16 12:59:15.039731 containerd[1589]: 2025-12-16 12:59:15.006 [INFO][4474] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.21bee76c969f021973a754bd8ad011c0c50b9602903c967aa521c5febc495518 Dec 16 12:59:15.039731 containerd[1589]: 2025-12-16 12:59:15.010 [INFO][4474] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.21bee76c969f021973a754bd8ad011c0c50b9602903c967aa521c5febc495518" host="localhost" Dec 16 12:59:15.039731 containerd[1589]: 2025-12-16 12:59:15.014 [INFO][4474] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.21bee76c969f021973a754bd8ad011c0c50b9602903c967aa521c5febc495518" host="localhost" Dec 16 12:59:15.039731 containerd[1589]: 2025-12-16 12:59:15.014 [INFO][4474] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.21bee76c969f021973a754bd8ad011c0c50b9602903c967aa521c5febc495518" host="localhost" Dec 16 12:59:15.039731 containerd[1589]: 2025-12-16 12:59:15.014 [INFO][4474] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:59:15.039731 containerd[1589]: 2025-12-16 12:59:15.014 [INFO][4474] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="21bee76c969f021973a754bd8ad011c0c50b9602903c967aa521c5febc495518" HandleID="k8s-pod-network.21bee76c969f021973a754bd8ad011c0c50b9602903c967aa521c5febc495518" Workload="localhost-k8s-coredns--674b8bbfcf--nqc85-eth0" Dec 16 12:59:15.040532 containerd[1589]: 2025-12-16 12:59:15.017 [INFO][4459] cni-plugin/k8s.go 418: Populated endpoint ContainerID="21bee76c969f021973a754bd8ad011c0c50b9602903c967aa521c5febc495518" Namespace="kube-system" Pod="coredns-674b8bbfcf-nqc85" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nqc85-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--nqc85-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ae19310b-b8cf-4899-a55a-6c349ce6f20f", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 58, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-nqc85", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali14929ac3018", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:59:15.040532 containerd[1589]: 2025-12-16 12:59:15.018 [INFO][4459] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="21bee76c969f021973a754bd8ad011c0c50b9602903c967aa521c5febc495518" Namespace="kube-system" Pod="coredns-674b8bbfcf-nqc85" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nqc85-eth0" Dec 16 12:59:15.040532 containerd[1589]: 2025-12-16 12:59:15.018 [INFO][4459] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali14929ac3018 ContainerID="21bee76c969f021973a754bd8ad011c0c50b9602903c967aa521c5febc495518" Namespace="kube-system" Pod="coredns-674b8bbfcf-nqc85" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nqc85-eth0" Dec 16 12:59:15.040532 containerd[1589]: 2025-12-16 12:59:15.020 [INFO][4459] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="21bee76c969f021973a754bd8ad011c0c50b9602903c967aa521c5febc495518" Namespace="kube-system" Pod="coredns-674b8bbfcf-nqc85" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nqc85-eth0" Dec 16 12:59:15.040532 
containerd[1589]: 2025-12-16 12:59:15.021 [INFO][4459] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="21bee76c969f021973a754bd8ad011c0c50b9602903c967aa521c5febc495518" Namespace="kube-system" Pod="coredns-674b8bbfcf-nqc85" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nqc85-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--nqc85-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ae19310b-b8cf-4899-a55a-6c349ce6f20f", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 58, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"21bee76c969f021973a754bd8ad011c0c50b9602903c967aa521c5febc495518", Pod:"coredns-674b8bbfcf-nqc85", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali14929ac3018", MAC:"36:06:dd:1f:e2:56", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:59:15.040532 containerd[1589]: 2025-12-16 12:59:15.036 [INFO][4459] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="21bee76c969f021973a754bd8ad011c0c50b9602903c967aa521c5febc495518" Namespace="kube-system" Pod="coredns-674b8bbfcf-nqc85" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nqc85-eth0" Dec 16 12:59:15.063030 containerd[1589]: time="2025-12-16T12:59:15.062967903Z" level=info msg="connecting to shim 21bee76c969f021973a754bd8ad011c0c50b9602903c967aa521c5febc495518" address="unix:///run/containerd/s/5b877eb7ec4de0d420c845a880535803c3ff659978acf9673ed2794149bb60cd" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:59:15.092658 systemd[1]: Started cri-containerd-21bee76c969f021973a754bd8ad011c0c50b9602903c967aa521c5febc495518.scope - libcontainer container 21bee76c969f021973a754bd8ad011c0c50b9602903c967aa521c5febc495518. 
Dec 16 12:59:15.106040 systemd-resolved[1388]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 12:59:15.136844 containerd[1589]: time="2025-12-16T12:59:15.136798758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nqc85,Uid:ae19310b-b8cf-4899-a55a-6c349ce6f20f,Namespace:kube-system,Attempt:0,} returns sandbox id \"21bee76c969f021973a754bd8ad011c0c50b9602903c967aa521c5febc495518\"" Dec 16 12:59:15.142316 containerd[1589]: time="2025-12-16T12:59:15.142274718Z" level=info msg="CreateContainer within sandbox \"21bee76c969f021973a754bd8ad011c0c50b9602903c967aa521c5febc495518\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 12:59:15.151919 containerd[1589]: time="2025-12-16T12:59:15.151876433Z" level=info msg="Container f60191621a0d1f049af6837b435508155eb65f449031e39f293189f8b73b04be: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:59:15.159177 containerd[1589]: time="2025-12-16T12:59:15.159140649Z" level=info msg="CreateContainer within sandbox \"21bee76c969f021973a754bd8ad011c0c50b9602903c967aa521c5febc495518\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f60191621a0d1f049af6837b435508155eb65f449031e39f293189f8b73b04be\"" Dec 16 12:59:15.159690 containerd[1589]: time="2025-12-16T12:59:15.159654454Z" level=info msg="StartContainer for \"f60191621a0d1f049af6837b435508155eb65f449031e39f293189f8b73b04be\"" Dec 16 12:59:15.160614 containerd[1589]: time="2025-12-16T12:59:15.160590020Z" level=info msg="connecting to shim f60191621a0d1f049af6837b435508155eb65f449031e39f293189f8b73b04be" address="unix:///run/containerd/s/5b877eb7ec4de0d420c845a880535803c3ff659978acf9673ed2794149bb60cd" protocol=ttrpc version=3 Dec 16 12:59:15.184648 systemd[1]: Started cri-containerd-f60191621a0d1f049af6837b435508155eb65f449031e39f293189f8b73b04be.scope - libcontainer container f60191621a0d1f049af6837b435508155eb65f449031e39f293189f8b73b04be. 
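Two kubelet pod_startup_latency_tracker entries appear in this log, one earlier for coredns-674b8bbfcf-pmgkj and one just below for coredns-674b8bbfcf-nqc85. As a rough cross-check, the reported podStartSLOduration matches observedRunningTime minus podCreationTimestamp to within a few milliseconds; the sketch below redoes that arithmetic with timestamps copied from the pmgkj entry and is only an illustration, not kubelet code.

    # Cross-check of podStartSLOduration using values printed in the log entry above:
    # podCreationTimestamp 2025-12-16 12:58:35 UTC, observedRunningTime 12:59:14.473215 UTC.
    from datetime import datetime, timezone

    created = datetime(2025, 12, 16, 12, 58, 35, tzinfo=timezone.utc)
    observed_running = datetime(2025, 12, 16, 12, 59, 14, 473215, tzinfo=timezone.utc)

    print((observed_running - created).total_seconds())  # ~39.473 s vs. the reported 39.4746 s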
Dec 16 12:59:15.215277 containerd[1589]: time="2025-12-16T12:59:15.215225202Z" level=info msg="StartContainer for \"f60191621a0d1f049af6837b435508155eb65f449031e39f293189f8b73b04be\" returns successfully" Dec 16 12:59:15.286698 systemd-networkd[1477]: cali5f58650021f: Gained IPv6LL Dec 16 12:59:15.350661 systemd-networkd[1477]: cali5032fce735d: Gained IPv6LL Dec 16 12:59:15.465802 kubelet[2733]: E1216 12:59:15.465604 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8b9b958b-znds7" podUID="63bfc57c-01f9-4eda-a8ed-3f2fe04dfe53" Dec 16 12:59:15.506095 kubelet[2733]: I1216 12:59:15.506026 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-nqc85" podStartSLOduration=40.506007328 podStartE2EDuration="40.506007328s" podCreationTimestamp="2025-12-16 12:58:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:59:15.505927248 +0000 UTC m=+45.671630038" watchObservedRunningTime="2025-12-16 12:59:15.506007328 +0000 UTC m=+45.671710098" Dec 16 12:59:15.927825 containerd[1589]: time="2025-12-16T12:59:15.927680653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-44tcf,Uid:4557e1c6-8a81-439a-bd50-8cd81381c4a7,Namespace:calico-system,Attempt:0,}" Dec 16 12:59:15.927825 containerd[1589]: time="2025-12-16T12:59:15.927722863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-766db845cb-vmh89,Uid:14fa36c4-6467-4e58-88b1-8675a1ddf3eb,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:59:16.046234 systemd-networkd[1477]: calia52e264cd1f: Link UP Dec 16 12:59:16.047116 systemd-networkd[1477]: calia52e264cd1f: Gained carrier Dec 16 12:59:16.061121 containerd[1589]: 2025-12-16 12:59:15.978 [INFO][4583] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--766db845cb--vmh89-eth0 calico-apiserver-766db845cb- calico-apiserver 14fa36c4-6467-4e58-88b1-8675a1ddf3eb 856 0 2025-12-16 12:58:46 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:766db845cb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-766db845cb-vmh89 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia52e264cd1f [] [] }} ContainerID="a860883905c9a586d6f9c951aaa9ac22d4cbbfe5b01a6b091b5af588764871c0" Namespace="calico-apiserver" Pod="calico-apiserver-766db845cb-vmh89" WorkloadEndpoint="localhost-k8s-calico--apiserver--766db845cb--vmh89-" Dec 16 12:59:16.061121 containerd[1589]: 2025-12-16 12:59:15.978 [INFO][4583] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a860883905c9a586d6f9c951aaa9ac22d4cbbfe5b01a6b091b5af588764871c0" Namespace="calico-apiserver" Pod="calico-apiserver-766db845cb-vmh89" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--766db845cb--vmh89-eth0" Dec 16 12:59:16.061121 containerd[1589]: 2025-12-16 12:59:16.004 [INFO][4616] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a860883905c9a586d6f9c951aaa9ac22d4cbbfe5b01a6b091b5af588764871c0" HandleID="k8s-pod-network.a860883905c9a586d6f9c951aaa9ac22d4cbbfe5b01a6b091b5af588764871c0" Workload="localhost-k8s-calico--apiserver--766db845cb--vmh89-eth0" Dec 16 12:59:16.061121 containerd[1589]: 2025-12-16 12:59:16.004 [INFO][4616] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a860883905c9a586d6f9c951aaa9ac22d4cbbfe5b01a6b091b5af588764871c0" HandleID="k8s-pod-network.a860883905c9a586d6f9c951aaa9ac22d4cbbfe5b01a6b091b5af588764871c0" Workload="localhost-k8s-calico--apiserver--766db845cb--vmh89-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e4f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-766db845cb-vmh89", "timestamp":"2025-12-16 12:59:16.004467217 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:59:16.061121 containerd[1589]: 2025-12-16 12:59:16.004 [INFO][4616] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:59:16.061121 containerd[1589]: 2025-12-16 12:59:16.004 [INFO][4616] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:59:16.061121 containerd[1589]: 2025-12-16 12:59:16.004 [INFO][4616] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 12:59:16.061121 containerd[1589]: 2025-12-16 12:59:16.015 [INFO][4616] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a860883905c9a586d6f9c951aaa9ac22d4cbbfe5b01a6b091b5af588764871c0" host="localhost" Dec 16 12:59:16.061121 containerd[1589]: 2025-12-16 12:59:16.019 [INFO][4616] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 12:59:16.061121 containerd[1589]: 2025-12-16 12:59:16.023 [INFO][4616] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 12:59:16.061121 containerd[1589]: 2025-12-16 12:59:16.025 [INFO][4616] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 12:59:16.061121 containerd[1589]: 2025-12-16 12:59:16.027 [INFO][4616] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 12:59:16.061121 containerd[1589]: 2025-12-16 12:59:16.027 [INFO][4616] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a860883905c9a586d6f9c951aaa9ac22d4cbbfe5b01a6b091b5af588764871c0" host="localhost" Dec 16 12:59:16.061121 containerd[1589]: 2025-12-16 12:59:16.029 [INFO][4616] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a860883905c9a586d6f9c951aaa9ac22d4cbbfe5b01a6b091b5af588764871c0 Dec 16 12:59:16.061121 containerd[1589]: 2025-12-16 12:59:16.033 [INFO][4616] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a860883905c9a586d6f9c951aaa9ac22d4cbbfe5b01a6b091b5af588764871c0" host="localhost" Dec 16 12:59:16.061121 containerd[1589]: 2025-12-16 12:59:16.039 [INFO][4616] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.a860883905c9a586d6f9c951aaa9ac22d4cbbfe5b01a6b091b5af588764871c0" host="localhost" Dec 16 12:59:16.061121 containerd[1589]: 2025-12-16 12:59:16.039 [INFO][4616] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.a860883905c9a586d6f9c951aaa9ac22d4cbbfe5b01a6b091b5af588764871c0" host="localhost" Dec 16 12:59:16.061121 containerd[1589]: 2025-12-16 12:59:16.039 [INFO][4616] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:59:16.061121 containerd[1589]: 2025-12-16 12:59:16.039 [INFO][4616] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="a860883905c9a586d6f9c951aaa9ac22d4cbbfe5b01a6b091b5af588764871c0" HandleID="k8s-pod-network.a860883905c9a586d6f9c951aaa9ac22d4cbbfe5b01a6b091b5af588764871c0" Workload="localhost-k8s-calico--apiserver--766db845cb--vmh89-eth0" Dec 16 12:59:16.062102 containerd[1589]: 2025-12-16 12:59:16.042 [INFO][4583] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a860883905c9a586d6f9c951aaa9ac22d4cbbfe5b01a6b091b5af588764871c0" Namespace="calico-apiserver" Pod="calico-apiserver-766db845cb-vmh89" WorkloadEndpoint="localhost-k8s-calico--apiserver--766db845cb--vmh89-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--766db845cb--vmh89-eth0", GenerateName:"calico-apiserver-766db845cb-", Namespace:"calico-apiserver", SelfLink:"", UID:"14fa36c4-6467-4e58-88b1-8675a1ddf3eb", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 58, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"766db845cb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-766db845cb-vmh89", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia52e264cd1f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:59:16.062102 containerd[1589]: 2025-12-16 12:59:16.042 [INFO][4583] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="a860883905c9a586d6f9c951aaa9ac22d4cbbfe5b01a6b091b5af588764871c0" Namespace="calico-apiserver" Pod="calico-apiserver-766db845cb-vmh89" WorkloadEndpoint="localhost-k8s-calico--apiserver--766db845cb--vmh89-eth0" Dec 16 12:59:16.062102 containerd[1589]: 2025-12-16 12:59:16.042 [INFO][4583] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia52e264cd1f ContainerID="a860883905c9a586d6f9c951aaa9ac22d4cbbfe5b01a6b091b5af588764871c0" Namespace="calico-apiserver" Pod="calico-apiserver-766db845cb-vmh89" WorkloadEndpoint="localhost-k8s-calico--apiserver--766db845cb--vmh89-eth0" Dec 16 12:59:16.062102 containerd[1589]: 2025-12-16 
12:59:16.046 [INFO][4583] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a860883905c9a586d6f9c951aaa9ac22d4cbbfe5b01a6b091b5af588764871c0" Namespace="calico-apiserver" Pod="calico-apiserver-766db845cb-vmh89" WorkloadEndpoint="localhost-k8s-calico--apiserver--766db845cb--vmh89-eth0" Dec 16 12:59:16.062102 containerd[1589]: 2025-12-16 12:59:16.047 [INFO][4583] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a860883905c9a586d6f9c951aaa9ac22d4cbbfe5b01a6b091b5af588764871c0" Namespace="calico-apiserver" Pod="calico-apiserver-766db845cb-vmh89" WorkloadEndpoint="localhost-k8s-calico--apiserver--766db845cb--vmh89-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--766db845cb--vmh89-eth0", GenerateName:"calico-apiserver-766db845cb-", Namespace:"calico-apiserver", SelfLink:"", UID:"14fa36c4-6467-4e58-88b1-8675a1ddf3eb", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 58, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"766db845cb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a860883905c9a586d6f9c951aaa9ac22d4cbbfe5b01a6b091b5af588764871c0", Pod:"calico-apiserver-766db845cb-vmh89", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia52e264cd1f", MAC:"06:8d:f9:e2:07:7e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:59:16.062102 containerd[1589]: 2025-12-16 12:59:16.057 [INFO][4583] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a860883905c9a586d6f9c951aaa9ac22d4cbbfe5b01a6b091b5af588764871c0" Namespace="calico-apiserver" Pod="calico-apiserver-766db845cb-vmh89" WorkloadEndpoint="localhost-k8s-calico--apiserver--766db845cb--vmh89-eth0" Dec 16 12:59:16.087466 containerd[1589]: time="2025-12-16T12:59:16.087415166Z" level=info msg="connecting to shim a860883905c9a586d6f9c951aaa9ac22d4cbbfe5b01a6b091b5af588764871c0" address="unix:///run/containerd/s/89326d5b273743597b9cdcdd8fa4576ed92abbe91a074e71b05b8aacb1a59261" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:59:16.114662 systemd[1]: Started cri-containerd-a860883905c9a586d6f9c951aaa9ac22d4cbbfe5b01a6b091b5af588764871c0.scope - libcontainer container a860883905c9a586d6f9c951aaa9ac22d4cbbfe5b01a6b091b5af588764871c0. 
Dec 16 12:59:16.131004 systemd-resolved[1388]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 12:59:16.152130 systemd-networkd[1477]: cali4d107c5dd5a: Link UP Dec 16 12:59:16.153812 systemd-networkd[1477]: cali4d107c5dd5a: Gained carrier Dec 16 12:59:16.167315 containerd[1589]: 2025-12-16 12:59:15.971 [INFO][4584] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--44tcf-eth0 goldmane-666569f655- calico-system 4557e1c6-8a81-439a-bd50-8cd81381c4a7 855 0 2025-12-16 12:58:47 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-44tcf eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali4d107c5dd5a [] [] }} ContainerID="723fe041ecf67469fad76e5183871f6dbc0b9998ed7f9bee6d411feeb434b5ba" Namespace="calico-system" Pod="goldmane-666569f655-44tcf" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--44tcf-" Dec 16 12:59:16.167315 containerd[1589]: 2025-12-16 12:59:15.971 [INFO][4584] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="723fe041ecf67469fad76e5183871f6dbc0b9998ed7f9bee6d411feeb434b5ba" Namespace="calico-system" Pod="goldmane-666569f655-44tcf" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--44tcf-eth0" Dec 16 12:59:16.167315 containerd[1589]: 2025-12-16 12:59:16.006 [INFO][4611] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="723fe041ecf67469fad76e5183871f6dbc0b9998ed7f9bee6d411feeb434b5ba" HandleID="k8s-pod-network.723fe041ecf67469fad76e5183871f6dbc0b9998ed7f9bee6d411feeb434b5ba" Workload="localhost-k8s-goldmane--666569f655--44tcf-eth0" Dec 16 12:59:16.167315 containerd[1589]: 2025-12-16 12:59:16.006 [INFO][4611] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="723fe041ecf67469fad76e5183871f6dbc0b9998ed7f9bee6d411feeb434b5ba" HandleID="k8s-pod-network.723fe041ecf67469fad76e5183871f6dbc0b9998ed7f9bee6d411feeb434b5ba" Workload="localhost-k8s-goldmane--666569f655--44tcf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f0f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-44tcf", "timestamp":"2025-12-16 12:59:16.006268619 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:59:16.167315 containerd[1589]: 2025-12-16 12:59:16.006 [INFO][4611] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:59:16.167315 containerd[1589]: 2025-12-16 12:59:16.039 [INFO][4611] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:59:16.167315 containerd[1589]: 2025-12-16 12:59:16.039 [INFO][4611] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 12:59:16.167315 containerd[1589]: 2025-12-16 12:59:16.116 [INFO][4611] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.723fe041ecf67469fad76e5183871f6dbc0b9998ed7f9bee6d411feeb434b5ba" host="localhost" Dec 16 12:59:16.167315 containerd[1589]: 2025-12-16 12:59:16.125 [INFO][4611] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 12:59:16.167315 containerd[1589]: 2025-12-16 12:59:16.129 [INFO][4611] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 12:59:16.167315 containerd[1589]: 2025-12-16 12:59:16.131 [INFO][4611] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 12:59:16.167315 containerd[1589]: 2025-12-16 12:59:16.134 [INFO][4611] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 12:59:16.167315 containerd[1589]: 2025-12-16 12:59:16.134 [INFO][4611] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.723fe041ecf67469fad76e5183871f6dbc0b9998ed7f9bee6d411feeb434b5ba" host="localhost" Dec 16 12:59:16.167315 containerd[1589]: 2025-12-16 12:59:16.135 [INFO][4611] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.723fe041ecf67469fad76e5183871f6dbc0b9998ed7f9bee6d411feeb434b5ba Dec 16 12:59:16.167315 containerd[1589]: 2025-12-16 12:59:16.139 [INFO][4611] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.723fe041ecf67469fad76e5183871f6dbc0b9998ed7f9bee6d411feeb434b5ba" host="localhost" Dec 16 12:59:16.167315 containerd[1589]: 2025-12-16 12:59:16.145 [INFO][4611] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.723fe041ecf67469fad76e5183871f6dbc0b9998ed7f9bee6d411feeb434b5ba" host="localhost" Dec 16 12:59:16.167315 containerd[1589]: 2025-12-16 12:59:16.145 [INFO][4611] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.723fe041ecf67469fad76e5183871f6dbc0b9998ed7f9bee6d411feeb434b5ba" host="localhost" Dec 16 12:59:16.167315 containerd[1589]: 2025-12-16 12:59:16.145 [INFO][4611] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:59:16.167315 containerd[1589]: 2025-12-16 12:59:16.145 [INFO][4611] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="723fe041ecf67469fad76e5183871f6dbc0b9998ed7f9bee6d411feeb434b5ba" HandleID="k8s-pod-network.723fe041ecf67469fad76e5183871f6dbc0b9998ed7f9bee6d411feeb434b5ba" Workload="localhost-k8s-goldmane--666569f655--44tcf-eth0" Dec 16 12:59:16.168599 containerd[1589]: 2025-12-16 12:59:16.149 [INFO][4584] cni-plugin/k8s.go 418: Populated endpoint ContainerID="723fe041ecf67469fad76e5183871f6dbc0b9998ed7f9bee6d411feeb434b5ba" Namespace="calico-system" Pod="goldmane-666569f655-44tcf" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--44tcf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--44tcf-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"4557e1c6-8a81-439a-bd50-8cd81381c4a7", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 58, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-44tcf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4d107c5dd5a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:59:16.168599 containerd[1589]: 2025-12-16 12:59:16.149 [INFO][4584] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="723fe041ecf67469fad76e5183871f6dbc0b9998ed7f9bee6d411feeb434b5ba" Namespace="calico-system" Pod="goldmane-666569f655-44tcf" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--44tcf-eth0" Dec 16 12:59:16.168599 containerd[1589]: 2025-12-16 12:59:16.149 [INFO][4584] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4d107c5dd5a ContainerID="723fe041ecf67469fad76e5183871f6dbc0b9998ed7f9bee6d411feeb434b5ba" Namespace="calico-system" Pod="goldmane-666569f655-44tcf" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--44tcf-eth0" Dec 16 12:59:16.168599 containerd[1589]: 2025-12-16 12:59:16.153 [INFO][4584] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="723fe041ecf67469fad76e5183871f6dbc0b9998ed7f9bee6d411feeb434b5ba" Namespace="calico-system" Pod="goldmane-666569f655-44tcf" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--44tcf-eth0" Dec 16 12:59:16.168599 containerd[1589]: 2025-12-16 12:59:16.154 [INFO][4584] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="723fe041ecf67469fad76e5183871f6dbc0b9998ed7f9bee6d411feeb434b5ba" Namespace="calico-system" Pod="goldmane-666569f655-44tcf" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--44tcf-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--44tcf-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"4557e1c6-8a81-439a-bd50-8cd81381c4a7", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 58, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"723fe041ecf67469fad76e5183871f6dbc0b9998ed7f9bee6d411feeb434b5ba", Pod:"goldmane-666569f655-44tcf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4d107c5dd5a", MAC:"46:cb:0c:d5:6b:29", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:59:16.168599 containerd[1589]: 2025-12-16 12:59:16.163 [INFO][4584] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="723fe041ecf67469fad76e5183871f6dbc0b9998ed7f9bee6d411feeb434b5ba" Namespace="calico-system" Pod="goldmane-666569f655-44tcf" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--44tcf-eth0" Dec 16 12:59:16.187301 containerd[1589]: time="2025-12-16T12:59:16.187156557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-766db845cb-vmh89,Uid:14fa36c4-6467-4e58-88b1-8675a1ddf3eb,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a860883905c9a586d6f9c951aaa9ac22d4cbbfe5b01a6b091b5af588764871c0\"" Dec 16 12:59:16.190582 containerd[1589]: time="2025-12-16T12:59:16.190535299Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:59:16.196223 containerd[1589]: time="2025-12-16T12:59:16.196177730Z" level=info msg="connecting to shim 723fe041ecf67469fad76e5183871f6dbc0b9998ed7f9bee6d411feeb434b5ba" address="unix:///run/containerd/s/d8c4d3cb0896c282ff1cca9e05af6aa80793f97d7a2b4ed36de11d7344927d6e" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:59:16.229861 systemd[1]: Started cri-containerd-723fe041ecf67469fad76e5183871f6dbc0b9998ed7f9bee6d411feeb434b5ba.scope - libcontainer container 723fe041ecf67469fad76e5183871f6dbc0b9998ed7f9bee6d411feeb434b5ba. 
Dec 16 12:59:16.245746 systemd-resolved[1388]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 12:59:16.282680 containerd[1589]: time="2025-12-16T12:59:16.282627933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-44tcf,Uid:4557e1c6-8a81-439a-bd50-8cd81381c4a7,Namespace:calico-system,Attempt:0,} returns sandbox id \"723fe041ecf67469fad76e5183871f6dbc0b9998ed7f9bee6d411feeb434b5ba\"" Dec 16 12:59:16.548862 containerd[1589]: time="2025-12-16T12:59:16.548704476Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:59:16.549933 containerd[1589]: time="2025-12-16T12:59:16.549882567Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:59:16.549933 containerd[1589]: time="2025-12-16T12:59:16.549924266Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 12:59:16.550267 kubelet[2733]: E1216 12:59:16.550195 2733 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:59:16.550866 kubelet[2733]: E1216 12:59:16.550272 2733 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:59:16.550866 kubelet[2733]: E1216 12:59:16.550617 2733 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bmr7c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-766db845cb-vmh89_calico-apiserver(14fa36c4-6467-4e58-88b1-8675a1ddf3eb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:59:16.551060 containerd[1589]: time="2025-12-16T12:59:16.550943469Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:59:16.552035 kubelet[2733]: E1216 12:59:16.551954 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-766db845cb-vmh89" podUID="14fa36c4-6467-4e58-88b1-8675a1ddf3eb" Dec 16 12:59:16.822714 systemd-networkd[1477]: cali14929ac3018: Gained IPv6LL Dec 16 12:59:16.892331 containerd[1589]: time="2025-12-16T12:59:16.892249191Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:59:16.893556 containerd[1589]: time="2025-12-16T12:59:16.893489751Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:59:16.893710 containerd[1589]: time="2025-12-16T12:59:16.893568027Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 16 12:59:16.893789 kubelet[2733]: E1216 12:59:16.893741 2733 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:59:16.893850 kubelet[2733]: E1216 12:59:16.893801 2733 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:59:16.894070 kubelet[2733]: E1216 12:59:16.893987 2733 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cvnhx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-44tcf_calico-system(4557e1c6-8a81-439a-bd50-8cd81381c4a7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:59:16.895215 kubelet[2733]: E1216 12:59:16.895182 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-44tcf" podUID="4557e1c6-8a81-439a-bd50-8cd81381c4a7" Dec 16 12:59:16.927853 containerd[1589]: time="2025-12-16T12:59:16.927795855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-766db845cb-wm7wl,Uid:5bb3cad0-1f8c-4e17-94fc-cb2abb08a3e9,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:59:17.032645 systemd-networkd[1477]: calif0a00b596ee: Link UP Dec 16 12:59:17.032888 systemd-networkd[1477]: calif0a00b596ee: Gained carrier Dec 16 12:59:17.055002 containerd[1589]: 2025-12-16 12:59:16.963 [INFO][4743] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--766db845cb--wm7wl-eth0 calico-apiserver-766db845cb- calico-apiserver 5bb3cad0-1f8c-4e17-94fc-cb2abb08a3e9 858 0 2025-12-16 12:58:46 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:766db845cb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-766db845cb-wm7wl eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif0a00b596ee [] [] }} ContainerID="42122dfedb6f13450a8a0cd0c7fc8cc97c6d67035e7d4c1a0495ccd92de0bfa8" Namespace="calico-apiserver" Pod="calico-apiserver-766db845cb-wm7wl" WorkloadEndpoint="localhost-k8s-calico--apiserver--766db845cb--wm7wl-" Dec 16 12:59:17.055002 containerd[1589]: 2025-12-16 12:59:16.963 [INFO][4743] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="42122dfedb6f13450a8a0cd0c7fc8cc97c6d67035e7d4c1a0495ccd92de0bfa8" Namespace="calico-apiserver" Pod="calico-apiserver-766db845cb-wm7wl" WorkloadEndpoint="localhost-k8s-calico--apiserver--766db845cb--wm7wl-eth0" Dec 16 12:59:17.055002 containerd[1589]: 2025-12-16 12:59:16.993 [INFO][4756] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="42122dfedb6f13450a8a0cd0c7fc8cc97c6d67035e7d4c1a0495ccd92de0bfa8" HandleID="k8s-pod-network.42122dfedb6f13450a8a0cd0c7fc8cc97c6d67035e7d4c1a0495ccd92de0bfa8" Workload="localhost-k8s-calico--apiserver--766db845cb--wm7wl-eth0" Dec 16 12:59:17.055002 containerd[1589]: 2025-12-16 12:59:16.993 [INFO][4756] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="42122dfedb6f13450a8a0cd0c7fc8cc97c6d67035e7d4c1a0495ccd92de0bfa8" HandleID="k8s-pod-network.42122dfedb6f13450a8a0cd0c7fc8cc97c6d67035e7d4c1a0495ccd92de0bfa8" Workload="localhost-k8s-calico--apiserver--766db845cb--wm7wl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7000), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-766db845cb-wm7wl", "timestamp":"2025-12-16 12:59:16.993663894 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:59:17.055002 containerd[1589]: 2025-12-16 12:59:16.993 [INFO][4756] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:59:17.055002 containerd[1589]: 2025-12-16 12:59:16.993 [INFO][4756] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:59:17.055002 containerd[1589]: 2025-12-16 12:59:16.993 [INFO][4756] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 12:59:17.055002 containerd[1589]: 2025-12-16 12:59:17.000 [INFO][4756] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.42122dfedb6f13450a8a0cd0c7fc8cc97c6d67035e7d4c1a0495ccd92de0bfa8" host="localhost" Dec 16 12:59:17.055002 containerd[1589]: 2025-12-16 12:59:17.006 [INFO][4756] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 12:59:17.055002 containerd[1589]: 2025-12-16 12:59:17.010 [INFO][4756] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 12:59:17.055002 containerd[1589]: 2025-12-16 12:59:17.012 [INFO][4756] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 12:59:17.055002 containerd[1589]: 2025-12-16 12:59:17.014 [INFO][4756] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 12:59:17.055002 containerd[1589]: 2025-12-16 12:59:17.014 [INFO][4756] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.42122dfedb6f13450a8a0cd0c7fc8cc97c6d67035e7d4c1a0495ccd92de0bfa8" host="localhost" Dec 16 12:59:17.055002 containerd[1589]: 2025-12-16 12:59:17.015 [INFO][4756] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.42122dfedb6f13450a8a0cd0c7fc8cc97c6d67035e7d4c1a0495ccd92de0bfa8 Dec 16 12:59:17.055002 containerd[1589]: 2025-12-16 12:59:17.019 [INFO][4756] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.42122dfedb6f13450a8a0cd0c7fc8cc97c6d67035e7d4c1a0495ccd92de0bfa8" host="localhost" Dec 16 12:59:17.055002 containerd[1589]: 2025-12-16 12:59:17.025 [INFO][4756] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.42122dfedb6f13450a8a0cd0c7fc8cc97c6d67035e7d4c1a0495ccd92de0bfa8" host="localhost" Dec 16 12:59:17.055002 containerd[1589]: 2025-12-16 12:59:17.026 [INFO][4756] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.42122dfedb6f13450a8a0cd0c7fc8cc97c6d67035e7d4c1a0495ccd92de0bfa8" host="localhost" Dec 16 12:59:17.055002 containerd[1589]: 2025-12-16 12:59:17.026 [INFO][4756] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:59:17.055002 containerd[1589]: 2025-12-16 12:59:17.026 [INFO][4756] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="42122dfedb6f13450a8a0cd0c7fc8cc97c6d67035e7d4c1a0495ccd92de0bfa8" HandleID="k8s-pod-network.42122dfedb6f13450a8a0cd0c7fc8cc97c6d67035e7d4c1a0495ccd92de0bfa8" Workload="localhost-k8s-calico--apiserver--766db845cb--wm7wl-eth0" Dec 16 12:59:17.055596 containerd[1589]: 2025-12-16 12:59:17.029 [INFO][4743] cni-plugin/k8s.go 418: Populated endpoint ContainerID="42122dfedb6f13450a8a0cd0c7fc8cc97c6d67035e7d4c1a0495ccd92de0bfa8" Namespace="calico-apiserver" Pod="calico-apiserver-766db845cb-wm7wl" WorkloadEndpoint="localhost-k8s-calico--apiserver--766db845cb--wm7wl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--766db845cb--wm7wl-eth0", GenerateName:"calico-apiserver-766db845cb-", Namespace:"calico-apiserver", SelfLink:"", UID:"5bb3cad0-1f8c-4e17-94fc-cb2abb08a3e9", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 58, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"766db845cb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-766db845cb-wm7wl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif0a00b596ee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:59:17.055596 containerd[1589]: 2025-12-16 12:59:17.029 [INFO][4743] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="42122dfedb6f13450a8a0cd0c7fc8cc97c6d67035e7d4c1a0495ccd92de0bfa8" Namespace="calico-apiserver" Pod="calico-apiserver-766db845cb-wm7wl" WorkloadEndpoint="localhost-k8s-calico--apiserver--766db845cb--wm7wl-eth0" Dec 16 12:59:17.055596 containerd[1589]: 2025-12-16 12:59:17.029 [INFO][4743] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif0a00b596ee ContainerID="42122dfedb6f13450a8a0cd0c7fc8cc97c6d67035e7d4c1a0495ccd92de0bfa8" Namespace="calico-apiserver" Pod="calico-apiserver-766db845cb-wm7wl" WorkloadEndpoint="localhost-k8s-calico--apiserver--766db845cb--wm7wl-eth0" Dec 16 12:59:17.055596 containerd[1589]: 2025-12-16 12:59:17.032 [INFO][4743] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="42122dfedb6f13450a8a0cd0c7fc8cc97c6d67035e7d4c1a0495ccd92de0bfa8" Namespace="calico-apiserver" Pod="calico-apiserver-766db845cb-wm7wl" WorkloadEndpoint="localhost-k8s-calico--apiserver--766db845cb--wm7wl-eth0" Dec 16 12:59:17.055596 containerd[1589]: 2025-12-16 12:59:17.033 [INFO][4743] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="42122dfedb6f13450a8a0cd0c7fc8cc97c6d67035e7d4c1a0495ccd92de0bfa8" Namespace="calico-apiserver" Pod="calico-apiserver-766db845cb-wm7wl" WorkloadEndpoint="localhost-k8s-calico--apiserver--766db845cb--wm7wl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--766db845cb--wm7wl-eth0", GenerateName:"calico-apiserver-766db845cb-", Namespace:"calico-apiserver", SelfLink:"", UID:"5bb3cad0-1f8c-4e17-94fc-cb2abb08a3e9", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 58, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"766db845cb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"42122dfedb6f13450a8a0cd0c7fc8cc97c6d67035e7d4c1a0495ccd92de0bfa8", Pod:"calico-apiserver-766db845cb-wm7wl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif0a00b596ee", MAC:"f6:2a:7a:37:1b:5d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:59:17.055596 containerd[1589]: 2025-12-16 12:59:17.048 [INFO][4743] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="42122dfedb6f13450a8a0cd0c7fc8cc97c6d67035e7d4c1a0495ccd92de0bfa8" Namespace="calico-apiserver" Pod="calico-apiserver-766db845cb-wm7wl" WorkloadEndpoint="localhost-k8s-calico--apiserver--766db845cb--wm7wl-eth0" Dec 16 12:59:17.100327 containerd[1589]: time="2025-12-16T12:59:17.099159042Z" level=info msg="connecting to shim 42122dfedb6f13450a8a0cd0c7fc8cc97c6d67035e7d4c1a0495ccd92de0bfa8" address="unix:///run/containerd/s/81fa49828d37510b8645e0b3cb5cae2afbaaa31b85745d3013ca463d988e69ff" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:59:17.132745 systemd[1]: Started cri-containerd-42122dfedb6f13450a8a0cd0c7fc8cc97c6d67035e7d4c1a0495ccd92de0bfa8.scope - libcontainer container 42122dfedb6f13450a8a0cd0c7fc8cc97c6d67035e7d4c1a0495ccd92de0bfa8. 
Dec 16 12:59:17.148396 systemd-resolved[1388]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 12:59:17.180423 containerd[1589]: time="2025-12-16T12:59:17.180176747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-766db845cb-wm7wl,Uid:5bb3cad0-1f8c-4e17-94fc-cb2abb08a3e9,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"42122dfedb6f13450a8a0cd0c7fc8cc97c6d67035e7d4c1a0495ccd92de0bfa8\"" Dec 16 12:59:17.181660 containerd[1589]: time="2025-12-16T12:59:17.181614716Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:59:17.462675 systemd-networkd[1477]: calia52e264cd1f: Gained IPv6LL Dec 16 12:59:17.472724 kubelet[2733]: E1216 12:59:17.472689 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-44tcf" podUID="4557e1c6-8a81-439a-bd50-8cd81381c4a7" Dec 16 12:59:17.473205 kubelet[2733]: E1216 12:59:17.473157 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-766db845cb-vmh89" podUID="14fa36c4-6467-4e58-88b1-8675a1ddf3eb" Dec 16 12:59:17.531265 containerd[1589]: time="2025-12-16T12:59:17.531221376Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:59:17.532377 containerd[1589]: time="2025-12-16T12:59:17.532331680Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:59:17.532465 containerd[1589]: time="2025-12-16T12:59:17.532394467Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 12:59:17.532664 kubelet[2733]: E1216 12:59:17.532617 2733 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:59:17.532727 kubelet[2733]: E1216 12:59:17.532670 2733 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:59:17.532853 
kubelet[2733]: E1216 12:59:17.532815 2733 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-92c2m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-766db845cb-wm7wl_calico-apiserver(5bb3cad0-1f8c-4e17-94fc-cb2abb08a3e9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:59:17.534123 kubelet[2733]: E1216 12:59:17.534076 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-766db845cb-wm7wl" podUID="5bb3cad0-1f8c-4e17-94fc-cb2abb08a3e9" Dec 16 12:59:18.038852 systemd-networkd[1477]: cali4d107c5dd5a: Gained IPv6LL Dec 16 12:59:18.474312 kubelet[2733]: E1216 12:59:18.474094 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-766db845cb-wm7wl" podUID="5bb3cad0-1f8c-4e17-94fc-cb2abb08a3e9" Dec 16 12:59:18.559582 systemd[1]: Started sshd@8-10.0.0.34:22-10.0.0.1:46678.service - OpenSSH per-connection server daemon (10.0.0.1:46678). Dec 16 12:59:18.626235 sshd[4822]: Accepted publickey for core from 10.0.0.1 port 46678 ssh2: RSA SHA256:U5R1V2YL8grSrRz9PVaqQqCOxjm1DLwZWE3rSGcR9eI Dec 16 12:59:18.629290 sshd-session[4822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:59:18.634732 systemd-logind[1561]: New session 9 of user core. Dec 16 12:59:18.639646 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 16 12:59:18.779271 sshd[4827]: Connection closed by 10.0.0.1 port 46678 Dec 16 12:59:18.779615 sshd-session[4822]: pam_unix(sshd:session): session closed for user core Dec 16 12:59:18.785964 systemd[1]: sshd@8-10.0.0.34:22-10.0.0.1:46678.service: Deactivated successfully. Dec 16 12:59:18.788391 systemd[1]: session-9.scope: Deactivated successfully. Dec 16 12:59:18.789570 systemd-logind[1561]: Session 9 logged out. Waiting for processes to exit. Dec 16 12:59:18.790927 systemd-logind[1561]: Removed session 9. Dec 16 12:59:19.062819 systemd-networkd[1477]: calif0a00b596ee: Gained IPv6LL Dec 16 12:59:23.793684 systemd[1]: Started sshd@9-10.0.0.34:22-10.0.0.1:42188.service - OpenSSH per-connection server daemon (10.0.0.1:42188). Dec 16 12:59:23.853696 sshd[4852]: Accepted publickey for core from 10.0.0.1 port 42188 ssh2: RSA SHA256:U5R1V2YL8grSrRz9PVaqQqCOxjm1DLwZWE3rSGcR9eI Dec 16 12:59:23.855375 sshd-session[4852]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:59:23.860371 systemd-logind[1561]: New session 10 of user core. Dec 16 12:59:23.879791 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 16 12:59:24.017540 sshd[4855]: Connection closed by 10.0.0.1 port 42188 Dec 16 12:59:24.018093 sshd-session[4852]: pam_unix(sshd:session): session closed for user core Dec 16 12:59:24.029181 systemd[1]: sshd@9-10.0.0.34:22-10.0.0.1:42188.service: Deactivated successfully. Dec 16 12:59:24.031749 systemd[1]: session-10.scope: Deactivated successfully. Dec 16 12:59:24.032779 systemd-logind[1561]: Session 10 logged out. Waiting for processes to exit. Dec 16 12:59:24.036690 systemd[1]: Started sshd@10-10.0.0.34:22-10.0.0.1:42200.service - OpenSSH per-connection server daemon (10.0.0.1:42200). Dec 16 12:59:24.037568 systemd-logind[1561]: Removed session 10. Dec 16 12:59:24.102323 sshd[4869]: Accepted publickey for core from 10.0.0.1 port 42200 ssh2: RSA SHA256:U5R1V2YL8grSrRz9PVaqQqCOxjm1DLwZWE3rSGcR9eI Dec 16 12:59:24.104097 sshd-session[4869]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:59:24.109590 systemd-logind[1561]: New session 11 of user core. Dec 16 12:59:24.116889 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 16 12:59:24.274111 sshd[4872]: Connection closed by 10.0.0.1 port 42200 Dec 16 12:59:24.274961 sshd-session[4869]: pam_unix(sshd:session): session closed for user core Dec 16 12:59:24.286127 systemd[1]: sshd@10-10.0.0.34:22-10.0.0.1:42200.service: Deactivated successfully. Dec 16 12:59:24.290282 systemd[1]: session-11.scope: Deactivated successfully. Dec 16 12:59:24.292579 systemd-logind[1561]: Session 11 logged out. Waiting for processes to exit. 
Dec 16 12:59:24.299615 systemd[1]: Started sshd@11-10.0.0.34:22-10.0.0.1:42212.service - OpenSSH per-connection server daemon (10.0.0.1:42212). Dec 16 12:59:24.304437 systemd-logind[1561]: Removed session 11. Dec 16 12:59:24.362473 sshd[4884]: Accepted publickey for core from 10.0.0.1 port 42212 ssh2: RSA SHA256:U5R1V2YL8grSrRz9PVaqQqCOxjm1DLwZWE3rSGcR9eI Dec 16 12:59:24.364954 sshd-session[4884]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:59:24.371582 systemd-logind[1561]: New session 12 of user core. Dec 16 12:59:24.381725 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 16 12:59:24.506334 sshd[4887]: Connection closed by 10.0.0.1 port 42212 Dec 16 12:59:24.506732 sshd-session[4884]: pam_unix(sshd:session): session closed for user core Dec 16 12:59:24.512629 systemd[1]: sshd@11-10.0.0.34:22-10.0.0.1:42212.service: Deactivated successfully. Dec 16 12:59:24.515359 systemd[1]: session-12.scope: Deactivated successfully. Dec 16 12:59:24.517243 systemd-logind[1561]: Session 12 logged out. Waiting for processes to exit. Dec 16 12:59:24.518617 systemd-logind[1561]: Removed session 12. Dec 16 12:59:24.927678 containerd[1589]: time="2025-12-16T12:59:24.927639402Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:59:25.286615 containerd[1589]: time="2025-12-16T12:59:25.286461624Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:59:25.287822 containerd[1589]: time="2025-12-16T12:59:25.287761222Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:59:25.287934 containerd[1589]: time="2025-12-16T12:59:25.287821996Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 16 12:59:25.288000 kubelet[2733]: E1216 12:59:25.287953 2733 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:59:25.288729 kubelet[2733]: E1216 12:59:25.288007 2733 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:59:25.288729 kubelet[2733]: E1216 12:59:25.288215 2733 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:10f79adb9b2f48408b83dddbfc7d047e,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-69gf9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-9c6786f75-2gcf6_calico-system(65a51f87-62b9-4a77-b80b-0e907e9d663e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:59:25.288928 containerd[1589]: time="2025-12-16T12:59:25.288346901Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 12:59:25.606142 containerd[1589]: time="2025-12-16T12:59:25.605982164Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:59:25.607963 containerd[1589]: time="2025-12-16T12:59:25.607919970Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:59:25.608057 containerd[1589]: time="2025-12-16T12:59:25.607979000Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 16 12:59:25.608244 kubelet[2733]: E1216 12:59:25.608192 2733 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:59:25.608310 kubelet[2733]: E1216 12:59:25.608254 2733 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:59:25.608652 containerd[1589]: time="2025-12-16T12:59:25.608587753Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 12:59:25.608792 kubelet[2733]: E1216 12:59:25.608578 2733 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hlz62,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jv88p_calico-system(326be917-499c-401a-aae1-40840ab247ef): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 12:59:25.971151 containerd[1589]: time="2025-12-16T12:59:25.971087608Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:59:26.107277 containerd[1589]: time="2025-12-16T12:59:26.107200319Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:59:26.107455 containerd[1589]: time="2025-12-16T12:59:26.107240474Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 16 12:59:26.107576 kubelet[2733]: E1216 12:59:26.107514 2733 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 
16 12:59:26.107635 kubelet[2733]: E1216 12:59:26.107580 2733 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:59:26.107972 containerd[1589]: time="2025-12-16T12:59:26.107891376Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 12:59:26.108025 kubelet[2733]: E1216 12:59:26.107871 2733 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-69gf9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-9c6786f75-2gcf6_calico-system(65a51f87-62b9-4a77-b80b-0e907e9d663e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:59:26.109151 kubelet[2733]: E1216 12:59:26.109111 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9c6786f75-2gcf6" podUID="65a51f87-62b9-4a77-b80b-0e907e9d663e" Dec 16 12:59:26.517648 containerd[1589]: time="2025-12-16T12:59:26.517555350Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:59:26.520411 containerd[1589]: time="2025-12-16T12:59:26.518909039Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:59:26.520411 containerd[1589]: time="2025-12-16T12:59:26.518942832Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 16 12:59:26.520773 kubelet[2733]: E1216 12:59:26.519557 2733 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:59:26.521174 kubelet[2733]: E1216 12:59:26.520812 2733 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:59:26.521174 kubelet[2733]: E1216 12:59:26.520969 2733 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hlz62,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jv88p_calico-system(326be917-499c-401a-aae1-40840ab247ef): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 12:59:26.522175 kubelet[2733]: E1216 12:59:26.522104 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jv88p" podUID="326be917-499c-401a-aae1-40840ab247ef" Dec 16 12:59:26.927704 containerd[1589]: time="2025-12-16T12:59:26.927653728Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:59:27.272554 containerd[1589]: time="2025-12-16T12:59:27.271001559Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:59:27.275050 containerd[1589]: time="2025-12-16T12:59:27.274780178Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:59:27.275050 containerd[1589]: time="2025-12-16T12:59:27.274913878Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 16 12:59:27.275202 kubelet[2733]: E1216 12:59:27.275093 2733 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:59:27.275202 kubelet[2733]: E1216 12:59:27.275152 2733 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:59:27.275631 kubelet[2733]: E1216 12:59:27.275302 2733 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6b25q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-8b9b958b-znds7_calico-system(63bfc57c-01f9-4eda-a8ed-3f2fe04dfe53): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:59:27.276554 kubelet[2733]: E1216 12:59:27.276519 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8b9b958b-znds7" podUID="63bfc57c-01f9-4eda-a8ed-3f2fe04dfe53" Dec 16 12:59:28.928857 containerd[1589]: time="2025-12-16T12:59:28.928789947Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:59:29.241651 containerd[1589]: time="2025-12-16T12:59:29.241513537Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:59:29.336481 containerd[1589]: time="2025-12-16T12:59:29.336413889Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 12:59:29.336683 containerd[1589]: time="2025-12-16T12:59:29.336472409Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:59:29.336835 kubelet[2733]: E1216 12:59:29.336784 2733 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:59:29.339160 kubelet[2733]: E1216 12:59:29.336845 2733 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:59:29.339160 kubelet[2733]: 
E1216 12:59:29.337035 2733 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bmr7c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-766db845cb-vmh89_calico-apiserver(14fa36c4-6467-4e58-88b1-8675a1ddf3eb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:59:29.339160 kubelet[2733]: E1216 12:59:29.338327 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-766db845cb-vmh89" podUID="14fa36c4-6467-4e58-88b1-8675a1ddf3eb" Dec 16 12:59:29.520892 systemd[1]: Started sshd@12-10.0.0.34:22-10.0.0.1:42220.service - OpenSSH per-connection server daemon (10.0.0.1:42220). Dec 16 12:59:29.573404 sshd[4902]: Accepted publickey for core from 10.0.0.1 port 42220 ssh2: RSA SHA256:U5R1V2YL8grSrRz9PVaqQqCOxjm1DLwZWE3rSGcR9eI Dec 16 12:59:29.574909 sshd-session[4902]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:59:29.579560 systemd-logind[1561]: New session 13 of user core. 
Dec 16 12:59:29.590790 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 16 12:59:29.843587 sshd[4905]: Connection closed by 10.0.0.1 port 42220 Dec 16 12:59:29.843960 sshd-session[4902]: pam_unix(sshd:session): session closed for user core Dec 16 12:59:29.848403 systemd[1]: sshd@12-10.0.0.34:22-10.0.0.1:42220.service: Deactivated successfully. Dec 16 12:59:29.851256 systemd[1]: session-13.scope: Deactivated successfully. Dec 16 12:59:29.853331 systemd-logind[1561]: Session 13 logged out. Waiting for processes to exit. Dec 16 12:59:29.855115 systemd-logind[1561]: Removed session 13. Dec 16 12:59:32.928361 containerd[1589]: time="2025-12-16T12:59:32.928197588Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:59:33.432367 containerd[1589]: time="2025-12-16T12:59:33.432294339Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:59:33.433979 containerd[1589]: time="2025-12-16T12:59:33.433889832Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:59:33.434066 containerd[1589]: time="2025-12-16T12:59:33.433965915Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 12:59:33.434187 kubelet[2733]: E1216 12:59:33.434128 2733 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:59:33.434619 kubelet[2733]: E1216 12:59:33.434190 2733 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:59:33.434619 kubelet[2733]: E1216 12:59:33.434457 2733 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-92c2m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-766db845cb-wm7wl_calico-apiserver(5bb3cad0-1f8c-4e17-94fc-cb2abb08a3e9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:59:33.434868 containerd[1589]: time="2025-12-16T12:59:33.434839143Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:59:33.436138 kubelet[2733]: E1216 12:59:33.436058 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-766db845cb-wm7wl" podUID="5bb3cad0-1f8c-4e17-94fc-cb2abb08a3e9" Dec 16 12:59:33.779220 containerd[1589]: time="2025-12-16T12:59:33.779023595Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:59:33.903836 containerd[1589]: time="2025-12-16T12:59:33.903773176Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 16 12:59:33.903836 containerd[1589]: time="2025-12-16T12:59:33.903811298Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:59:33.904176 kubelet[2733]: E1216 12:59:33.904065 2733 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:59:33.904176 kubelet[2733]: E1216 12:59:33.904129 2733 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:59:33.904381 kubelet[2733]: E1216 12:59:33.904323 2733 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cvnhx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-44tcf_calico-system(4557e1c6-8a81-439a-bd50-8cd81381c4a7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:59:33.905562 kubelet[2733]: E1216 12:59:33.905532 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-44tcf" podUID="4557e1c6-8a81-439a-bd50-8cd81381c4a7" Dec 16 12:59:34.858288 systemd[1]: Started sshd@13-10.0.0.34:22-10.0.0.1:59886.service - OpenSSH per-connection server daemon (10.0.0.1:59886). Dec 16 12:59:34.908845 sshd[4930]: Accepted publickey for core from 10.0.0.1 port 59886 ssh2: RSA SHA256:U5R1V2YL8grSrRz9PVaqQqCOxjm1DLwZWE3rSGcR9eI Dec 16 12:59:34.910668 sshd-session[4930]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:59:34.917894 systemd-logind[1561]: New session 14 of user core. Dec 16 12:59:34.927787 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 16 12:59:35.147614 sshd[4933]: Connection closed by 10.0.0.1 port 59886 Dec 16 12:59:35.148030 sshd-session[4930]: pam_unix(sshd:session): session closed for user core Dec 16 12:59:35.154783 systemd[1]: sshd@13-10.0.0.34:22-10.0.0.1:59886.service: Deactivated successfully. Dec 16 12:59:35.156790 systemd[1]: session-14.scope: Deactivated successfully. Dec 16 12:59:35.157603 systemd-logind[1561]: Session 14 logged out. Waiting for processes to exit. Dec 16 12:59:35.158891 systemd-logind[1561]: Removed session 14. 
Dec 16 12:59:39.929148 kubelet[2733]: E1216 12:59:39.929072 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-766db845cb-vmh89" podUID="14fa36c4-6467-4e58-88b1-8675a1ddf3eb" Dec 16 12:59:39.930323 kubelet[2733]: E1216 12:59:39.930247 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jv88p" podUID="326be917-499c-401a-aae1-40840ab247ef" Dec 16 12:59:40.169534 systemd[1]: Started sshd@14-10.0.0.34:22-10.0.0.1:60634.service - OpenSSH per-connection server daemon (10.0.0.1:60634). Dec 16 12:59:40.276600 sshd[5001]: Accepted publickey for core from 10.0.0.1 port 60634 ssh2: RSA SHA256:U5R1V2YL8grSrRz9PVaqQqCOxjm1DLwZWE3rSGcR9eI Dec 16 12:59:40.277833 sshd-session[5001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:59:40.285978 systemd-logind[1561]: New session 15 of user core. Dec 16 12:59:40.295707 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 16 12:59:40.472859 sshd[5004]: Connection closed by 10.0.0.1 port 60634 Dec 16 12:59:40.472601 sshd-session[5001]: pam_unix(sshd:session): session closed for user core Dec 16 12:59:40.479547 systemd[1]: sshd@14-10.0.0.34:22-10.0.0.1:60634.service: Deactivated successfully. Dec 16 12:59:40.482115 systemd[1]: session-15.scope: Deactivated successfully. Dec 16 12:59:40.483062 systemd-logind[1561]: Session 15 logged out. Waiting for processes to exit. Dec 16 12:59:40.485214 systemd-logind[1561]: Removed session 15. 
Dec 16 12:59:40.927580 kubelet[2733]: E1216 12:59:40.927530 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8b9b958b-znds7" podUID="63bfc57c-01f9-4eda-a8ed-3f2fe04dfe53" Dec 16 12:59:40.928187 kubelet[2733]: E1216 12:59:40.928148 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9c6786f75-2gcf6" podUID="65a51f87-62b9-4a77-b80b-0e907e9d663e" Dec 16 12:59:44.927299 kubelet[2733]: E1216 12:59:44.927241 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-44tcf" podUID="4557e1c6-8a81-439a-bd50-8cd81381c4a7" Dec 16 12:59:45.487977 systemd[1]: Started sshd@15-10.0.0.34:22-10.0.0.1:60640.service - OpenSSH per-connection server daemon (10.0.0.1:60640). Dec 16 12:59:45.543532 sshd[5019]: Accepted publickey for core from 10.0.0.1 port 60640 ssh2: RSA SHA256:U5R1V2YL8grSrRz9PVaqQqCOxjm1DLwZWE3rSGcR9eI Dec 16 12:59:45.546224 sshd-session[5019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:59:45.550863 systemd-logind[1561]: New session 16 of user core. Dec 16 12:59:45.561736 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 16 12:59:45.676676 sshd[5022]: Connection closed by 10.0.0.1 port 60640 Dec 16 12:59:45.677037 sshd-session[5019]: pam_unix(sshd:session): session closed for user core Dec 16 12:59:45.690623 systemd[1]: sshd@15-10.0.0.34:22-10.0.0.1:60640.service: Deactivated successfully. Dec 16 12:59:45.692541 systemd[1]: session-16.scope: Deactivated successfully. Dec 16 12:59:45.693488 systemd-logind[1561]: Session 16 logged out. Waiting for processes to exit. Dec 16 12:59:45.696457 systemd[1]: Started sshd@16-10.0.0.34:22-10.0.0.1:60656.service - OpenSSH per-connection server daemon (10.0.0.1:60656). 
Dec 16 12:59:45.697265 systemd-logind[1561]: Removed session 16. Dec 16 12:59:45.753985 sshd[5035]: Accepted publickey for core from 10.0.0.1 port 60656 ssh2: RSA SHA256:U5R1V2YL8grSrRz9PVaqQqCOxjm1DLwZWE3rSGcR9eI Dec 16 12:59:45.756189 sshd-session[5035]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:59:45.763058 systemd-logind[1561]: New session 17 of user core. Dec 16 12:59:45.776806 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 16 12:59:45.928445 kubelet[2733]: E1216 12:59:45.928389 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-766db845cb-wm7wl" podUID="5bb3cad0-1f8c-4e17-94fc-cb2abb08a3e9" Dec 16 12:59:46.053329 sshd[5038]: Connection closed by 10.0.0.1 port 60656 Dec 16 12:59:46.054806 sshd-session[5035]: pam_unix(sshd:session): session closed for user core Dec 16 12:59:46.064343 systemd[1]: sshd@16-10.0.0.34:22-10.0.0.1:60656.service: Deactivated successfully. Dec 16 12:59:46.066579 systemd[1]: session-17.scope: Deactivated successfully. Dec 16 12:59:46.067745 systemd-logind[1561]: Session 17 logged out. Waiting for processes to exit. Dec 16 12:59:46.070252 systemd[1]: Started sshd@17-10.0.0.34:22-10.0.0.1:60658.service - OpenSSH per-connection server daemon (10.0.0.1:60658). Dec 16 12:59:46.071010 systemd-logind[1561]: Removed session 17. Dec 16 12:59:46.148899 sshd[5050]: Accepted publickey for core from 10.0.0.1 port 60658 ssh2: RSA SHA256:U5R1V2YL8grSrRz9PVaqQqCOxjm1DLwZWE3rSGcR9eI Dec 16 12:59:46.151057 sshd-session[5050]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:59:46.160225 systemd-logind[1561]: New session 18 of user core. Dec 16 12:59:46.171803 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 16 12:59:47.225481 sshd[5053]: Connection closed by 10.0.0.1 port 60658 Dec 16 12:59:47.226722 sshd-session[5050]: pam_unix(sshd:session): session closed for user core Dec 16 12:59:47.242011 systemd[1]: sshd@17-10.0.0.34:22-10.0.0.1:60658.service: Deactivated successfully. Dec 16 12:59:47.245185 systemd[1]: session-18.scope: Deactivated successfully. Dec 16 12:59:47.247053 systemd-logind[1561]: Session 18 logged out. Waiting for processes to exit. Dec 16 12:59:47.252159 systemd[1]: Started sshd@18-10.0.0.34:22-10.0.0.1:60666.service - OpenSSH per-connection server daemon (10.0.0.1:60666). Dec 16 12:59:47.254134 systemd-logind[1561]: Removed session 18. Dec 16 12:59:47.311556 sshd[5074]: Accepted publickey for core from 10.0.0.1 port 60666 ssh2: RSA SHA256:U5R1V2YL8grSrRz9PVaqQqCOxjm1DLwZWE3rSGcR9eI Dec 16 12:59:47.313427 sshd-session[5074]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:59:47.318743 systemd-logind[1561]: New session 19 of user core. Dec 16 12:59:47.336702 systemd[1]: Started session-19.scope - Session 19 of User core. 
Dec 16 12:59:47.779177 sshd[5077]: Connection closed by 10.0.0.1 port 60666 Dec 16 12:59:47.779736 sshd-session[5074]: pam_unix(sshd:session): session closed for user core Dec 16 12:59:47.794461 systemd[1]: sshd@18-10.0.0.34:22-10.0.0.1:60666.service: Deactivated successfully. Dec 16 12:59:47.797026 systemd[1]: session-19.scope: Deactivated successfully. Dec 16 12:59:47.799624 systemd-logind[1561]: Session 19 logged out. Waiting for processes to exit. Dec 16 12:59:47.805912 systemd-logind[1561]: Removed session 19. Dec 16 12:59:47.808806 systemd[1]: Started sshd@19-10.0.0.34:22-10.0.0.1:60668.service - OpenSSH per-connection server daemon (10.0.0.1:60668). Dec 16 12:59:47.864203 sshd[5088]: Accepted publickey for core from 10.0.0.1 port 60668 ssh2: RSA SHA256:U5R1V2YL8grSrRz9PVaqQqCOxjm1DLwZWE3rSGcR9eI Dec 16 12:59:47.866637 sshd-session[5088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:59:47.872419 systemd-logind[1561]: New session 20 of user core. Dec 16 12:59:47.881773 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 16 12:59:48.008203 sshd[5091]: Connection closed by 10.0.0.1 port 60668 Dec 16 12:59:48.008608 sshd-session[5088]: pam_unix(sshd:session): session closed for user core Dec 16 12:59:48.013412 systemd[1]: sshd@19-10.0.0.34:22-10.0.0.1:60668.service: Deactivated successfully. Dec 16 12:59:48.015768 systemd[1]: session-20.scope: Deactivated successfully. Dec 16 12:59:48.017000 systemd-logind[1561]: Session 20 logged out. Waiting for processes to exit. Dec 16 12:59:48.018633 systemd-logind[1561]: Removed session 20. Dec 16 12:59:52.927952 containerd[1589]: time="2025-12-16T12:59:52.927893950Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 12:59:53.029164 systemd[1]: Started sshd@20-10.0.0.34:22-10.0.0.1:50976.service - OpenSSH per-connection server daemon (10.0.0.1:50976). Dec 16 12:59:53.129923 sshd[5110]: Accepted publickey for core from 10.0.0.1 port 50976 ssh2: RSA SHA256:U5R1V2YL8grSrRz9PVaqQqCOxjm1DLwZWE3rSGcR9eI Dec 16 12:59:53.132524 sshd-session[5110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:59:53.137799 systemd-logind[1561]: New session 21 of user core. Dec 16 12:59:53.144647 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 16 12:59:53.276382 sshd[5113]: Connection closed by 10.0.0.1 port 50976 Dec 16 12:59:53.276690 sshd-session[5110]: pam_unix(sshd:session): session closed for user core Dec 16 12:59:53.281272 systemd[1]: sshd@20-10.0.0.34:22-10.0.0.1:50976.service: Deactivated successfully. Dec 16 12:59:53.283417 systemd[1]: session-21.scope: Deactivated successfully. Dec 16 12:59:53.284342 systemd-logind[1561]: Session 21 logged out. Waiting for processes to exit. Dec 16 12:59:53.286116 systemd-logind[1561]: Removed session 21. 
Dec 16 12:59:53.324568 containerd[1589]: time="2025-12-16T12:59:53.324519933Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:59:53.333965 containerd[1589]: time="2025-12-16T12:59:53.333880264Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:59:53.334105 containerd[1589]: time="2025-12-16T12:59:53.333995374Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 16 12:59:53.334213 kubelet[2733]: E1216 12:59:53.334155 2733 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:59:53.334609 kubelet[2733]: E1216 12:59:53.334232 2733 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:59:53.334609 kubelet[2733]: E1216 12:59:53.334488 2733 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hlz62,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jv88p_calico-system(326be917-499c-401a-aae1-40840ab247ef): ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 12:59:53.334737 containerd[1589]: time="2025-12-16T12:59:53.334617603Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:59:53.659012 containerd[1589]: time="2025-12-16T12:59:53.658936556Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:59:53.746937 containerd[1589]: time="2025-12-16T12:59:53.746831270Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 12:59:53.746937 containerd[1589]: time="2025-12-16T12:59:53.746894390Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:59:53.747427 kubelet[2733]: E1216 12:59:53.747302 2733 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:59:53.747427 kubelet[2733]: E1216 12:59:53.747373 2733 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:59:53.748007 containerd[1589]: time="2025-12-16T12:59:53.747936021Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 12:59:53.748805 kubelet[2733]: E1216 12:59:53.748212 2733 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bmr7c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-766db845cb-vmh89_calico-apiserver(14fa36c4-6467-4e58-88b1-8675a1ddf3eb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:59:53.749394 kubelet[2733]: E1216 12:59:53.749363 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-766db845cb-vmh89" podUID="14fa36c4-6467-4e58-88b1-8675a1ddf3eb" Dec 16 12:59:54.126121 containerd[1589]: time="2025-12-16T12:59:54.126076518Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:59:54.127233 containerd[1589]: time="2025-12-16T12:59:54.127181409Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:59:54.127408 containerd[1589]: time="2025-12-16T12:59:54.127248747Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 16 12:59:54.127484 kubelet[2733]: E1216 12:59:54.127435 2733 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:59:54.127544 kubelet[2733]: E1216 12:59:54.127506 2733 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:59:54.127841 containerd[1589]: 
time="2025-12-16T12:59:54.127814989Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:59:54.127877 kubelet[2733]: E1216 12:59:54.127774 2733 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hlz62,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jv88p_calico-system(326be917-499c-401a-aae1-40840ab247ef): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 12:59:54.129262 kubelet[2733]: E1216 12:59:54.129216 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jv88p" podUID="326be917-499c-401a-aae1-40840ab247ef" Dec 16 12:59:54.487103 containerd[1589]: time="2025-12-16T12:59:54.486928992Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:59:54.488235 
containerd[1589]: time="2025-12-16T12:59:54.488184141Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:59:54.488322 containerd[1589]: time="2025-12-16T12:59:54.488240568Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 16 12:59:54.488518 kubelet[2733]: E1216 12:59:54.488446 2733 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:59:54.488907 kubelet[2733]: E1216 12:59:54.488527 2733 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:59:54.488907 kubelet[2733]: E1216 12:59:54.488649 2733 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:10f79adb9b2f48408b83dddbfc7d047e,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-69gf9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-9c6786f75-2gcf6_calico-system(65a51f87-62b9-4a77-b80b-0e907e9d663e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:59:54.490768 containerd[1589]: time="2025-12-16T12:59:54.490731857Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 12:59:54.823232 containerd[1589]: time="2025-12-16T12:59:54.823075880Z" level=info 
msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:59:54.825646 containerd[1589]: time="2025-12-16T12:59:54.825591977Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:59:54.825646 containerd[1589]: time="2025-12-16T12:59:54.825621363Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 16 12:59:54.825875 kubelet[2733]: E1216 12:59:54.825825 2733 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:59:54.825960 kubelet[2733]: E1216 12:59:54.825887 2733 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:59:54.826066 kubelet[2733]: E1216 12:59:54.826022 2733 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-69gf9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
whisker-9c6786f75-2gcf6_calico-system(65a51f87-62b9-4a77-b80b-0e907e9d663e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:59:54.827255 kubelet[2733]: E1216 12:59:54.827206 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9c6786f75-2gcf6" podUID="65a51f87-62b9-4a77-b80b-0e907e9d663e" Dec 16 12:59:55.928276 containerd[1589]: time="2025-12-16T12:59:55.928212643Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:59:56.257516 containerd[1589]: time="2025-12-16T12:59:56.257344430Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:59:56.258707 containerd[1589]: time="2025-12-16T12:59:56.258630033Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:59:56.258869 containerd[1589]: time="2025-12-16T12:59:56.258736737Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 16 12:59:56.259123 kubelet[2733]: E1216 12:59:56.259043 2733 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:59:56.259123 kubelet[2733]: E1216 12:59:56.259113 2733 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:59:56.259582 kubelet[2733]: E1216 12:59:56.259278 2733 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6b25q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-8b9b958b-znds7_calico-system(63bfc57c-01f9-4eda-a8ed-3f2fe04dfe53): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:59:56.260876 kubelet[2733]: E1216 12:59:56.260831 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8b9b958b-znds7" podUID="63bfc57c-01f9-4eda-a8ed-3f2fe04dfe53" Dec 16 12:59:56.928595 containerd[1589]: time="2025-12-16T12:59:56.928525359Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:59:57.278973 containerd[1589]: time="2025-12-16T12:59:57.278842382Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:59:57.280040 containerd[1589]: time="2025-12-16T12:59:57.279992055Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:59:57.280130 containerd[1589]: time="2025-12-16T12:59:57.280065245Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 16 12:59:57.280262 kubelet[2733]: E1216 12:59:57.280215 2733 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:59:57.280624 kubelet[2733]: E1216 12:59:57.280271 2733 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:59:57.280624 kubelet[2733]: E1216 12:59:57.280420 2733 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cvnhx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-44tcf_calico-system(4557e1c6-8a81-439a-bd50-8cd81381c4a7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:59:57.281702 kubelet[2733]: E1216 12:59:57.281638 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-44tcf" podUID="4557e1c6-8a81-439a-bd50-8cd81381c4a7" Dec 16 12:59:58.295549 systemd[1]: Started sshd@21-10.0.0.34:22-10.0.0.1:50982.service - OpenSSH per-connection server daemon (10.0.0.1:50982). Dec 16 12:59:58.350860 sshd[5131]: Accepted publickey for core from 10.0.0.1 port 50982 ssh2: RSA SHA256:U5R1V2YL8grSrRz9PVaqQqCOxjm1DLwZWE3rSGcR9eI Dec 16 12:59:58.352644 sshd-session[5131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:59:58.357483 systemd-logind[1561]: New session 22 of user core. Dec 16 12:59:58.372731 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 16 12:59:58.496753 sshd[5134]: Connection closed by 10.0.0.1 port 50982 Dec 16 12:59:58.497342 sshd-session[5131]: pam_unix(sshd:session): session closed for user core Dec 16 12:59:58.501552 systemd[1]: sshd@21-10.0.0.34:22-10.0.0.1:50982.service: Deactivated successfully. Dec 16 12:59:58.504738 systemd[1]: session-22.scope: Deactivated successfully. Dec 16 12:59:58.507734 systemd-logind[1561]: Session 22 logged out. Waiting for processes to exit. Dec 16 12:59:58.510035 systemd-logind[1561]: Removed session 22. 
Dec 16 12:59:59.930151 containerd[1589]: time="2025-12-16T12:59:59.930060257Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:00:00.267249 containerd[1589]: time="2025-12-16T13:00:00.267069012Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:00:00.268463 containerd[1589]: time="2025-12-16T13:00:00.268388135Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:00:00.268463 containerd[1589]: time="2025-12-16T13:00:00.268429514Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 13:00:00.268730 kubelet[2733]: E1216 13:00:00.268652 2733 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:00:00.269164 kubelet[2733]: E1216 13:00:00.268737 2733 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:00:00.269164 kubelet[2733]: E1216 13:00:00.268970 2733 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-92c2m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-766db845cb-wm7wl_calico-apiserver(5bb3cad0-1f8c-4e17-94fc-cb2abb08a3e9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:00:00.270248 kubelet[2733]: E1216 13:00:00.270198 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-766db845cb-wm7wl" podUID="5bb3cad0-1f8c-4e17-94fc-cb2abb08a3e9" Dec 16 13:00:03.520364 systemd[1]: Started sshd@22-10.0.0.34:22-10.0.0.1:33508.service - OpenSSH per-connection server daemon (10.0.0.1:33508). Dec 16 13:00:03.574869 sshd[5147]: Accepted publickey for core from 10.0.0.1 port 33508 ssh2: RSA SHA256:U5R1V2YL8grSrRz9PVaqQqCOxjm1DLwZWE3rSGcR9eI Dec 16 13:00:03.576342 sshd-session[5147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:00:03.580308 systemd-logind[1561]: New session 23 of user core. Dec 16 13:00:03.586620 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 16 13:00:03.707353 sshd[5150]: Connection closed by 10.0.0.1 port 33508 Dec 16 13:00:03.707871 sshd-session[5147]: pam_unix(sshd:session): session closed for user core Dec 16 13:00:03.711359 systemd[1]: sshd@22-10.0.0.34:22-10.0.0.1:33508.service: Deactivated successfully. Dec 16 13:00:03.713487 systemd[1]: session-23.scope: Deactivated successfully. Dec 16 13:00:03.714927 systemd-logind[1561]: Session 23 logged out. Waiting for processes to exit. Dec 16 13:00:03.716464 systemd-logind[1561]: Removed session 23. 
Dec 16 13:00:04.928803 kubelet[2733]: E1216 13:00:04.928740 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jv88p" podUID="326be917-499c-401a-aae1-40840ab247ef"