Dec 12 18:46:23.578316 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 12 15:21:28 -00 2025
Dec 12 18:46:23.578362 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 12 18:46:23.578374 kernel: BIOS-provided physical RAM map:
Dec 12 18:46:23.578383 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000002ffff] usable
Dec 12 18:46:23.578392 kernel: BIOS-e820: [mem 0x0000000000030000-0x000000000004ffff] reserved
Dec 12 18:46:23.578404 kernel: BIOS-e820: [mem 0x0000000000050000-0x000000000009efff] usable
Dec 12 18:46:23.578415 kernel: BIOS-e820: [mem 0x000000000009f000-0x000000000009ffff] reserved
Dec 12 18:46:23.578425 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009b8ecfff] usable
Dec 12 18:46:23.578434 kernel: BIOS-e820: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
Dec 12 18:46:23.578443 kernel: BIOS-e820: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
Dec 12 18:46:23.578457 kernel: BIOS-e820: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
Dec 12 18:46:23.578466 kernel: BIOS-e820: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
Dec 12 18:46:23.578476 kernel: BIOS-e820: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
Dec 12 18:46:23.578485 kernel: BIOS-e820: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
Dec 12 18:46:23.578500 kernel: BIOS-e820: [mem 0x000000009bfb7000-0x000000009bffffff] usable
Dec 12 18:46:23.578511 kernel: BIOS-e820: [mem 0x000000009c000000-0x000000009cffffff] reserved
Dec 12 18:46:23.578521 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Dec 12 18:46:23.578530 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 12 18:46:23.578540 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Dec 12 18:46:23.578553 kernel: NX (Execute Disable) protection: active
Dec 12 18:46:23.578564 kernel: APIC: Static calls initialized
Dec 12 18:46:23.578577 kernel: e820: update [mem 0x9a13f018-0x9a148c57] usable ==> usable
Dec 12 18:46:23.578587 kernel: e820: update [mem 0x9a102018-0x9a13ee57] usable ==> usable
Dec 12 18:46:23.578597 kernel: extended physical RAM map:
Dec 12 18:46:23.578607 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000002ffff] usable
Dec 12 18:46:23.578618 kernel: reserve setup_data: [mem 0x0000000000030000-0x000000000004ffff] reserved
Dec 12 18:46:23.578628 kernel: reserve setup_data: [mem 0x0000000000050000-0x000000000009efff] usable
Dec 12 18:46:23.578638 kernel: reserve setup_data: [mem 0x000000000009f000-0x000000000009ffff] reserved
Dec 12 18:46:23.578648 kernel: reserve setup_data: [mem 0x0000000000100000-0x000000009a102017] usable
Dec 12 18:46:23.578658 kernel: reserve setup_data: [mem 0x000000009a102018-0x000000009a13ee57] usable
Dec 12 18:46:23.578671 kernel: reserve setup_data: [mem 0x000000009a13ee58-0x000000009a13f017] usable
Dec 12 18:46:23.578681 kernel: reserve setup_data: [mem 0x000000009a13f018-0x000000009a148c57] usable
Dec 12 18:46:23.578691 kernel: reserve setup_data: [mem 0x000000009a148c58-0x000000009b8ecfff] usable
Dec 12 18:46:23.578701 kernel: reserve setup_data: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
Dec 12 18:46:23.578712 kernel: reserve setup_data: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
Dec 12 18:46:23.578722 kernel: reserve setup_data: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
Dec 12 18:46:23.578732 kernel: reserve setup_data: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
Dec 12 18:46:23.578742 kernel: reserve setup_data: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
Dec 12 18:46:23.578752 kernel: reserve setup_data: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
Dec 12 18:46:23.578762 kernel: reserve setup_data: [mem 0x000000009bfb7000-0x000000009bffffff] usable
Dec 12 18:46:23.578780 kernel: reserve setup_data: [mem 0x000000009c000000-0x000000009cffffff] reserved
Dec 12 18:46:23.578790 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Dec 12 18:46:23.578801 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 12 18:46:23.578811 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Dec 12 18:46:23.578821 kernel: efi: EFI v2.7 by EDK II
Dec 12 18:46:23.578832 kernel: efi: SMBIOS=0x9b9d5000 ACPI=0x9bb7e000 ACPI 2.0=0x9bb7e014 MEMATTR=0x9a1af018 RNG=0x9bb73018
Dec 12 18:46:23.578846 kernel: random: crng init done
Dec 12 18:46:23.578856 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
Dec 12 18:46:23.578867 kernel: secureboot: Secure boot enabled
Dec 12 18:46:23.578878 kernel: SMBIOS 2.8 present.
Dec 12 18:46:23.578888 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Dec 12 18:46:23.578899 kernel: DMI: Memory slots populated: 1/1
Dec 12 18:46:23.578909 kernel: Hypervisor detected: KVM
Dec 12 18:46:23.578919 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000
Dec 12 18:46:23.578929 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 12 18:46:23.578939 kernel: kvm-clock: using sched offset of 6372871763 cycles
Dec 12 18:46:23.578950 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 12 18:46:23.578964 kernel: tsc: Detected 2794.750 MHz processor
Dec 12 18:46:23.578975 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 12 18:46:23.578985 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 12 18:46:23.578996 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000
Dec 12 18:46:23.579006 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Dec 12 18:46:23.579017 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 12 18:46:23.579027 kernel: Using GB pages for direct mapping
Dec 12 18:46:23.579038 kernel: ACPI: Early table checksum verification disabled
Dec 12 18:46:23.579048 kernel: ACPI: RSDP 0x000000009BB7E014 000024 (v02 BOCHS )
Dec 12 18:46:23.579059 kernel: ACPI: XSDT 0x000000009BB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Dec 12 18:46:23.579073 kernel: ACPI: FACP 0x000000009BB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:46:23.579083 kernel: ACPI: DSDT 0x000000009BB7A000 002237 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:46:23.579094 kernel: ACPI: FACS 0x000000009BBDD000 000040
Dec 12 18:46:23.579104 kernel: ACPI: APIC 0x000000009BB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:46:23.579115 kernel: ACPI: HPET 0x000000009BB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:46:23.579125 kernel: ACPI: MCFG 0x000000009BB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:46:23.579135 kernel: ACPI: WAET 0x000000009BB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:46:23.579146 kernel: ACPI: BGRT 0x000000009BB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Dec 12 18:46:23.579159 kernel: ACPI: Reserving FACP table memory at [mem 0x9bb79000-0x9bb790f3]
Dec 12 18:46:23.579170 kernel: ACPI: Reserving DSDT table memory at [mem 0x9bb7a000-0x9bb7c236]
Dec 12 18:46:23.579180 kernel: ACPI: Reserving FACS table memory at [mem 0x9bbdd000-0x9bbdd03f]
Dec 12 18:46:23.579191 kernel: ACPI: Reserving APIC table memory at [mem 0x9bb78000-0x9bb7808f]
Dec 12 18:46:23.579213 kernel: ACPI: Reserving HPET table memory at [mem 0x9bb77000-0x9bb77037]
Dec 12 18:46:23.579223 kernel: ACPI: Reserving MCFG table memory at [mem 0x9bb76000-0x9bb7603b]
Dec 12 18:46:23.579233 kernel: ACPI: Reserving WAET table memory at [mem 0x9bb75000-0x9bb75027]
Dec 12 18:46:23.579244 kernel: ACPI: Reserving BGRT table memory at [mem 0x9bb74000-0x9bb74037]
Dec 12 18:46:23.579254 kernel: No NUMA configuration found
Dec 12 18:46:23.579268 kernel: Faking a node at [mem 0x0000000000000000-0x000000009bffffff]
Dec 12 18:46:23.579278 kernel: NODE_DATA(0) allocated [mem 0x9bf57dc0-0x9bf5efff]
Dec 12 18:46:23.579289 kernel: Zone ranges:
Dec 12 18:46:23.579300 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 12 18:46:23.579310 kernel: DMA32 [mem 0x0000000001000000-0x000000009bffffff]
Dec 12 18:46:23.579320 kernel: Normal empty
Dec 12 18:46:23.579347 kernel: Device empty
Dec 12 18:46:23.579358 kernel: Movable zone start for each node
Dec 12 18:46:23.579368 kernel: Early memory node ranges
Dec 12 18:46:23.579378 kernel: node 0: [mem 0x0000000000001000-0x000000000002ffff]
Dec 12 18:46:23.579391 kernel: node 0: [mem 0x0000000000050000-0x000000000009efff]
Dec 12 18:46:23.579402 kernel: node 0: [mem 0x0000000000100000-0x000000009b8ecfff]
Dec 12 18:46:23.579412 kernel: node 0: [mem 0x000000009bbff000-0x000000009bfb0fff]
Dec 12 18:46:23.579423 kernel: node 0: [mem 0x000000009bfb7000-0x000000009bffffff]
Dec 12 18:46:23.579433 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009bffffff]
Dec 12 18:46:23.579443 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 12 18:46:23.579453 kernel: On node 0, zone DMA: 32 pages in unavailable ranges
Dec 12 18:46:23.579464 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 12 18:46:23.579475 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Dec 12 18:46:23.579489 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Dec 12 18:46:23.579499 kernel: On node 0, zone DMA32: 16384 pages in unavailable ranges
Dec 12 18:46:23.579510 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 12 18:46:23.579521 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 12 18:46:23.579531 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 12 18:46:23.579542 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 12 18:46:23.579553 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 12 18:46:23.579563 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 12 18:46:23.579574 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 12 18:46:23.579588 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 12 18:46:23.579598 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 12 18:46:23.579609 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 12 18:46:23.579619 kernel: TSC deadline timer available
Dec 12 18:46:23.579630 kernel: CPU topo: Max. logical packages: 1
Dec 12 18:46:23.579640 kernel: CPU topo: Max. logical dies: 1
Dec 12 18:46:23.579660 kernel: CPU topo: Max. dies per package: 1
Dec 12 18:46:23.579673 kernel: CPU topo: Max. threads per core: 1
Dec 12 18:46:23.579685 kernel: CPU topo: Num. cores per package: 4
Dec 12 18:46:23.579696 kernel: CPU topo: Num. threads per package: 4
Dec 12 18:46:23.579707 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Dec 12 18:46:23.579719 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 12 18:46:23.579732 kernel: kvm-guest: KVM setup pv remote TLB flush
Dec 12 18:46:23.579744 kernel: kvm-guest: setup PV sched yield
Dec 12 18:46:23.579754 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Dec 12 18:46:23.579766 kernel: Booting paravirtualized kernel on KVM
Dec 12 18:46:23.579777 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 12 18:46:23.579792 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Dec 12 18:46:23.579803 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Dec 12 18:46:23.579815 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Dec 12 18:46:23.579825 kernel: pcpu-alloc: [0] 0 1 2 3
Dec 12 18:46:23.579837 kernel: kvm-guest: PV spinlocks enabled
Dec 12 18:46:23.579848 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 12 18:46:23.579861 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 12 18:46:23.579873 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 12 18:46:23.579888 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 12 18:46:23.579900 kernel: Fallback order for Node 0: 0
Dec 12 18:46:23.579911 kernel: Built 1 zonelists, mobility grouping on. Total pages: 638054
Dec 12 18:46:23.579922 kernel: Policy zone: DMA32
Dec 12 18:46:23.579934 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 12 18:46:23.579945 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 12 18:46:23.579956 kernel: ftrace: allocating 40103 entries in 157 pages
Dec 12 18:46:23.579968 kernel: ftrace: allocated 157 pages with 5 groups
Dec 12 18:46:23.579979 kernel: Dynamic Preempt: voluntary
Dec 12 18:46:23.579994 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 12 18:46:23.580010 kernel: rcu: RCU event tracing is enabled.
Dec 12 18:46:23.580022 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 12 18:46:23.580033 kernel: Trampoline variant of Tasks RCU enabled.
Dec 12 18:46:23.580044 kernel: Rude variant of Tasks RCU enabled.
Dec 12 18:46:23.580055 kernel: Tracing variant of Tasks RCU enabled.
Dec 12 18:46:23.580066 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 12 18:46:23.580078 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 12 18:46:23.580089 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 12 18:46:23.580103 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 12 18:46:23.580114 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 12 18:46:23.580125 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Dec 12 18:46:23.580136 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 12 18:46:23.580147 kernel: Console: colour dummy device 80x25
Dec 12 18:46:23.580158 kernel: printk: legacy console [ttyS0] enabled
Dec 12 18:46:23.580169 kernel: ACPI: Core revision 20240827
Dec 12 18:46:23.580180 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Dec 12 18:46:23.580191 kernel: APIC: Switch to symmetric I/O mode setup
Dec 12 18:46:23.580223 kernel: x2apic enabled
Dec 12 18:46:23.580234 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 12 18:46:23.580245 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Dec 12 18:46:23.580257 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Dec 12 18:46:23.580268 kernel: kvm-guest: setup PV IPIs
Dec 12 18:46:23.580279 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 12 18:46:23.580291 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Dec 12 18:46:23.580302 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Dec 12 18:46:23.580314 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 12 18:46:23.580345 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 12 18:46:23.580356 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 12 18:46:23.580369 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 12 18:46:23.580381 kernel: Spectre V2 : Mitigation: Retpolines
Dec 12 18:46:23.580394 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec 12 18:46:23.580405 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec 12 18:46:23.580416 kernel: active return thunk: retbleed_return_thunk
Dec 12 18:46:23.580427 kernel: RETBleed: Mitigation: untrained return thunk
Dec 12 18:46:23.580438 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 12 18:46:23.580453 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 12 18:46:23.580463 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec 12 18:46:23.580475 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec 12 18:46:23.580486 kernel: active return thunk: srso_return_thunk
Dec 12 18:46:23.580497 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec 12 18:46:23.580508 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 12 18:46:23.580520 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 12 18:46:23.580531 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 12 18:46:23.580546 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 12 18:46:23.580557 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec 12 18:46:23.580569 kernel: Freeing SMP alternatives memory: 32K
Dec 12 18:46:23.580579 kernel: pid_max: default: 32768 minimum: 301
Dec 12 18:46:23.580589 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 12 18:46:23.580600 kernel: landlock: Up and running.
Dec 12 18:46:23.580611 kernel: SELinux: Initializing.
Dec 12 18:46:23.580623 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 12 18:46:23.580634 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 12 18:46:23.580649 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec 12 18:46:23.580660 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 12 18:46:23.580671 kernel: ... version: 0
Dec 12 18:46:23.580682 kernel: ... bit width: 48
Dec 12 18:46:23.580694 kernel: ... generic registers: 6
Dec 12 18:46:23.580705 kernel: ... value mask: 0000ffffffffffff
Dec 12 18:46:23.580717 kernel: ... max period: 00007fffffffffff
Dec 12 18:46:23.580728 kernel: ... fixed-purpose events: 0
Dec 12 18:46:23.580739 kernel: ... event mask: 000000000000003f
Dec 12 18:46:23.580754 kernel: signal: max sigframe size: 1776
Dec 12 18:46:23.580766 kernel: rcu: Hierarchical SRCU implementation.
Dec 12 18:46:23.580778 kernel: rcu: Max phase no-delay instances is 400.
Dec 12 18:46:23.580790 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 12 18:46:23.580802 kernel: smp: Bringing up secondary CPUs ...
Dec 12 18:46:23.580813 kernel: smpboot: x86: Booting SMP configuration:
Dec 12 18:46:23.580825 kernel: .... node #0, CPUs: #1 #2 #3
Dec 12 18:46:23.580836 kernel: smp: Brought up 1 node, 4 CPUs
Dec 12 18:46:23.580848 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Dec 12 18:46:23.580863 kernel: Memory: 2401024K/2552216K available (14336K kernel code, 2444K rwdata, 26064K rodata, 46188K init, 2572K bss, 145256K reserved, 0K cma-reserved)
Dec 12 18:46:23.580874 kernel: devtmpfs: initialized
Dec 12 18:46:23.580886 kernel: x86/mm: Memory block size: 128MB
Dec 12 18:46:23.580897 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bb7f000-0x9bbfefff] (524288 bytes)
Dec 12 18:46:23.580909 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bfb5000-0x9bfb6fff] (8192 bytes)
Dec 12 18:46:23.580920 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 12 18:46:23.580932 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Dec 12 18:46:23.580943 kernel: pinctrl core: initialized pinctrl subsystem
Dec 12 18:46:23.580955 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 12 18:46:23.580969 kernel: audit: initializing netlink subsys (disabled)
Dec 12 18:46:23.580981 kernel: audit: type=2000 audit(1765565178.547:1): state=initialized audit_enabled=0 res=1
Dec 12 18:46:23.580992 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 12 18:46:23.581003 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 12 18:46:23.581015 kernel: cpuidle: using governor menu
Dec 12 18:46:23.581026 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 12 18:46:23.581037 kernel: dca service started, version 1.12.1
Dec 12 18:46:23.581049 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Dec 12 18:46:23.581061 kernel: PCI: Using configuration type 1 for base access
Dec 12 18:46:23.581076 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 12 18:46:23.581088 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 12 18:46:23.581099 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 12 18:46:23.581111 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 12 18:46:23.581122 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 12 18:46:23.581133 kernel: ACPI: Added _OSI(Module Device)
Dec 12 18:46:23.581145 kernel: ACPI: Added _OSI(Processor Device)
Dec 12 18:46:23.581156 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 12 18:46:23.581168 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 12 18:46:23.581183 kernel: ACPI: Interpreter enabled
Dec 12 18:46:23.581208 kernel: ACPI: PM: (supports S0 S5)
Dec 12 18:46:23.581220 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 12 18:46:23.581232 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 12 18:46:23.581243 kernel: PCI: Using E820 reservations for host bridge windows
Dec 12 18:46:23.581255 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 12 18:46:23.581267 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 12 18:46:23.581555 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 12 18:46:23.581728 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Dec 12 18:46:23.581886 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Dec 12 18:46:23.581903 kernel: PCI host bridge to bus 0000:00
Dec 12 18:46:23.582065 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 12 18:46:23.582223 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 12 18:46:23.582417 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 12 18:46:23.582569 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Dec 12 18:46:23.582713 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Dec 12 18:46:23.582857 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Dec 12 18:46:23.583009 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 12 18:46:23.583224 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Dec 12 18:46:23.583438 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Dec 12 18:46:23.583600 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Dec 12 18:46:23.583774 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Dec 12 18:46:23.583939 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Dec 12 18:46:23.584101 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 12 18:46:23.584306 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Dec 12 18:46:23.584526 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Dec 12 18:46:23.584690 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Dec 12 18:46:23.584848 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Dec 12 18:46:23.585044 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Dec 12 18:46:23.585210 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Dec 12 18:46:23.585404 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Dec 12 18:46:23.585569 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Dec 12 18:46:23.585746 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec 12 18:46:23.585907 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Dec 12 18:46:23.586063 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Dec 12 18:46:23.586238 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Dec 12 18:46:23.586453 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Dec 12 18:46:23.586627 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Dec 12 18:46:23.586787 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 12 18:46:23.586955 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Dec 12 18:46:23.587116 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Dec 12 18:46:23.587283 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Dec 12 18:46:23.587475 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Dec 12 18:46:23.587636 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Dec 12 18:46:23.587655 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 12 18:46:23.587667 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 12 18:46:23.587679 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 12 18:46:23.587691 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 12 18:46:23.587702 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 12 18:46:23.587714 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 12 18:46:23.587730 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 12 18:46:23.587742 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 12 18:46:23.587754 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 12 18:46:23.587766 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 12 18:46:23.587777 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 12 18:46:23.587789 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 12 18:46:23.587801 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 12 18:46:23.587813 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 12 18:46:23.587828 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 12 18:46:23.587839 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 12 18:46:23.587851 kernel: iommu: Default domain type: Translated
Dec 12 18:46:23.587862 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 12 18:46:23.587874 kernel: efivars: Registered efivars operations
Dec 12 18:46:23.587886 kernel: PCI: Using ACPI for IRQ routing
Dec 12 18:46:23.587898 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 12 18:46:23.587910 kernel: e820: reserve RAM buffer [mem 0x0009f000-0x0009ffff]
Dec 12 18:46:23.587921 kernel: e820: reserve RAM buffer [mem 0x9a102018-0x9bffffff]
Dec 12 18:46:23.587932 kernel: e820: reserve RAM buffer [mem 0x9a13f018-0x9bffffff]
Dec 12 18:46:23.587947 kernel: e820: reserve RAM buffer [mem 0x9b8ed000-0x9bffffff]
Dec 12 18:46:23.587959 kernel: e820: reserve RAM buffer [mem 0x9bfb1000-0x9bffffff]
Dec 12 18:46:23.588120 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 12 18:46:23.588296 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 12 18:46:23.588476 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 12 18:46:23.588491 kernel: vgaarb: loaded
Dec 12 18:46:23.588502 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Dec 12 18:46:23.588512 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Dec 12 18:46:23.588527 kernel: clocksource: Switched to clocksource kvm-clock
Dec 12 18:46:23.588538 kernel: VFS: Disk quotas dquot_6.6.0
Dec 12 18:46:23.588549 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 12 18:46:23.588559 kernel: pnp: PnP ACPI init
Dec 12 18:46:23.588728 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Dec 12 18:46:23.588745 kernel: pnp: PnP ACPI: found 6 devices
Dec 12 18:46:23.588756 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 12 18:46:23.588766 kernel: NET: Registered PF_INET protocol family
Dec 12 18:46:23.588780 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 12 18:46:23.588791 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 12 18:46:23.588801 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 12 18:46:23.588812 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 12 18:46:23.588822 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 12 18:46:23.588832 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 12 18:46:23.588843 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 12 18:46:23.588854 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 12 18:46:23.588864 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 12 18:46:23.588879 kernel: NET: Registered PF_XDP protocol family
Dec 12 18:46:23.589027 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Dec 12 18:46:23.589166 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Dec 12 18:46:23.589311 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 12 18:46:23.589464 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 12 18:46:23.589600 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 12 18:46:23.589741 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Dec 12 18:46:23.589915 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Dec 12 18:46:23.590086 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Dec 12 18:46:23.590102 kernel: PCI: CLS 0 bytes, default 64
Dec 12 18:46:23.590113 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Dec 12 18:46:23.590124 kernel: Initialise system trusted keyrings
Dec 12 18:46:23.590135 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 12 18:46:23.590145 kernel: Key type asymmetric registered
Dec 12 18:46:23.590167 kernel: Asymmetric key parser 'x509' registered
Dec 12 18:46:23.590217 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Dec 12 18:46:23.590234 kernel: io scheduler mq-deadline registered
Dec 12 18:46:23.590245 kernel: io scheduler kyber registered
Dec 12 18:46:23.590256 kernel: io scheduler bfq registered
Dec 12 18:46:23.590267 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 12 18:46:23.590280 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Dec 12 18:46:23.590292 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Dec 12 18:46:23.590304 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Dec 12 18:46:23.590315 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 12 18:46:23.590342 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 12 18:46:23.590354 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 12 18:46:23.590368 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 12 18:46:23.590380 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 12 18:46:23.590540 kernel: rtc_cmos 00:04: RTC can wake from S4
Dec 12 18:46:23.590558 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 12 18:46:23.590697 kernel: rtc_cmos 00:04: registered as rtc0
Dec 12 18:46:23.590839 kernel: rtc_cmos 00:04: setting system clock to 2025-12-12T18:46:22 UTC (1765565182)
Dec 12 18:46:23.590986 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Dec 12 18:46:23.591009 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec 12 18:46:23.591021 kernel: efifb: probing for efifb
Dec 12 18:46:23.591033 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Dec 12 18:46:23.591044 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Dec 12 18:46:23.591059 kernel: efifb: scrolling: redraw
Dec 12 18:46:23.591072 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Dec 12 18:46:23.591084 kernel: Console: switching to colour frame buffer device 160x50
Dec 12 18:46:23.591099 kernel: fb0: EFI VGA frame buffer device
Dec 12 18:46:23.591110 kernel: pstore: Using crash dump compression: deflate
Dec 12 18:46:23.591121 kernel: pstore: Registered efi_pstore as persistent store backend
Dec 12 18:46:23.591132 kernel: NET: Registered PF_INET6 protocol family
Dec 12 18:46:23.591143 kernel: Segment Routing with IPv6
Dec 12 18:46:23.591157 kernel: In-situ OAM (IOAM) with IPv6
Dec 12 18:46:23.591172 kernel: NET: Registered PF_PACKET protocol family
Dec 12 18:46:23.591183 kernel: Key type dns_resolver registered
Dec 12 18:46:23.591208 kernel: IPI shorthand broadcast: enabled
Dec 12 18:46:23.591219 kernel: sched_clock: Marking stable (4377007874, 455095013)->(4968598851, -136495964)
Dec 12 18:46:23.591229 kernel: registered taskstats version 1
Dec 12 18:46:23.591240 kernel: Loading compiled-in X.509 certificates
Dec 12 18:46:23.591252 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 0d0c78e6590cb40d27f1cef749ef9f2f3425f38d'
Dec 12 18:46:23.591263 kernel: Demotion targets for Node 0: null
Dec 12 18:46:23.591280 kernel: Key type .fscrypt registered
Dec 12 18:46:23.591292 kernel: Key type fscrypt-provisioning registered
Dec 12 18:46:23.591303 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 12 18:46:23.591318 kernel: ima: Allocated hash algorithm: sha1
Dec 12 18:46:23.591388 kernel: ima: No architecture policies found
Dec 12 18:46:23.591401 kernel: clk: Disabling unused clocks
Dec 12 18:46:23.591412 kernel: Warning: unable to open an initial console.
Dec 12 18:46:23.591423 kernel: Freeing unused kernel image (initmem) memory: 46188K
Dec 12 18:46:23.591434 kernel: Write protecting the kernel read-only data: 40960k
Dec 12 18:46:23.591445 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Dec 12 18:46:23.591456 kernel: Run /init as init process
Dec 12 18:46:23.591470 kernel: with arguments:
Dec 12 18:46:23.591490 kernel: /init
Dec 12 18:46:23.591501 kernel: with environment:
Dec 12 18:46:23.591512 kernel: HOME=/
Dec 12 18:46:23.591522 kernel: TERM=linux
Dec 12 18:46:23.591535 systemd[1]: Successfully made /usr/ read-only.
Dec 12 18:46:23.591551 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 12 18:46:23.591564 systemd[1]: Detected virtualization kvm.
Dec 12 18:46:23.591586 systemd[1]: Detected architecture x86-64.
Dec 12 18:46:23.591598 systemd[1]: Running in initrd.
Dec 12 18:46:23.591609 systemd[1]: No hostname configured, using default hostname.
Dec 12 18:46:23.591622 systemd[1]: Hostname set to <localhost>.
Dec 12 18:46:23.591633 systemd[1]: Initializing machine ID from VM UUID.
Dec 12 18:46:23.591645 systemd[1]: Queued start job for default target initrd.target.
Dec 12 18:46:23.591657 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 12 18:46:23.591670 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 12 18:46:23.591693 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 12 18:46:23.591706 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 12 18:46:23.591718 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 12 18:46:23.591731 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 12 18:46:23.591744 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 12 18:46:23.591757 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 12 18:46:23.591769 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 12 18:46:23.591792 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 12 18:46:23.591804 systemd[1]: Reached target paths.target - Path Units.
Dec 12 18:46:23.591874 systemd[1]: Reached target slices.target - Slice Units.
Dec 12 18:46:23.591891 systemd[1]: Reached target swap.target - Swaps.
Dec 12 18:46:23.591902 systemd[1]: Reached target timers.target - Timer Units.
Dec 12 18:46:23.591913 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 12 18:46:23.591925 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 12 18:46:23.591938 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 12 18:46:23.591995 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Dec 12 18:46:23.592008 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 12 18:46:23.592019 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 12 18:46:23.592031 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 12 18:46:23.592043 systemd[1]: Reached target sockets.target - Socket Units.
Dec 12 18:46:23.592055 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 12 18:46:23.592067 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 12 18:46:23.592082 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 12 18:46:23.592098 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Dec 12 18:46:23.592114 systemd[1]: Starting systemd-fsck-usr.service...
Dec 12 18:46:23.592125 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 12 18:46:23.592137 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 12 18:46:23.592148 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 18:46:23.592160 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 12 18:46:23.592173 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 12 18:46:23.592207 systemd[1]: Finished systemd-fsck-usr.service.
Dec 12 18:46:23.592220 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 12 18:46:23.592266 systemd-journald[203]: Collecting audit messages is disabled.
Dec 12 18:46:23.592314 systemd-journald[203]: Journal started
Dec 12 18:46:23.592389 systemd-journald[203]: Runtime Journal (/run/log/journal/f7742793b6e548978681dc45582b52cc) is 5.9M, max 47.9M, 41.9M free.
Dec 12 18:46:23.590267 systemd-modules-load[204]: Inserted module 'overlay'
Dec 12 18:46:23.599647 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 12 18:46:23.600570 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 18:46:23.610755 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 12 18:46:23.631479 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 12 18:46:23.643406 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 12 18:46:23.659683 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 12 18:46:23.664220 systemd-modules-load[204]: Inserted module 'br_netfilter'
Dec 12 18:46:23.666223 kernel: Bridge firewalling registered
Dec 12 18:46:23.676673 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 12 18:46:23.678303 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 12 18:46:23.683475 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 12 18:46:23.705413 systemd-tmpfiles[223]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Dec 12 18:46:23.707247 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 12 18:46:23.718250 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 12 18:46:23.722220 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 12 18:46:23.728112 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 12 18:46:23.733142 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 12 18:46:23.756310 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 12 18:46:23.786778 dracut-cmdline[241]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 12 18:46:23.839642 systemd-resolved[243]: Positive Trust Anchors:
Dec 12 18:46:23.839664 systemd-resolved[243]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 12 18:46:23.839705 systemd-resolved[243]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 12 18:46:23.843160 systemd-resolved[243]: Defaulting to hostname 'linux'.
Dec 12 18:46:23.844710 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 12 18:46:23.847171 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 12 18:46:23.948397 kernel: SCSI subsystem initialized
Dec 12 18:46:23.959396 kernel: Loading iSCSI transport class v2.0-870.
Dec 12 18:46:23.972399 kernel: iscsi: registered transport (tcp)
Dec 12 18:46:24.003365 kernel: iscsi: registered transport (qla4xxx)
Dec 12 18:46:24.003434 kernel: QLogic iSCSI HBA Driver
Dec 12 18:46:24.027677 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 12 18:46:24.049775 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 12 18:46:24.050397 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 12 18:46:24.122748 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 12 18:46:24.127002 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 12 18:46:24.189385 kernel: raid6: avx2x4 gen() 26882 MB/s
Dec 12 18:46:24.206396 kernel: raid6: avx2x2 gen() 28683 MB/s
Dec 12 18:46:24.224649 kernel: raid6: avx2x1 gen() 23857 MB/s
Dec 12 18:46:24.224748 kernel: raid6: using algorithm avx2x2 gen() 28683 MB/s
Dec 12 18:46:24.243419 kernel: raid6: .... xor() 15307 MB/s, rmw enabled
Dec 12 18:46:24.243511 kernel: raid6: using avx2x2 recovery algorithm
Dec 12 18:46:24.283388 kernel: xor: automatically using best checksumming function avx
Dec 12 18:46:24.461388 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 12 18:46:24.472531 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 12 18:46:24.478889 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 12 18:46:24.514451 systemd-udevd[454]: Using default interface naming scheme 'v255'.
Dec 12 18:46:24.520456 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 12 18:46:24.521598 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 12 18:46:24.547426 dracut-pre-trigger[458]: rd.md=0: removing MD RAID activation
Dec 12 18:46:24.584229 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 12 18:46:24.588623 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 12 18:46:24.676989 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 12 18:46:24.682815 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 12 18:46:24.718367 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Dec 12 18:46:24.724396 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Dec 12 18:46:24.730376 kernel: cryptd: max_cpu_qlen set to 1000
Dec 12 18:46:24.730393 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 12 18:46:24.730407 kernel: GPT:9289727 != 19775487
Dec 12 18:46:24.734689 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 12 18:46:24.734823 kernel: GPT:9289727 != 19775487
Dec 12 18:46:24.734853 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 12 18:46:24.734878 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 12 18:46:24.757354 kernel: libata version 3.00 loaded.
Dec 12 18:46:24.768366 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Dec 12 18:46:24.770369 kernel: AES CTR mode by8 optimization enabled
Dec 12 18:46:24.775610 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 12 18:46:24.775776 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 18:46:24.779055 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 18:46:24.787427 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 18:46:24.795459 kernel: ahci 0000:00:1f.2: version 3.0
Dec 12 18:46:24.800406 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Dec 12 18:46:24.799370 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Dec 12 18:46:24.816359 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Dec 12 18:46:24.816563 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Dec 12 18:46:24.816731 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Dec 12 18:46:24.820345 kernel: scsi host0: ahci
Dec 12 18:46:24.823346 kernel: scsi host1: ahci
Dec 12 18:46:24.827417 kernel: scsi host2: ahci
Dec 12 18:46:24.829348 kernel: scsi host3: ahci
Dec 12 18:46:24.830350 kernel: scsi host4: ahci
Dec 12 18:46:24.832067 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Dec 12 18:46:24.835378 kernel: scsi host5: ahci
Dec 12 18:46:24.835741 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 1
Dec 12 18:46:24.835758 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 1
Dec 12 18:46:24.835771 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 1
Dec 12 18:46:24.841245 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 1
Dec 12 18:46:24.841337 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 1
Dec 12 18:46:24.846000 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 1
Dec 12 18:46:24.887123 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Dec 12 18:46:24.903067 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 12 18:46:24.910751 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Dec 12 18:46:24.910849 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Dec 12 18:46:24.916228 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 12 18:46:24.921743 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 12 18:46:24.921807 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 18:46:24.932100 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 18:46:24.946285 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 18:46:24.951116 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Dec 12 18:46:24.956952 disk-uuid[616]: Primary Header is updated.
Dec 12 18:46:24.956952 disk-uuid[616]: Secondary Entries is updated.
Dec 12 18:46:24.956952 disk-uuid[616]: Secondary Header is updated.
Dec 12 18:46:24.964374 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 12 18:46:24.969368 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 12 18:46:24.999734 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 18:46:25.160181 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Dec 12 18:46:25.160265 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Dec 12 18:46:25.160281 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Dec 12 18:46:25.161404 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Dec 12 18:46:25.163501 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Dec 12 18:46:25.164382 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Dec 12 18:46:25.167032 kernel: ata3.00: LPM support broken, forcing max_power
Dec 12 18:46:25.167066 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec 12 18:46:25.168262 kernel: ata3.00: applying bridge limits
Dec 12 18:46:25.169368 kernel: ata3.00: LPM support broken, forcing max_power
Dec 12 18:46:25.171608 kernel: ata3.00: configured for UDMA/100
Dec 12 18:46:25.172367 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Dec 12 18:46:25.219150 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec 12 18:46:25.219509 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 12 18:46:25.243369 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Dec 12 18:46:25.534601 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 12 18:46:25.535253 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 12 18:46:25.540261 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 12 18:46:25.544086 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 12 18:46:25.548926 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 12 18:46:25.573610 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 12 18:46:26.007371 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 12 18:46:26.007679 disk-uuid[617]: The operation has completed successfully.
Dec 12 18:46:26.039178 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 12 18:46:26.039343 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 12 18:46:26.088113 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 12 18:46:26.165825 sh[652]: Success
Dec 12 18:46:26.185149 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 12 18:46:26.185198 kernel: device-mapper: uevent: version 1.0.3
Dec 12 18:46:26.186868 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Dec 12 18:46:26.196349 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Dec 12 18:46:26.227393 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 12 18:46:26.236666 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 12 18:46:26.255505 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 12 18:46:26.263944 kernel: BTRFS: device fsid a6ae7f96-a076-4d3c-81ed-46dd341492f8 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (664)
Dec 12 18:46:26.263987 kernel: BTRFS info (device dm-0): first mount of filesystem a6ae7f96-a076-4d3c-81ed-46dd341492f8
Dec 12 18:46:26.264013 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 12 18:46:26.270800 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 12 18:46:26.270834 kernel: BTRFS info (device dm-0): enabling free space tree
Dec 12 18:46:26.272207 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 12 18:46:26.272826 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Dec 12 18:46:26.275396 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 12 18:46:26.276211 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 12 18:46:26.303718 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 12 18:46:26.325358 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (697)
Dec 12 18:46:26.325403 kernel: BTRFS info (device vda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 12 18:46:26.328873 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 12 18:46:26.333370 kernel: BTRFS info (device vda6): turning on async discard
Dec 12 18:46:26.333394 kernel: BTRFS info (device vda6): enabling free space tree
Dec 12 18:46:26.347346 kernel: BTRFS info (device vda6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 12 18:46:26.347855 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 12 18:46:26.352598 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 12 18:46:26.419221 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 12 18:46:26.429668 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 12 18:46:26.462245 ignition[762]: Ignition 2.22.0
Dec 12 18:46:26.462261 ignition[762]: Stage: fetch-offline
Dec 12 18:46:26.462300 ignition[762]: no configs at "/usr/lib/ignition/base.d"
Dec 12 18:46:26.462312 ignition[762]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 12 18:46:26.462442 ignition[762]: parsed url from cmdline: ""
Dec 12 18:46:26.462448 ignition[762]: no config URL provided
Dec 12 18:46:26.462455 ignition[762]: reading system config file "/usr/lib/ignition/user.ign"
Dec 12 18:46:26.462467 ignition[762]: no config at "/usr/lib/ignition/user.ign"
Dec 12 18:46:26.462495 ignition[762]: op(1): [started] loading QEMU firmware config module
Dec 12 18:46:26.462502 ignition[762]: op(1): executing: "modprobe" "qemu_fw_cfg"
Dec 12 18:46:26.475296 ignition[762]: op(1): [finished] loading QEMU firmware config module
Dec 12 18:46:26.485828 systemd-networkd[833]: lo: Link UP
Dec 12 18:46:26.485842 systemd-networkd[833]: lo: Gained carrier
Dec 12 18:46:26.487747 systemd-networkd[833]: Enumeration completed
Dec 12 18:46:26.487861 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 12 18:46:26.489723 systemd-networkd[833]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 12 18:46:26.489729 systemd-networkd[833]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 12 18:46:26.490346 systemd-networkd[833]: eth0: Link UP
Dec 12 18:46:26.492994 systemd-networkd[833]: eth0: Gained carrier
Dec 12 18:46:26.493006 systemd-networkd[833]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 12 18:46:26.493604 systemd[1]: Reached target network.target - Network.
Dec 12 18:46:26.521394 systemd-networkd[833]: eth0: DHCPv4 address 10.0.0.117/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 12 18:46:26.588687 ignition[762]: parsing config with SHA512: 1cf86a6ad4a8b5ee9d7ef7da5812ce7cc9850f12a6898b6927578193c3be268fba20baceafe6814e8792850371afe6a87c2a048cb1aa7ab5956cde008787aa84
Dec 12 18:46:26.595368 unknown[762]: fetched base config from "system"
Dec 12 18:46:26.595382 unknown[762]: fetched user config from "qemu"
Dec 12 18:46:26.595710 ignition[762]: fetch-offline: fetch-offline passed
Dec 12 18:46:26.595768 ignition[762]: Ignition finished successfully
Dec 12 18:46:26.600513 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 12 18:46:26.604239 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Dec 12 18:46:26.605247 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 12 18:46:26.651265 ignition[846]: Ignition 2.22.0
Dec 12 18:46:26.651280 ignition[846]: Stage: kargs
Dec 12 18:46:26.651476 ignition[846]: no configs at "/usr/lib/ignition/base.d"
Dec 12 18:46:26.651488 ignition[846]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 12 18:46:26.652582 ignition[846]: kargs: kargs passed
Dec 12 18:46:26.652636 ignition[846]: Ignition finished successfully
Dec 12 18:46:26.659099 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 12 18:46:26.679009 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 12 18:46:26.723848 ignition[854]: Ignition 2.22.0
Dec 12 18:46:26.723862 ignition[854]: Stage: disks
Dec 12 18:46:26.724064 ignition[854]: no configs at "/usr/lib/ignition/base.d"
Dec 12 18:46:26.724078 ignition[854]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 12 18:46:26.725214 ignition[854]: disks: disks passed
Dec 12 18:46:26.725266 ignition[854]: Ignition finished successfully
Dec 12 18:46:26.735396 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 12 18:46:26.737536 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 12 18:46:26.741178 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 12 18:46:26.745559 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 12 18:46:26.745965 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 12 18:46:26.754558 systemd[1]: Reached target basic.target - Basic System.
Dec 12 18:46:26.755900 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 12 18:46:26.784001 systemd-fsck[864]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Dec 12 18:46:26.850287 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 12 18:46:26.856197 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 12 18:46:26.969383 kernel: EXT4-fs (vda9): mounted filesystem e48ca59c-1206-4abd-b121-5e9b35e49852 r/w with ordered data mode. Quota mode: none.
Dec 12 18:46:26.970729 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 12 18:46:26.973074 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 12 18:46:26.977433 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 12 18:46:26.980877 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 12 18:46:26.983372 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
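The fetch-offline entries above show Ignition loading the qemu_fw_cfg module and then reporting that it fetched the user config from "qemu": on this platform the config is injected through QEMU's firmware config device rather than fetched from a metadata service. A minimal sketch of how such a VM might be launched is shown below; the file name config.ign and the rest of the command line are assumptions, not taken from this log, and the fw_cfg key shown is the one Flatcar's documentation uses:

    qemu-system-x86_64 \
        -fw_cfg name=opt/org.flatcar-linux/config,file=./config.ign \
        ...   # remaining machine options elided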
Dec 12 18:46:26.983429 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 12 18:46:26.983459 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 12 18:46:27.004410 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (872)
Dec 12 18:46:27.004439 kernel: BTRFS info (device vda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 12 18:46:27.004461 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 12 18:46:26.995049 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 12 18:46:27.009078 kernel: BTRFS info (device vda6): turning on async discard
Dec 12 18:46:27.009095 kernel: BTRFS info (device vda6): enabling free space tree
Dec 12 18:46:27.006478 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 12 18:46:27.014230 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 12 18:46:27.050667 initrd-setup-root[896]: cut: /sysroot/etc/passwd: No such file or directory
Dec 12 18:46:27.056988 initrd-setup-root[903]: cut: /sysroot/etc/group: No such file or directory
Dec 12 18:46:27.062340 initrd-setup-root[910]: cut: /sysroot/etc/shadow: No such file or directory
Dec 12 18:46:27.066691 initrd-setup-root[917]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 12 18:46:27.177725 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 12 18:46:27.182894 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 12 18:46:27.186959 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 12 18:46:27.211362 kernel: BTRFS info (device vda6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 12 18:46:27.228533 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 12 18:46:27.253402 ignition[986]: INFO : Ignition 2.22.0
Dec 12 18:46:27.253402 ignition[986]: INFO : Stage: mount
Dec 12 18:46:27.256754 ignition[986]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 12 18:46:27.256754 ignition[986]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 12 18:46:27.256754 ignition[986]: INFO : mount: mount passed
Dec 12 18:46:27.256754 ignition[986]: INFO : Ignition finished successfully
Dec 12 18:46:27.268050 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 12 18:46:27.268649 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 12 18:46:27.272471 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 12 18:46:27.299884 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 12 18:46:27.326881 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (998)
Dec 12 18:46:27.326948 kernel: BTRFS info (device vda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 12 18:46:27.326965 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 12 18:46:27.332713 kernel: BTRFS info (device vda6): turning on async discard
Dec 12 18:46:27.332783 kernel: BTRFS info (device vda6): enabling free space tree
Dec 12 18:46:27.335140 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 12 18:46:27.382078 ignition[1015]: INFO : Ignition 2.22.0
Dec 12 18:46:27.382078 ignition[1015]: INFO : Stage: files
Dec 12 18:46:27.385292 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 12 18:46:27.385292 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 12 18:46:27.390195 ignition[1015]: DEBUG : files: compiled without relabeling support, skipping
Dec 12 18:46:27.392585 ignition[1015]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 12 18:46:27.392585 ignition[1015]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 12 18:46:27.400546 ignition[1015]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 12 18:46:27.404311 ignition[1015]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 12 18:46:27.404311 ignition[1015]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 12 18:46:27.404311 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Dec 12 18:46:27.404311 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Dec 12 18:46:27.401614 unknown[1015]: wrote ssh authorized keys file for user: core
Dec 12 18:46:27.447769 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 12 18:46:27.525264 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Dec 12 18:46:27.529139 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Dec 12 18:46:27.529139 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 12 18:46:27.529139 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 12 18:46:27.529139 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 12 18:46:27.529139 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 12 18:46:27.529139 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 12 18:46:27.529139 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 12 18:46:27.529139 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 12 18:46:27.554486 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 12 18:46:27.554486 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 12 18:46:27.554486 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Dec 12 18:46:27.554486 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Dec 12 18:46:27.554486 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Dec 12 18:46:27.554486 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Dec 12 18:46:27.534479 systemd-networkd[833]: eth0: Gained IPv6LL
Dec 12 18:46:27.994791 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Dec 12 18:46:28.525403 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Dec 12 18:46:28.525403 ignition[1015]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Dec 12 18:46:28.531406 ignition[1015]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 12 18:46:28.544016 ignition[1015]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 12 18:46:28.544016 ignition[1015]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Dec 12 18:46:28.544016 ignition[1015]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Dec 12 18:46:28.544016 ignition[1015]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 12 18:46:28.554805 ignition[1015]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 12 18:46:28.554805 ignition[1015]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Dec 12 18:46:28.554805 ignition[1015]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Dec 12 18:46:28.580284 ignition[1015]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Dec 12 18:46:28.591303 ignition[1015]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Dec 12 18:46:28.594123 ignition[1015]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Dec 12 18:46:28.594123 ignition[1015]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Dec 12 18:46:28.594123 ignition[1015]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Dec 12 18:46:28.594123 ignition[1015]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 12 18:46:28.594123 ignition[1015]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 12 18:46:28.594123 ignition[1015]: INFO : files: files passed
Dec 12 18:46:28.594123 ignition[1015]: INFO : Ignition finished successfully
Dec 12 18:46:28.600639 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 12 18:46:28.607727 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 12 18:46:28.613992 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
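The files-stage operations above map one-to-one onto directives in the user-supplied Ignition config. A trimmed, hypothetical sketch of a config that would produce operations like op(3), op(9), op(a), and op(b)/op(11) follows; the spec version, the elided SSH key, and the prepare-helm.service unit body are assumptions (the unit body is not recoverable from the log), and several files from the log are omitted for brevity:

    {
      "ignition": { "version": "3.5.0" },
      "passwd": {
        "users": [
          { "name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... (placeholder)"] }
        ]
      },
      "storage": {
        "files": [
          { "path": "/opt/helm-v3.17.0-linux-amd64.tar.gz",
            "contents": { "source": "https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz" } },
          { "path": "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw",
            "contents": { "source": "https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw" } }
        ],
        "links": [
          { "path": "/etc/extensions/kubernetes.raw",
            "target": "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" }
        ]
      },
      "systemd": {
        "units": [
          { "name": "prepare-helm.service", "enabled": true, "contents": "[Unit]\n..." },
          { "name": "coreos-metadata.service", "enabled": false }
        ]
      }
    }

The enabled/disabled flags correspond to the op(f) and op(11) preset entries in the log: Ignition removes or creates the enablement symlinks rather than starting anything itself.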
Dec 12 18:46:28.631411 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 12 18:46:28.631549 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 12 18:46:28.633279 initrd-setup-root-after-ignition[1044]: grep: /sysroot/oem/oem-release: No such file or directory
Dec 12 18:46:28.634943 initrd-setup-root-after-ignition[1046]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 12 18:46:28.634943 initrd-setup-root-after-ignition[1046]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 12 18:46:28.644926 initrd-setup-root-after-ignition[1050]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 12 18:46:28.646221 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 12 18:46:28.649954 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 12 18:46:28.655560 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 12 18:46:28.715097 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 12 18:46:28.715220 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 12 18:46:28.718930 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 12 18:46:28.722606 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 12 18:46:28.726637 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 12 18:46:28.728317 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 12 18:46:28.764776 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 12 18:46:28.768374 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 12 18:46:28.796914 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 12 18:46:28.797180 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 12 18:46:28.802858 systemd[1]: Stopped target timers.target - Timer Units.
Dec 12 18:46:28.804925 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 12 18:46:28.805102 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 12 18:46:28.813442 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 12 18:46:28.815361 systemd[1]: Stopped target basic.target - Basic System.
Dec 12 18:46:28.817161 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 12 18:46:28.819968 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 12 18:46:28.823374 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 12 18:46:28.829616 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Dec 12 18:46:28.831704 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 12 18:46:28.835422 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 12 18:46:28.838825 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 12 18:46:28.842856 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 12 18:46:28.846219 systemd[1]: Stopped target swap.target - Swaps.
Dec 12 18:46:28.849433 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 12 18:46:28.849569 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 12 18:46:28.855348 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 12 18:46:28.857121 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 12 18:46:28.860992 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 12 18:46:28.861501 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 12 18:46:28.864624 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 12 18:46:28.864750 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 12 18:46:28.873178 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 12 18:46:28.873303 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 12 18:46:28.874976 systemd[1]: Stopped target paths.target - Path Units.
Dec 12 18:46:28.878262 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 12 18:46:28.882410 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 12 18:46:28.884759 systemd[1]: Stopped target slices.target - Slice Units.
Dec 12 18:46:28.886484 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 12 18:46:28.890197 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 12 18:46:28.890308 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 12 18:46:28.893034 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 12 18:46:28.893157 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 12 18:46:28.896080 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 12 18:46:28.896220 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 12 18:46:28.898913 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 12 18:46:28.899040 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 12 18:46:28.906537 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 12 18:46:28.908620 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 12 18:46:28.908737 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 12 18:46:28.915492 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 12 18:46:28.917490 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 12 18:46:28.917696 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 12 18:46:28.920630 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 12 18:46:28.920826 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 12 18:46:28.938083 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 12 18:46:28.938239 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 12 18:46:28.951944 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 12 18:46:28.968093 ignition[1071]: INFO : Ignition 2.22.0
Dec 12 18:46:28.968093 ignition[1071]: INFO : Stage: umount
Dec 12 18:46:28.970773 ignition[1071]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 12 18:46:28.970773 ignition[1071]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 12 18:46:28.970773 ignition[1071]: INFO : umount: umount passed
Dec 12 18:46:28.970773 ignition[1071]: INFO : Ignition finished successfully
Dec 12 18:46:28.972360 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 12 18:46:28.972491 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 12 18:46:28.976088 systemd[1]: Stopped target network.target - Network.
Dec 12 18:46:28.979902 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 12 18:46:28.979971 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 12 18:46:28.981498 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 12 18:46:28.981554 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 12 18:46:28.984587 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 12 18:46:28.984675 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 12 18:46:28.994028 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 12 18:46:28.994117 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 12 18:46:28.997398 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 12 18:46:28.998996 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 12 18:46:29.004928 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 12 18:46:29.005077 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 12 18:46:29.010560 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Dec 12 18:46:29.010888 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 12 18:46:29.010952 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 12 18:46:29.015663 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec 12 18:46:29.032366 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 12 18:46:29.032503 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 12 18:46:29.038259 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Dec 12 18:46:29.038539 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Dec 12 18:46:29.098237 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 12 18:46:29.098297 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 12 18:46:29.103843 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 12 18:46:29.105428 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 12 18:46:29.105512 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 12 18:46:29.110382 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 12 18:46:29.110434 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 12 18:46:29.117475 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 12 18:46:29.117549 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 12 18:46:29.122527 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 12 18:46:29.128191 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 12 18:46:29.128936 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 12 18:46:29.129128 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 12 18:46:29.134434 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 12 18:46:29.134528 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 12 18:46:29.149721 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 12 18:46:29.149907 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 12 18:46:29.155222 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 12 18:46:29.155450 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 12 18:46:29.157248 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 12 18:46:29.157318 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 12 18:46:29.162014 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 12 18:46:29.162115 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 12 18:46:29.166961 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 12 18:46:29.167073 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 12 18:46:29.169236 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 12 18:46:29.169317 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 12 18:46:29.179718 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 12 18:46:29.179820 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 12 18:46:29.188222 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 12 18:46:29.191841 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Dec 12 18:46:29.191945 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Dec 12 18:46:29.198677 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 12 18:46:29.198751 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 12 18:46:29.205188 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 12 18:46:29.205286 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 18:46:29.222773 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 12 18:46:29.222929 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 12 18:46:29.227435 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 12 18:46:29.233407 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 12 18:46:29.259849 systemd[1]: Switching root.
Dec 12 18:46:29.302893 systemd-journald[203]: Journal stopped
Dec 12 18:46:30.692634 systemd-journald[203]: Received SIGTERM from PID 1 (systemd).
Dec 12 18:46:30.692728 kernel: SELinux: policy capability network_peer_controls=1
Dec 12 18:46:30.692745 kernel: SELinux: policy capability open_perms=1
Dec 12 18:46:30.692760 kernel: SELinux: policy capability extended_socket_class=1
Dec 12 18:46:30.692778 kernel: SELinux: policy capability always_check_network=0
Dec 12 18:46:30.692792 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 12 18:46:30.692812 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 12 18:46:30.692826 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 12 18:46:30.692841 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 12 18:46:30.692855 kernel: SELinux: policy capability userspace_initial_context=0
Dec 12 18:46:30.692869 kernel: audit: type=1403 audit(1765565189.734:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 12 18:46:30.692884 systemd[1]: Successfully loaded SELinux policy in 76.782ms.
Dec 12 18:46:30.692918 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.892ms.
Dec 12 18:46:30.692937 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 12 18:46:30.692952 systemd[1]: Detected virtualization kvm.
Dec 12 18:46:30.692966 systemd[1]: Detected architecture x86-64.
Dec 12 18:46:30.692980 systemd[1]: Detected first boot.
Dec 12 18:46:30.692995 systemd[1]: Initializing machine ID from VM UUID.
Dec 12 18:46:30.693010 zram_generator::config[1117]: No configuration found.
Dec 12 18:46:30.693037 kernel: Guest personality initialized and is inactive
Dec 12 18:46:30.693051 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Dec 12 18:46:30.693068 kernel: Initialized host personality
Dec 12 18:46:30.693082 kernel: NET: Registered PF_VSOCK protocol family
Dec 12 18:46:30.693096 systemd[1]: Populated /etc with preset unit settings.
Dec 12 18:46:30.693112 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Dec 12 18:46:30.693127 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 12 18:46:30.693142 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 12 18:46:30.693158 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 12 18:46:30.693174 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 12 18:46:30.693192 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 12 18:46:30.693217 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 12 18:46:30.693238 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 12 18:46:30.693253 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 12 18:46:30.693269 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 12 18:46:30.693284 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 12 18:46:30.693300 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 12 18:46:30.693315 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 12 18:46:30.693346 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 12 18:46:30.693362 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 12 18:46:30.693380 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 12 18:46:30.693395 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 12 18:46:30.693411 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 12 18:46:30.693431 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Dec 12 18:46:30.693447 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 12 18:46:30.693468 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 12 18:46:30.693483 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 12 18:46:30.693501 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 12 18:46:30.693516 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 12 18:46:30.693531 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 12 18:46:30.693546 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 12 18:46:30.693564 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 12 18:46:30.693579 systemd[1]: Reached target slices.target - Slice Units.
Dec 12 18:46:30.693595 systemd[1]: Reached target swap.target - Swaps.
Dec 12 18:46:30.693610 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 12 18:46:30.693625 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 12 18:46:30.693640 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Dec 12 18:46:30.693658 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 12 18:46:30.693674 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 12 18:46:30.693689 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 12 18:46:30.693705 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 12 18:46:30.693721 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 12 18:46:30.693736 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 12 18:46:30.693752 systemd[1]: Mounting media.mount - External Media Directory...
Dec 12 18:46:30.693768 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:46:30.693783 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 12 18:46:30.693802 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 12 18:46:30.693818 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 12 18:46:30.693834 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 12 18:46:30.693854 systemd[1]: Reached target machines.target - Containers.
Dec 12 18:46:30.693869 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 12 18:46:30.694401 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 12 18:46:30.694424 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 12 18:46:30.694440 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 12 18:46:30.694477 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 12 18:46:30.694493 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 12 18:46:30.694509 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 12 18:46:30.694524 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 12 18:46:30.694540 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 12 18:46:30.694557 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 12 18:46:30.694573 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 12 18:46:30.694589 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 12 18:46:30.694608 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 12 18:46:30.694624 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 12 18:46:30.694641 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 12 18:46:30.694657 kernel: fuse: init (API version 7.41)
Dec 12 18:46:30.694672 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 12 18:46:30.694688 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 12 18:46:30.694704 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 12 18:46:30.694719 kernel: loop: module loaded
Dec 12 18:46:30.694734 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 12 18:46:30.694757 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Dec 12 18:46:30.694773 kernel: ACPI: bus type drm_connector registered
Dec 12 18:46:30.694789 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 12 18:46:30.694849 systemd-journald[1195]: Collecting audit messages is disabled.
Dec 12 18:46:30.694884 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 12 18:46:30.694901 systemd[1]: Stopped verity-setup.service.
Dec 12 18:46:30.694918 systemd-journald[1195]: Journal started
Dec 12 18:46:30.694951 systemd-journald[1195]: Runtime Journal (/run/log/journal/f7742793b6e548978681dc45582b52cc) is 5.9M, max 47.9M, 41.9M free.
Dec 12 18:46:30.298383 systemd[1]: Queued start job for default target multi-user.target.
Dec 12 18:46:30.320221 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Dec 12 18:46:30.320831 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 12 18:46:30.698507 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:46:30.703375 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 12 18:46:30.705642 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 12 18:46:30.707798 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 12 18:46:30.709838 systemd[1]: Mounted media.mount - External Media Directory.
Dec 12 18:46:30.711626 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 12 18:46:30.713621 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 12 18:46:30.715657 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 12 18:46:30.717647 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 12 18:46:30.719961 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 12 18:46:30.722429 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 12 18:46:30.722651 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 12 18:46:30.724947 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 12 18:46:30.725179 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 12 18:46:30.727426 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 12 18:46:30.727650 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 12 18:46:30.729945 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 12 18:46:30.730164 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 12 18:46:30.732621 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 12 18:46:30.732840 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 12 18:46:30.735006 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 12 18:46:30.735236 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 12 18:46:30.737595 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 12 18:46:30.739916 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 12 18:46:30.742437 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 12 18:46:30.744948 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Dec 12 18:46:30.759004 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 12 18:46:30.762423 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 12 18:46:30.765397 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 12 18:46:30.767411 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 12 18:46:30.767449 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 12 18:46:30.771120 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Dec 12 18:46:30.777437 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 12 18:46:30.779355 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 12 18:46:30.782266 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 12 18:46:30.786661 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 12 18:46:30.789910 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 12 18:46:30.791480 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 12 18:46:30.794479 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 12 18:46:30.797648 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 12 18:46:30.797907 systemd-journald[1195]: Time spent on flushing to /var/log/journal/f7742793b6e548978681dc45582b52cc is 38.825ms for 1039 entries.
Dec 12 18:46:30.797907 systemd-journald[1195]: System Journal (/var/log/journal/f7742793b6e548978681dc45582b52cc) is 8M, max 195.6M, 187.6M free.
Dec 12 18:46:30.856709 systemd-journald[1195]: Received client request to flush runtime journal.
Dec 12 18:46:30.856794 kernel: loop0: detected capacity change from 0 to 110984
Dec 12 18:46:30.804242 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 12 18:46:30.808577 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 12 18:46:30.816077 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 12 18:46:30.818550 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 12 18:46:30.821311 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 12 18:46:30.836800 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 12 18:46:30.841268 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 12 18:46:30.846625 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Dec 12 18:46:30.855896 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 12 18:46:30.866088 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 12 18:46:30.882189 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 12 18:46:30.886891 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 12 18:46:30.896354 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 12 18:46:30.899934 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Dec 12 18:46:30.918259 systemd-tmpfiles[1254]: ACLs are not supported, ignoring.
Dec 12 18:46:30.918279 systemd-tmpfiles[1254]: ACLs are not supported, ignoring.
Dec 12 18:46:30.923372 kernel: loop1: detected capacity change from 0 to 128560
Dec 12 18:46:30.924124 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 12 18:46:30.957368 kernel: loop2: detected capacity change from 0 to 224512
Dec 12 18:46:31.083381 kernel: loop3: detected capacity change from 0 to 110984
Dec 12 18:46:31.189362 kernel: loop4: detected capacity change from 0 to 128560
Dec 12 18:46:31.199574 kernel: loop5: detected capacity change from 0 to 224512
Dec 12 18:46:31.215406 (sd-merge)[1261]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Dec 12 18:46:31.216403 (sd-merge)[1261]: Merged extensions into '/usr'.
Dec 12 18:46:31.248877 systemd[1]: Reload requested from client PID 1236 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 12 18:46:31.248896 systemd[1]: Reloading...
Dec 12 18:46:31.420354 zram_generator::config[1287]: No configuration found.
Dec 12 18:46:31.697639 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 12 18:46:31.698076 systemd[1]: Reloading finished in 448 ms.
Dec 12 18:46:31.771358 ldconfig[1231]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 12 18:46:31.775012 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 12 18:46:31.793139 systemd[1]: Starting ensure-sysext.service...
Dec 12 18:46:31.795549 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 12 18:46:31.806201 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 12 18:46:31.841301 systemd-tmpfiles[1324]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Dec 12 18:46:31.841984 systemd-tmpfiles[1324]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Dec 12 18:46:31.842433 systemd-tmpfiles[1324]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
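The (sd-merge) entries above show systemd-sysext activating the three extension images it discovered, which is how the kubernetes.raw symlink written by Ignition in the initrd becomes live content under /usr. Discovery works by scanning /etc/extensions (among other directories) for *.raw images whose embedded extension-release file matches the host OS. A rough sketch of the layout involved; the extension-release fields shown are illustrative assumptions, not values read from these images:

    /etc/extensions/kubernetes.raw -> /opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw
    # inside the image, a matching release file is required for the merge, e.g.:
    # usr/lib/extension-release.d/extension-release.kubernetes
    #     ID=flatcar
    #     SYSEXT_LEVEL=1.0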
Dec 12 18:46:31.842772 systemd-tmpfiles[1324]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 12 18:46:31.844012 systemd-tmpfiles[1324]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 12 18:46:31.844402 systemd-tmpfiles[1324]: ACLs are not supported, ignoring.
Dec 12 18:46:31.844488 systemd-tmpfiles[1324]: ACLs are not supported, ignoring.
Dec 12 18:46:31.844782 systemd[1]: Reload requested from client PID 1323 ('systemctl') (unit ensure-sysext.service)...
Dec 12 18:46:31.844816 systemd[1]: Reloading...
Dec 12 18:46:31.849656 systemd-tmpfiles[1324]: Detected autofs mount point /boot during canonicalization of boot.
Dec 12 18:46:31.849674 systemd-tmpfiles[1324]: Skipping /boot
Dec 12 18:46:31.861295 systemd-tmpfiles[1324]: Detected autofs mount point /boot during canonicalization of boot.
Dec 12 18:46:31.861315 systemd-tmpfiles[1324]: Skipping /boot
Dec 12 18:46:31.903358 zram_generator::config[1352]: No configuration found.
Dec 12 18:46:32.115846 systemd[1]: Reloading finished in 270 ms.
Dec 12 18:46:32.142607 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 12 18:46:32.170451 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 12 18:46:32.181644 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 12 18:46:32.185934 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 12 18:46:32.211134 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 12 18:46:32.216703 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 12 18:46:32.222537 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 12 18:46:32.228753 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 12 18:46:32.235395 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:46:32.235766 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 12 18:46:32.240466 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 12 18:46:32.244934 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 12 18:46:32.251895 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 12 18:46:32.254248 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 12 18:46:32.254474 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 12 18:46:32.257055 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 12 18:46:32.259349 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:46:32.261280 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 12 18:46:32.263141 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 12 18:46:32.266547 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 12 18:46:32.266824 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 12 18:46:32.269905 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 12 18:46:32.270153 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 12 18:46:32.281934 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:46:32.282249 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 12 18:46:32.286608 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 12 18:46:32.290895 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 12 18:46:32.329633 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 12 18:46:32.332021 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 12 18:46:32.332179 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 12 18:46:32.332311 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:46:32.334293 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 12 18:46:32.335023 systemd-udevd[1398]: Using default interface naming scheme 'v255'.
Dec 12 18:46:32.340151 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 12 18:46:32.344104 augenrules[1426]: No rules
Dec 12 18:46:32.344502 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 12 18:46:32.352428 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 12 18:46:32.355767 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 12 18:46:32.356108 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 12 18:46:32.359091 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 12 18:46:32.359471 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 12 18:46:32.362578 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 12 18:46:32.365730 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 12 18:46:32.365951 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 12 18:46:32.379393 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 12 18:46:32.382068 systemd[1]: Finished ensure-sysext.service.
Dec 12 18:46:32.383514 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 12 18:46:32.388492 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:46:32.390679 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 12 18:46:32.392407 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 12 18:46:32.395003 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 12 18:46:32.400500 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 12 18:46:32.410024 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 12 18:46:32.463078 augenrules[1448]: /sbin/augenrules: No change
Dec 12 18:46:32.463592 augenrules[1482]: No rules
Dec 12 18:46:32.467023 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 12 18:46:32.469638 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 12 18:46:32.469707 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 12 18:46:32.472569 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 12 18:46:32.484190 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Dec 12 18:46:32.487588 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 12 18:46:32.489611 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 12 18:46:32.489642 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:46:32.492026 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 12 18:46:32.492404 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 12 18:46:32.494545 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 12 18:46:32.494779 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 12 18:46:32.496965 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 12 18:46:32.497668 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 12 18:46:32.499938 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 12 18:46:32.500173 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 12 18:46:32.502526 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 12 18:46:32.502781 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 12 18:46:32.518608 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 12 18:46:32.529876 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 12 18:46:32.529956 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 12 18:46:32.617768 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 12 18:46:32.622105 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 12 18:46:32.628990 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Dec 12 18:46:32.655889 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 12 18:46:32.656397 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Dec 12 18:46:32.660362 kernel: mousedev: PS/2 mouse device common for all mice
Dec 12 18:46:32.669354 kernel: ACPI: button: Power Button [PWRF]
Dec 12 18:46:32.697742 systemd-resolved[1394]: Positive Trust Anchors:
Dec 12 18:46:32.697763 systemd-resolved[1394]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 12 18:46:32.697802 systemd-resolved[1394]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 12 18:46:32.703706 systemd-resolved[1394]: Defaulting to hostname 'linux'.
Dec 12 18:46:32.707846 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Dec 12 18:46:32.708243 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Dec 12 18:46:32.710371 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec 12 18:46:32.712310 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 12 18:46:32.714751 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 12 18:46:32.792183 systemd-networkd[1493]: lo: Link UP
Dec 12 18:46:32.792195 systemd-networkd[1493]: lo: Gained carrier
Dec 12 18:46:32.794222 systemd-networkd[1493]: Enumeration completed
Dec 12 18:46:32.794358 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 12 18:46:32.794965 systemd-networkd[1493]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 12 18:46:32.794988 systemd-networkd[1493]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 12 18:46:32.795696 systemd-networkd[1493]: eth0: Link UP
Dec 12 18:46:32.795933 systemd-networkd[1493]: eth0: Gained carrier
Dec 12 18:46:32.795959 systemd-networkd[1493]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 12 18:46:32.796727 systemd[1]: Reached target network.target - Network.
Dec 12 18:46:32.800296 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Dec 12 18:46:32.806649 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 12 18:46:32.810385 systemd-networkd[1493]: eth0: DHCPv4 address 10.0.0.117/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 12 18:46:32.831130 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Dec 12 18:46:32.838609 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 12 18:46:32.840579 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 12 18:46:32.842798 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 12 18:46:32.844985 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
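Both here and in the initrd, eth0 is matched by /usr/lib/systemd/network/zz-default.network, the catch-all unit that explains the DHCPv4 lease on 10.0.0.117/16 above. A plausible sketch of such a unit follows; the exact contents of Flatcar's shipped file are not in this log, so treat this as an assumption:

    [Match]
    Name=*

    [Network]
    DHCP=yes

The zz- prefix sorts the file last, so any more specific .network unit dropped in by the administrator takes precedence.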
Dec 12 18:46:32.846945 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 12 18:46:32.849107 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 12 18:46:32.849132 systemd[1]: Reached target paths.target - Path Units.
Dec 12 18:46:32.850680 systemd[1]: Reached target time-set.target - System Time Set.
Dec 12 18:46:32.852755 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 12 18:46:33.662476 systemd-resolved[1394]: Clock change detected. Flushing caches.
Dec 12 18:46:33.662673 systemd-timesyncd[1495]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Dec 12 18:46:33.662725 systemd-timesyncd[1495]: Initial clock synchronization to Fri 2025-12-12 18:46:33.662412 UTC.
Dec 12 18:46:33.663421 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 12 18:46:33.666038 systemd[1]: Reached target timers.target - Timer Units.
Dec 12 18:46:33.671085 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 12 18:46:33.730322 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 12 18:46:33.736830 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Dec 12 18:46:33.739292 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Dec 12 18:46:33.741788 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Dec 12 18:46:33.753554 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 12 18:46:33.756521 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Dec 12 18:46:33.759951 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 12 18:46:33.762600 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Dec 12 18:46:33.768518 systemd[1]: Reached target sockets.target - Socket Units.
Dec 12 18:46:33.770636 systemd[1]: Reached target basic.target - Basic System.
Dec 12 18:46:33.772566 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 12 18:46:33.772628 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 12 18:46:33.774790 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 12 18:46:33.778048 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 12 18:46:33.784036 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 12 18:46:33.789680 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 12 18:46:33.794145 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 12 18:46:33.796259 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 12 18:46:33.806085 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Dec 12 18:46:33.809829 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 12 18:46:33.811975 jq[1546]: false
Dec 12 18:46:33.815734 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Dec 12 18:46:33.821920 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
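
The journal timestamps step from 18:46:32.852755 to 18:46:33.662476 at exactly the point where systemd-timesyncd applies its first synchronization and systemd-resolved flushes its caches, an apparent ~0.81 s jump. A minimal Python sketch (not part of the log) for locating that step mechanically, assuming the "Mon DD HH:MM:SS.ffffff" journal prefix used throughout this log and an assumed year, since the prefix omits it:

    from datetime import datetime

    def parse_ts(line, year=2025):
        # First three whitespace-separated fields form the journal prefix,
        # e.g. "Dec 12 18:46:33.662476".
        stamp = " ".join(line.split()[:3])
        return datetime.strptime(f"{year} {stamp}", "%Y %b %d %H:%M:%S.%f")

    def clock_step(lines):
        # Gap between the "Clock change detected" entry and its predecessor;
        # on this boot it is ~0.81 s (32.852755 -> 33.662476).
        ts = [parse_ts(l) for l in lines]
        for i, line in enumerate(lines[1:], start=1):
            if "Clock change detected" in line:
                return (ts[i] - ts[i - 1]).total_seconds()
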
Dec 12 18:46:33.829740 extend-filesystems[1547]: Found /dev/vda6
Dec 12 18:46:33.839682 extend-filesystems[1547]: Found /dev/vda9
Dec 12 18:46:33.837087 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 12 18:46:33.842937 extend-filesystems[1547]: Checking size of /dev/vda9
Dec 12 18:46:33.848437 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 12 18:46:33.851829 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 12 18:46:33.854081 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 12 18:46:33.861017 systemd[1]: Starting update-engine.service - Update Engine...
Dec 12 18:46:33.865098 extend-filesystems[1547]: Resized partition /dev/vda9
Dec 12 18:46:33.867225 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 12 18:46:33.868334 extend-filesystems[1572]: resize2fs 1.47.3 (8-Jul-2025)
Dec 12 18:46:33.872886 google_oslogin_nss_cache[1548]: oslogin_cache_refresh[1548]: Refreshing passwd entry cache
Dec 12 18:46:33.869000 oslogin_cache_refresh[1548]: Refreshing passwd entry cache
Dec 12 18:46:33.876808 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Dec 12 18:46:33.877413 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 12 18:46:33.880733 google_oslogin_nss_cache[1548]: oslogin_cache_refresh[1548]: Failure getting users, quitting
Dec 12 18:46:33.880733 google_oslogin_nss_cache[1548]: oslogin_cache_refresh[1548]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Dec 12 18:46:33.880733 google_oslogin_nss_cache[1548]: oslogin_cache_refresh[1548]: Refreshing group entry cache
Dec 12 18:46:33.879998 oslogin_cache_refresh[1548]: Failure getting users, quitting
Dec 12 18:46:33.880765 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 12 18:46:33.880028 oslogin_cache_refresh[1548]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Dec 12 18:46:33.880099 oslogin_cache_refresh[1548]: Refreshing group entry cache
Dec 12 18:46:33.881125 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 12 18:46:33.881579 systemd[1]: motdgen.service: Deactivated successfully.
Dec 12 18:46:33.882057 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 12 18:46:33.889334 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 12 18:46:33.893461 google_oslogin_nss_cache[1548]: oslogin_cache_refresh[1548]: Failure getting groups, quitting
Dec 12 18:46:33.893461 google_oslogin_nss_cache[1548]: oslogin_cache_refresh[1548]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Dec 12 18:46:33.891916 oslogin_cache_refresh[1548]: Failure getting groups, quitting
Dec 12 18:46:33.891932 oslogin_cache_refresh[1548]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Dec 12 18:46:33.916647 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Dec 12 18:46:33.929766 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 12 18:46:33.942317 jq[1571]: true
Dec 12 18:46:33.933011 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Dec 12 18:46:33.942855 extend-filesystems[1572]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Dec 12 18:46:33.942855 extend-filesystems[1572]: old_desc_blocks = 1, new_desc_blocks = 1
Dec 12 18:46:33.942855 extend-filesystems[1572]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Dec 12 18:46:33.933296 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Dec 12 18:46:33.953731 extend-filesystems[1547]: Resized filesystem in /dev/vda9
Dec 12 18:46:33.949200 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 12 18:46:33.949698 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Dec 12 18:46:33.977629 jq[1578]: true
Dec 12 18:46:33.995345 update_engine[1566]: I20251212 18:46:33.994843 1566 main.cc:92] Flatcar Update Engine starting
Dec 12 18:46:33.995507 (ntainerd)[1579]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Dec 12 18:46:34.002283 tar[1575]: linux-amd64/LICENSE
Dec 12 18:46:34.002283 tar[1575]: linux-amd64/helm
Dec 12 18:46:34.081403 kernel: kvm_amd: TSC scaling supported
Dec 12 18:46:34.081485 kernel: kvm_amd: Nested Virtualization enabled
Dec 12 18:46:34.081499 kernel: kvm_amd: Nested Paging enabled
Dec 12 18:46:34.081512 kernel: kvm_amd: LBR virtualization supported
Dec 12 18:46:34.083218 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Dec 12 18:46:34.083287 kernel: kvm_amd: Virtual GIF supported
Dec 12 18:46:34.115984 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 18:46:34.126634 systemd-logind[1562]: Watching system buttons on /dev/input/event2 (Power Button)
Dec 12 18:46:34.128636 systemd-logind[1562]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 12 18:46:34.128969 systemd-logind[1562]: New seat seat0.
Dec 12 18:46:34.135432 systemd[1]: Started systemd-logind.service - User Login Management.
Dec 12 18:46:34.194688 dbus-daemon[1544]: [system] SELinux support is enabled
Dec 12 18:46:34.194928 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 12 18:46:34.201689 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 12 18:46:34.201734 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 12 18:46:34.204631 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 12 18:46:34.204665 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 12 18:46:34.213788 dbus-daemon[1544]: [system] Successfully activated service 'org.freedesktop.systemd1'
Dec 12 18:46:34.215501 update_engine[1566]: I20251212 18:46:34.214938 1566 update_check_scheduler.cc:74] Next update check in 2m10s
Dec 12 18:46:34.215353 systemd[1]: Started update-engine.service - Update Engine.
Dec 12 18:46:34.217680 bash[1609]: Updated "/home/core/.ssh/authorized_keys"
Dec 12 18:46:34.218915 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Dec 12 18:46:34.219483 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Dec 12 18:46:34.222883 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
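
The resize2fs figures above pin down the actual growth: with 4 KiB blocks, /dev/vda9 went from 553472 blocks (about 2.11 GiB) to 1864699 blocks (about 7.11 GiB). A one-line Python check of that arithmetic (block counts taken from the EXT4-fs entries above):

    GiB = 1024 ** 3
    old_blocks, new_blocks, block = 553_472, 1_864_699, 4096
    print(f"{old_blocks * block / GiB:.2f} GiB -> {new_blocks * block / GiB:.2f} GiB")
    # 2.11 GiB -> 7.11 GiB
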
Dec 12 18:46:34.239604 kernel: EDAC MC: Ver: 3.0.0
Dec 12 18:46:34.278016 locksmithd[1616]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 12 18:46:34.313125 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 18:46:34.447017 containerd[1579]: time="2025-12-12T18:46:34Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Dec 12 18:46:34.447937 containerd[1579]: time="2025-12-12T18:46:34.447906263Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Dec 12 18:46:34.527368 containerd[1579]: time="2025-12-12T18:46:34.527307520Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="16.191µs"
Dec 12 18:46:34.527508 containerd[1579]: time="2025-12-12T18:46:34.527489652Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Dec 12 18:46:34.527565 containerd[1579]: time="2025-12-12T18:46:34.527553331Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Dec 12 18:46:34.527820 containerd[1579]: time="2025-12-12T18:46:34.527804181Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Dec 12 18:46:34.527881 containerd[1579]: time="2025-12-12T18:46:34.527869474Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Dec 12 18:46:34.527974 containerd[1579]: time="2025-12-12T18:46:34.527960444Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Dec 12 18:46:34.528084 containerd[1579]: time="2025-12-12T18:46:34.528068527Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Dec 12 18:46:34.528192 containerd[1579]: time="2025-12-12T18:46:34.528148016Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Dec 12 18:46:34.528561 containerd[1579]: time="2025-12-12T18:46:34.528541684Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Dec 12 18:46:34.528629 containerd[1579]: time="2025-12-12T18:46:34.528616825Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Dec 12 18:46:34.528677 containerd[1579]: time="2025-12-12T18:46:34.528665156Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Dec 12 18:46:34.528737 containerd[1579]: time="2025-12-12T18:46:34.528725028Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Dec 12 18:46:34.528870 containerd[1579]: time="2025-12-12T18:46:34.528856224Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Dec 12 18:46:34.529161 containerd[1579]: time="2025-12-12T18:46:34.529144094Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Dec 12 18:46:34.529258 containerd[1579]: time="2025-12-12T18:46:34.529244081Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Dec 12 18:46:34.529311 containerd[1579]: time="2025-12-12T18:46:34.529299525Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Dec 12 18:46:34.529389 containerd[1579]: time="2025-12-12T18:46:34.529377391Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Dec 12 18:46:34.529704 containerd[1579]: time="2025-12-12T18:46:34.529687372Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Dec 12 18:46:34.529820 containerd[1579]: time="2025-12-12T18:46:34.529800785Z" level=info msg="metadata content store policy set" policy=shared
Dec 12 18:46:34.781276 tar[1575]: linux-amd64/README.md
Dec 12 18:46:34.812076 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Dec 12 18:46:34.849226 sshd_keygen[1574]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 12 18:46:34.849751 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Dec 12 18:46:34.854389 systemd[1]: Starting issuegen.service - Generate /run/issue...
Dec 12 18:46:34.884728 systemd[1]: issuegen.service: Deactivated successfully.
Dec 12 18:46:34.885053 systemd[1]: Finished issuegen.service - Generate /run/issue.
Dec 12 18:46:34.888879 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Dec 12 18:46:34.936265 containerd[1579]: time="2025-12-12T18:46:34.936169246Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Dec 12 18:46:34.936397 containerd[1579]: time="2025-12-12T18:46:34.936287237Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Dec 12 18:46:34.936397 containerd[1579]: time="2025-12-12T18:46:34.936306082Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Dec 12 18:46:34.936397 containerd[1579]: time="2025-12-12T18:46:34.936321531Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Dec 12 18:46:34.936397 containerd[1579]: time="2025-12-12T18:46:34.936337201Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Dec 12 18:46:34.936397 containerd[1579]: time="2025-12-12T18:46:34.936348903Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Dec 12 18:46:34.936397 containerd[1579]: time="2025-12-12T18:46:34.936360374Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Dec 12 18:46:34.936397 containerd[1579]: time="2025-12-12T18:46:34.936374260Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Dec 12 18:46:34.936397 containerd[1579]: time="2025-12-12T18:46:34.936388016Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Dec 12 18:46:34.936631 containerd[1579]: time="2025-12-12T18:46:34.936414796Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Dec 12 18:46:34.936631 containerd[1579]: time="2025-12-12T18:46:34.936425607Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Dec 12 18:46:34.936631 containerd[1579]: time="2025-12-12T18:46:34.936440875Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Dec 12 18:46:34.936713 containerd[1579]: time="2025-12-12T18:46:34.936645619Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Dec 12 18:46:34.936713 containerd[1579]: time="2025-12-12T18:46:34.936673371Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Dec 12 18:46:34.936713 containerd[1579]: time="2025-12-12T18:46:34.936694431Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Dec 12 18:46:34.936713 containerd[1579]: time="2025-12-12T18:46:34.936704720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Dec 12 18:46:34.936805 containerd[1579]: time="2025-12-12T18:46:34.936714899Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Dec 12 18:46:34.936805 containerd[1579]: time="2025-12-12T18:46:34.936724397Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Dec 12 18:46:34.936805 containerd[1579]: time="2025-12-12T18:46:34.936753060Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Dec 12 18:46:34.936805 containerd[1579]: time="2025-12-12T18:46:34.936765123Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Dec 12 18:46:34.936805 containerd[1579]: time="2025-12-12T18:46:34.936777637Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Dec 12 18:46:34.936805 containerd[1579]: time="2025-12-12T18:46:34.936789088Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Dec 12 18:46:34.936805 containerd[1579]: time="2025-12-12T18:46:34.936799828Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Dec 12 18:46:34.936987 containerd[1579]: time="2025-12-12T18:46:34.936877895Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Dec 12 18:46:34.936987 containerd[1579]: time="2025-12-12T18:46:34.936899926Z" level=info msg="Start snapshots syncer"
Dec 12 18:46:34.936987 containerd[1579]: time="2025-12-12T18:46:34.936928099Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Dec 12 18:46:34.937330 containerd[1579]: time="2025-12-12T18:46:34.937273436Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Dec 12 18:46:34.937451 containerd[1579]: time="2025-12-12T18:46:34.937331415Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Dec 12 18:46:34.938785 containerd[1579]: time="2025-12-12T18:46:34.938764863Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Dec 12 18:46:34.938939 containerd[1579]: time="2025-12-12T18:46:34.938917188Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Dec 12 18:46:34.938971 containerd[1579]: time="2025-12-12T18:46:34.938946663Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Dec 12 18:46:34.938971 containerd[1579]: time="2025-12-12T18:46:34.938958225Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Dec 12 18:46:34.938971 containerd[1579]: time="2025-12-12T18:46:34.938967713Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Dec 12 18:46:34.939054 containerd[1579]: time="2025-12-12T18:46:34.938991748Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Dec 12 18:46:34.939054 containerd[1579]: time="2025-12-12T18:46:34.939005123Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Dec 12 18:46:34.939054 containerd[1579]: time="2025-12-12T18:46:34.939022245Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Dec 12 18:46:34.939054 containerd[1579]: time="2025-12-12T18:46:34.939045529Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Dec 12 18:46:34.939162 containerd[1579]: time="2025-12-12T18:46:34.939054796Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Dec 12 18:46:34.939162 containerd[1579]: time="2025-12-12T18:46:34.939066348Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Dec 12 18:46:34.939162 containerd[1579]: time="2025-12-12T18:46:34.939106663Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 12 18:46:34.939162 containerd[1579]: time="2025-12-12T18:46:34.939128044Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 12 18:46:34.939162 containerd[1579]: time="2025-12-12T18:46:34.939136970Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 12 18:46:34.939162 containerd[1579]: time="2025-12-12T18:46:34.939145366Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 12 18:46:34.939162 containerd[1579]: time="2025-12-12T18:46:34.939152690Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Dec 12 18:46:34.939347 containerd[1579]: time="2025-12-12T18:46:34.939169451Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Dec 12 18:46:34.939347 containerd[1579]: time="2025-12-12T18:46:34.939214335Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Dec 12 18:46:34.939347 containerd[1579]: time="2025-12-12T18:46:34.939240905Z" level=info msg="runtime interface created"
Dec 12 18:46:34.939347 containerd[1579]: time="2025-12-12T18:46:34.939246165Z" level=info msg="created NRI interface"
Dec 12 18:46:34.939347 containerd[1579]: time="2025-12-12T18:46:34.939253739Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Dec 12 18:46:34.939347 containerd[1579]: time="2025-12-12T18:46:34.939269900Z" level=info msg="Connect containerd service"
Dec 12 18:46:34.939347 containerd[1579]: time="2025-12-12T18:46:34.939289005Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Dec 12 18:46:34.941255 containerd[1579]: time="2025-12-12T18:46:34.941226258Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 12 18:46:34.949874 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Dec 12 18:46:34.954040 systemd[1]: Started getty@tty1.service - Getty on tty1.
Dec 12 18:46:34.957013 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Dec 12 18:46:34.959263 systemd[1]: Reached target getty.target - Login Prompts.
Dec 12 18:46:35.098489 containerd[1579]: time="2025-12-12T18:46:35.098344234Z" level=info msg="Start subscribing containerd event"
Dec 12 18:46:35.098627 containerd[1579]: time="2025-12-12T18:46:35.098460081Z" level=info msg="Start recovering state"
Dec 12 18:46:35.098655 containerd[1579]: time="2025-12-12T18:46:35.098609732Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 12 18:46:35.098698 containerd[1579]: time="2025-12-12T18:46:35.098680715Z" level=info msg="Start event monitor"
Dec 12 18:46:35.098728 containerd[1579]: time="2025-12-12T18:46:35.098702616Z" level=info msg="Start cni network conf syncer for default"
Dec 12 18:46:35.098728 containerd[1579]: time="2025-12-12T18:46:35.098712885Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 12 18:46:35.098774 containerd[1579]: time="2025-12-12T18:46:35.098716883Z" level=info msg="Start streaming server"
Dec 12 18:46:35.098774 containerd[1579]: time="2025-12-12T18:46:35.098749183Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Dec 12 18:46:35.098774 containerd[1579]: time="2025-12-12T18:46:35.098763039Z" level=info msg="runtime interface starting up..."
Dec 12 18:46:35.098774 containerd[1579]: time="2025-12-12T18:46:35.098773469Z" level=info msg="starting plugins..."
Dec 12 18:46:35.098890 containerd[1579]: time="2025-12-12T18:46:35.098799167Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Dec 12 18:46:35.099023 containerd[1579]: time="2025-12-12T18:46:35.099004923Z" level=info msg="containerd successfully booted in 0.652685s"
Dec 12 18:46:35.099156 systemd[1]: Started containerd.service - containerd container runtime.
Dec 12 18:46:35.446869 systemd-networkd[1493]: eth0: Gained IPv6LL
Dec 12 18:46:35.450553 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Dec 12 18:46:35.453548 systemd[1]: Reached target network-online.target - Network is Online.
Dec 12 18:46:35.457571 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Dec 12 18:46:35.461257 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 12 18:46:35.480355 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Dec 12 18:46:35.556119 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Dec 12 18:46:35.558951 systemd[1]: coreos-metadata.service: Deactivated successfully.
Dec 12 18:46:35.559222 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Dec 12 18:46:35.563302 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Dec 12 18:46:37.379492 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Dec 12 18:46:37.386344 systemd[1]: Started sshd@0-10.0.0.117:22-10.0.0.1:49726.service - OpenSSH per-connection server daemon (10.0.0.1:49726).
Dec 12 18:46:37.595607 sshd[1681]: Accepted publickey for core from 10.0.0.1 port 49726 ssh2: RSA SHA256:Jfli01egnEuwhQmCMAJ3v0pvzQgHZ6Pm5V0wlrgQH5U
Dec 12 18:46:37.603094 sshd-session[1681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:46:37.622564 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Dec 12 18:46:37.629983 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Dec 12 18:46:37.656762 systemd-logind[1562]: New session 1 of user core.
Dec 12 18:46:37.676985 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Dec 12 18:46:37.695667 systemd[1]: Starting user@500.service - User Manager for UID 500...
Dec 12 18:46:37.724240 (systemd)[1686]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 12 18:46:37.734040 systemd-logind[1562]: New session c1 of user core.
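
The boot figure containerd reports is consistent with the journal: "starting containerd" lands at 18:46:34.447 and the success message at 18:46:35.099, about 0.652 s apart. The config="..." payload printed while starting the CRI plugin is ordinary JSON under Go string quoting, so it can be extracted and inspected. A minimal Python sketch, assuming only that quoting convention (cri_config is a hypothetical helper, not a containerd API):

    import json, re

    def cri_config(log_entry):
        # Grab the Go-quoted config="{...}" payload and undo the \" escaping.
        m = re.search(r'config="(\{.*\})"', log_entry)
        return json.loads(m.group(1).replace('\\"', '"'))

    # cri_config(entry)["containerd"]["runtimes"]["runc"]["options"]["SystemdCgroup"]
    # returns True, i.e. runc drives cgroups through systemd on this host.
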
Dec 12 18:46:38.038872 systemd[1686]: Queued start job for default target default.target.
Dec 12 18:46:38.065213 systemd[1686]: Created slice app.slice - User Application Slice.
Dec 12 18:46:38.065255 systemd[1686]: Reached target paths.target - Paths.
Dec 12 18:46:38.065319 systemd[1686]: Reached target timers.target - Timers.
Dec 12 18:46:38.071393 systemd[1686]: Starting dbus.socket - D-Bus User Message Bus Socket...
Dec 12 18:46:38.106949 systemd[1686]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Dec 12 18:46:38.107140 systemd[1686]: Reached target sockets.target - Sockets.
Dec 12 18:46:38.107206 systemd[1686]: Reached target basic.target - Basic System.
Dec 12 18:46:38.107266 systemd[1686]: Reached target default.target - Main User Target.
Dec 12 18:46:38.107312 systemd[1686]: Startup finished in 351ms.
Dec 12 18:46:38.107873 systemd[1]: Started user@500.service - User Manager for UID 500.
Dec 12 18:46:38.180965 systemd[1]: Started session-1.scope - Session 1 of User core.
Dec 12 18:46:38.271896 systemd[1]: Started sshd@1-10.0.0.117:22-10.0.0.1:49734.service - OpenSSH per-connection server daemon (10.0.0.1:49734).
Dec 12 18:46:38.422149 sshd[1697]: Accepted publickey for core from 10.0.0.1 port 49734 ssh2: RSA SHA256:Jfli01egnEuwhQmCMAJ3v0pvzQgHZ6Pm5V0wlrgQH5U
Dec 12 18:46:38.424940 sshd-session[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:46:38.447481 systemd-logind[1562]: New session 2 of user core.
Dec 12 18:46:38.461914 systemd[1]: Started session-2.scope - Session 2 of User core.
Dec 12 18:46:38.575516 sshd[1700]: Connection closed by 10.0.0.1 port 49734
Dec 12 18:46:38.578325 sshd-session[1697]: pam_unix(sshd:session): session closed for user core
Dec 12 18:46:38.598213 systemd[1]: sshd@1-10.0.0.117:22-10.0.0.1:49734.service: Deactivated successfully.
Dec 12 18:46:38.604348 systemd[1]: session-2.scope: Deactivated successfully.
Dec 12 18:46:38.607888 systemd-logind[1562]: Session 2 logged out. Waiting for processes to exit.
Dec 12 18:46:38.611286 systemd[1]: Started sshd@2-10.0.0.117:22-10.0.0.1:49744.service - OpenSSH per-connection server daemon (10.0.0.1:49744).
Dec 12 18:46:38.620189 systemd-logind[1562]: Removed session 2.
Dec 12 18:46:38.704740 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 18:46:38.709076 sshd[1706]: Accepted publickey for core from 10.0.0.1 port 49744 ssh2: RSA SHA256:Jfli01egnEuwhQmCMAJ3v0pvzQgHZ6Pm5V0wlrgQH5U
Dec 12 18:46:38.709818 systemd[1]: Reached target multi-user.target - Multi-User System.
Dec 12 18:46:38.710104 sshd-session[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:46:38.712427 systemd[1]: Startup finished in 4.542s (kernel) + 6.936s (initrd) + 8.243s (userspace) = 19.722s.
Dec 12 18:46:38.720349 systemd-logind[1562]: New session 3 of user core.
Dec 12 18:46:38.723080 (kubelet)[1714]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 12 18:46:38.724406 systemd[1]: Started session-3.scope - Session 3 of User core.
Dec 12 18:46:38.801890 sshd[1716]: Connection closed by 10.0.0.1 port 49744
Dec 12 18:46:38.802540 sshd-session[1706]: pam_unix(sshd:session): session closed for user core
Dec 12 18:46:38.813694 systemd[1]: sshd@2-10.0.0.117:22-10.0.0.1:49744.service: Deactivated successfully.
Dec 12 18:46:38.817441 systemd[1]: session-3.scope: Deactivated successfully.
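
The "Startup finished" summary above is self-consistent once rounding is accounted for: the three phase durations are printed rounded to the millisecond, and their sum undershoots the printed total by 1 ms because systemd adds the exact microsecond counters before rounding. A trivial check:

    # The rounded phase times sum to 19.721 s; systemd prints 19.722 s
    # because it rounds only after summing the raw microsecond values.
    print(4.542 + 6.936 + 8.243)  # 19.721
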
Dec 12 18:46:38.819711 systemd-logind[1562]: Session 3 logged out. Waiting for processes to exit.
Dec 12 18:46:38.821704 systemd-logind[1562]: Removed session 3.
Dec 12 18:46:39.921459 kubelet[1714]: E1212 18:46:39.921267 1714 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 12 18:46:39.939751 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 12 18:46:39.940731 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 12 18:46:39.942174 systemd[1]: kubelet.service: Consumed 2.897s CPU time, 264.2M memory peak.
Dec 12 18:46:48.818843 systemd[1]: Started sshd@3-10.0.0.117:22-10.0.0.1:52974.service - OpenSSH per-connection server daemon (10.0.0.1:52974).
Dec 12 18:46:48.885534 sshd[1732]: Accepted publickey for core from 10.0.0.1 port 52974 ssh2: RSA SHA256:Jfli01egnEuwhQmCMAJ3v0pvzQgHZ6Pm5V0wlrgQH5U
Dec 12 18:46:48.887319 sshd-session[1732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:46:48.892067 systemd-logind[1562]: New session 4 of user core.
Dec 12 18:46:48.901797 systemd[1]: Started session-4.scope - Session 4 of User core.
Dec 12 18:46:48.956850 sshd[1735]: Connection closed by 10.0.0.1 port 52974
Dec 12 18:46:48.957263 sshd-session[1732]: pam_unix(sshd:session): session closed for user core
Dec 12 18:46:48.967227 systemd[1]: sshd@3-10.0.0.117:22-10.0.0.1:52974.service: Deactivated successfully.
Dec 12 18:46:48.969217 systemd[1]: session-4.scope: Deactivated successfully.
Dec 12 18:46:48.969970 systemd-logind[1562]: Session 4 logged out. Waiting for processes to exit.
Dec 12 18:46:48.972700 systemd[1]: Started sshd@4-10.0.0.117:22-10.0.0.1:52988.service - OpenSSH per-connection server daemon (10.0.0.1:52988).
Dec 12 18:46:48.973300 systemd-logind[1562]: Removed session 4.
Dec 12 18:46:49.035940 sshd[1741]: Accepted publickey for core from 10.0.0.1 port 52988 ssh2: RSA SHA256:Jfli01egnEuwhQmCMAJ3v0pvzQgHZ6Pm5V0wlrgQH5U
Dec 12 18:46:49.037448 sshd-session[1741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:46:49.042231 systemd-logind[1562]: New session 5 of user core.
Dec 12 18:46:49.051819 systemd[1]: Started session-5.scope - Session 5 of User core.
Dec 12 18:46:49.102482 sshd[1744]: Connection closed by 10.0.0.1 port 52988
Dec 12 18:46:49.102702 sshd-session[1741]: pam_unix(sshd:session): session closed for user core
Dec 12 18:46:49.115615 systemd[1]: sshd@4-10.0.0.117:22-10.0.0.1:52988.service: Deactivated successfully.
Dec 12 18:46:49.117359 systemd[1]: session-5.scope: Deactivated successfully.
Dec 12 18:46:49.118074 systemd-logind[1562]: Session 5 logged out. Waiting for processes to exit.
Dec 12 18:46:49.120516 systemd[1]: Started sshd@5-10.0.0.117:22-10.0.0.1:53002.service - OpenSSH per-connection server daemon (10.0.0.1:53002).
Dec 12 18:46:49.121125 systemd-logind[1562]: Removed session 5.
Dec 12 18:46:49.178459 sshd[1750]: Accepted publickey for core from 10.0.0.1 port 53002 ssh2: RSA SHA256:Jfli01egnEuwhQmCMAJ3v0pvzQgHZ6Pm5V0wlrgQH5U
Dec 12 18:46:49.180308 sshd-session[1750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:46:49.185260 systemd-logind[1562]: New session 6 of user core.
Dec 12 18:46:49.195734 systemd[1]: Started session-6.scope - Session 6 of User core.
Dec 12 18:46:49.249741 sshd[1753]: Connection closed by 10.0.0.1 port 53002
Dec 12 18:46:49.250124 sshd-session[1750]: pam_unix(sshd:session): session closed for user core
Dec 12 18:46:49.264649 systemd[1]: sshd@5-10.0.0.117:22-10.0.0.1:53002.service: Deactivated successfully.
Dec 12 18:46:49.266836 systemd[1]: session-6.scope: Deactivated successfully.
Dec 12 18:46:49.267673 systemd-logind[1562]: Session 6 logged out. Waiting for processes to exit.
Dec 12 18:46:49.270629 systemd[1]: Started sshd@6-10.0.0.117:22-10.0.0.1:53004.service - OpenSSH per-connection server daemon (10.0.0.1:53004).
Dec 12 18:46:49.271345 systemd-logind[1562]: Removed session 6.
Dec 12 18:46:49.330475 sshd[1759]: Accepted publickey for core from 10.0.0.1 port 53004 ssh2: RSA SHA256:Jfli01egnEuwhQmCMAJ3v0pvzQgHZ6Pm5V0wlrgQH5U
Dec 12 18:46:49.331993 sshd-session[1759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:46:49.336199 systemd-logind[1562]: New session 7 of user core.
Dec 12 18:46:49.345729 systemd[1]: Started session-7.scope - Session 7 of User core.
Dec 12 18:46:49.405895 sudo[1764]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Dec 12 18:46:49.406284 sudo[1764]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 12 18:46:49.424616 sudo[1764]: pam_unix(sudo:session): session closed for user root
Dec 12 18:46:49.426501 sshd[1763]: Connection closed by 10.0.0.1 port 53004
Dec 12 18:46:49.427118 sshd-session[1759]: pam_unix(sshd:session): session closed for user core
Dec 12 18:46:49.446116 systemd[1]: sshd@6-10.0.0.117:22-10.0.0.1:53004.service: Deactivated successfully.
Dec 12 18:46:49.448101 systemd[1]: session-7.scope: Deactivated successfully.
Dec 12 18:46:49.448995 systemd-logind[1562]: Session 7 logged out. Waiting for processes to exit.
Dec 12 18:46:49.451679 systemd[1]: Started sshd@7-10.0.0.117:22-10.0.0.1:53014.service - OpenSSH per-connection server daemon (10.0.0.1:53014).
Dec 12 18:46:49.452438 systemd-logind[1562]: Removed session 7.
Dec 12 18:46:49.522551 sshd[1770]: Accepted publickey for core from 10.0.0.1 port 53014 ssh2: RSA SHA256:Jfli01egnEuwhQmCMAJ3v0pvzQgHZ6Pm5V0wlrgQH5U
Dec 12 18:46:49.524508 sshd-session[1770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:46:49.531620 systemd-logind[1562]: New session 8 of user core.
Dec 12 18:46:49.541899 systemd[1]: Started session-8.scope - Session 8 of User core.
Dec 12 18:46:49.599612 sudo[1775]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Dec 12 18:46:49.600022 sudo[1775]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 12 18:46:49.829281 sudo[1775]: pam_unix(sudo:session): session closed for user root
Dec 12 18:46:49.835828 sudo[1774]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Dec 12 18:46:49.836148 sudo[1774]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 12 18:46:49.846236 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 12 18:46:49.893993 augenrules[1797]: No rules
Dec 12 18:46:49.895879 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 12 18:46:49.896161 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 12 18:46:49.897304 sudo[1774]: pam_unix(sudo:session): session closed for user root
Dec 12 18:46:49.898971 sshd[1773]: Connection closed by 10.0.0.1 port 53014
Dec 12 18:46:49.899337 sshd-session[1770]: pam_unix(sshd:session): session closed for user core
Dec 12 18:46:49.912310 systemd[1]: sshd@7-10.0.0.117:22-10.0.0.1:53014.service: Deactivated successfully.
Dec 12 18:46:49.914197 systemd[1]: session-8.scope: Deactivated successfully.
Dec 12 18:46:49.914885 systemd-logind[1562]: Session 8 logged out. Waiting for processes to exit.
Dec 12 18:46:49.917436 systemd[1]: Started sshd@8-10.0.0.117:22-10.0.0.1:53018.service - OpenSSH per-connection server daemon (10.0.0.1:53018).
Dec 12 18:46:49.917983 systemd-logind[1562]: Removed session 8.
Dec 12 18:46:49.958604 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 12 18:46:49.960321 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 12 18:46:49.973435 sshd[1806]: Accepted publickey for core from 10.0.0.1 port 53018 ssh2: RSA SHA256:Jfli01egnEuwhQmCMAJ3v0pvzQgHZ6Pm5V0wlrgQH5U
Dec 12 18:46:49.975167 sshd-session[1806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:46:49.980605 systemd-logind[1562]: New session 9 of user core.
Dec 12 18:46:49.986817 systemd[1]: Started session-9.scope - Session 9 of User core.
Dec 12 18:46:50.040275 sudo[1815]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 12 18:46:50.040626 sudo[1815]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 12 18:46:50.255116 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 18:46:50.276238 (kubelet)[1830]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 12 18:46:50.411866 kubelet[1830]: E1212 18:46:50.411791 1830 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 12 18:46:50.419032 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 12 18:46:50.419265 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 12 18:46:50.419770 systemd[1]: kubelet.service: Consumed 340ms CPU time, 111.3M memory peak.
Dec 12 18:46:50.955743 systemd[1]: Starting docker.service - Docker Application Container Engine...
Dec 12 18:46:50.980286 (dockerd)[1849]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Dec 12 18:46:51.713344 dockerd[1849]: time="2025-12-12T18:46:51.713246531Z" level=info msg="Starting up"
Dec 12 18:46:51.714267 dockerd[1849]: time="2025-12-12T18:46:51.714205089Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Dec 12 18:46:51.738719 dockerd[1849]: time="2025-12-12T18:46:51.738634313Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Dec 12 18:46:52.088963 dockerd[1849]: time="2025-12-12T18:46:52.088818869Z" level=info msg="Loading containers: start."
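
At this point kubelet is in a restart loop: each attempt exits because /var/lib/kubelet/config.yaml does not exist yet (on a kubeadm-provisioned node that file is normally written by kubeadm init or kubeadm join), and systemd reschedules the unit, counter 1 above and counter 2 further down, roughly 10.7 s later. A minimal Python sketch for measuring that cadence from the journal, under the same timestamp assumptions as earlier:

    from datetime import datetime

    def ts(line):
        # Year is assumed; the journal prefix omits it.
        return datetime.strptime("2025 " + " ".join(line.split()[:3]),
                                 "%Y %b %d %H:%M:%S.%f")

    def restart_gaps(lines):
        marks = [ts(l) for l in lines if "Scheduled restart job" in l]
        return [(b - a).total_seconds() for a, b in zip(marks, marks[1:])]
    # Here: a single gap of ~10.7 s between counter 1 (18:46:49.958)
    # and counter 2 (18:47:00.666).
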
Dec 12 18:46:52.102638 kernel: Initializing XFRM netlink socket
Dec 12 18:46:52.411343 systemd-networkd[1493]: docker0: Link UP
Dec 12 18:46:52.418701 dockerd[1849]: time="2025-12-12T18:46:52.418651305Z" level=info msg="Loading containers: done."
Dec 12 18:46:52.438387 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1254855407-merged.mount: Deactivated successfully.
Dec 12 18:46:52.443299 dockerd[1849]: time="2025-12-12T18:46:52.443250819Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Dec 12 18:46:52.443395 dockerd[1849]: time="2025-12-12T18:46:52.443370684Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Dec 12 18:46:52.443507 dockerd[1849]: time="2025-12-12T18:46:52.443483265Z" level=info msg="Initializing buildkit"
Dec 12 18:46:52.480848 dockerd[1849]: time="2025-12-12T18:46:52.480785186Z" level=info msg="Completed buildkit initialization"
Dec 12 18:46:52.486663 dockerd[1849]: time="2025-12-12T18:46:52.486623864Z" level=info msg="Daemon has completed initialization"
Dec 12 18:46:52.486813 dockerd[1849]: time="2025-12-12T18:46:52.486727919Z" level=info msg="API listen on /run/docker.sock"
Dec 12 18:46:52.486926 systemd[1]: Started docker.service - Docker Application Container Engine.
Dec 12 18:46:53.406174 containerd[1579]: time="2025-12-12T18:46:53.406117314Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\""
Dec 12 18:46:55.364861 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3576736583.mount: Deactivated successfully.
Dec 12 18:46:56.709613 containerd[1579]: time="2025-12-12T18:46:56.709533628Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:46:56.710571 containerd[1579]: time="2025-12-12T18:46:56.710527281Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.10: active requests=0, bytes read=29072183"
Dec 12 18:46:56.711996 containerd[1579]: time="2025-12-12T18:46:56.711943637Z" level=info msg="ImageCreate event name:\"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:46:56.714439 containerd[1579]: time="2025-12-12T18:46:56.714402638Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:46:56.715289 containerd[1579]: time="2025-12-12T18:46:56.715262029Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.10\" with image id \"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\", size \"29068782\" in 3.309098038s"
Dec 12 18:46:56.715359 containerd[1579]: time="2025-12-12T18:46:56.715297155Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\" returns image reference \"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\""
Dec 12 18:46:56.715911 containerd[1579]: time="2025-12-12T18:46:56.715888153Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\""
Dec 12 18:46:58.290977 containerd[1579]: time="2025-12-12T18:46:58.290795624Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:46:58.294211 containerd[1579]: time="2025-12-12T18:46:58.293628937Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.10: active requests=0, bytes read=24992010"
Dec 12 18:46:58.298961 containerd[1579]: time="2025-12-12T18:46:58.296540626Z" level=info msg="ImageCreate event name:\"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:46:58.301712 containerd[1579]: time="2025-12-12T18:46:58.300327106Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:46:58.301712 containerd[1579]: time="2025-12-12T18:46:58.301299819Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.10\" with image id \"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\", size \"26649046\" in 1.585379656s"
Dec 12 18:46:58.302546 containerd[1579]: time="2025-12-12T18:46:58.302221197Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\" returns image reference \"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\""
Dec 12 18:46:58.303022 containerd[1579]: time="2025-12-12T18:46:58.302993495Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\""
Dec 12 18:46:59.513224 containerd[1579]: time="2025-12-12T18:46:59.513157282Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:46:59.513977 containerd[1579]: time="2025-12-12T18:46:59.513940110Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.10: active requests=0, bytes read=19404248"
Dec 12 18:46:59.515131 containerd[1579]: time="2025-12-12T18:46:59.515093502Z" level=info msg="ImageCreate event name:\"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:46:59.517860 containerd[1579]: time="2025-12-12T18:46:59.517817800Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:46:59.518965 containerd[1579]: time="2025-12-12T18:46:59.518925437Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.10\" with image id \"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\", size \"21061302\" in 1.215898228s"
Dec 12 18:46:59.519003 containerd[1579]: time="2025-12-12T18:46:59.518963328Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\" returns image reference \"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\""
Dec 12 18:46:59.519562 containerd[1579]: time="2025-12-12T18:46:59.519408453Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\""
Dec 12 18:47:00.666148 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 12 18:47:00.667946 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 12 18:47:00.886678 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 18:47:00.902992 (kubelet)[2141]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 12 18:47:01.116703 kubelet[2141]: E1212 18:47:01.116538 2141 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 12 18:47:01.120883 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 12 18:47:01.121117 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 12 18:47:01.121526 systemd[1]: kubelet.service: Consumed 401ms CPU time, 110.8M memory peak.
Dec 12 18:47:01.702398 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1853894774.mount: Deactivated successfully.
Dec 12 18:47:02.415752 containerd[1579]: time="2025-12-12T18:47:02.415677306Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:47:02.417365 containerd[1579]: time="2025-12-12T18:47:02.417305639Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.10: active requests=0, bytes read=31161423"
Dec 12 18:47:02.418905 containerd[1579]: time="2025-12-12T18:47:02.418829436Z" level=info msg="ImageCreate event name:\"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:47:02.423343 containerd[1579]: time="2025-12-12T18:47:02.422737063Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:47:02.423813 containerd[1579]: time="2025-12-12T18:47:02.423742148Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.10\" with image id \"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\", repo tag \"registry.k8s.io/kube-proxy:v1.32.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\", size \"31160442\" in 2.90429375s"
Dec 12 18:47:02.423813 containerd[1579]: time="2025-12-12T18:47:02.423811418Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\" returns image reference \"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\""
Dec 12 18:47:02.424514 containerd[1579]: time="2025-12-12T18:47:02.424479029Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Dec 12 18:47:03.024865 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2053743999.mount: Deactivated successfully.
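
Dividing "bytes read" by the pull duration containerd reports gives the effective throughput of the four pulls completed so far; the figures below are taken verbatim from the entries above.

    pulls = {
        "kube-apiserver:v1.32.10":          (29_072_183, 3.309098038),
        "kube-controller-manager:v1.32.10": (24_992_010, 1.585379656),
        "kube-scheduler:v1.32.10":          (19_404_248, 1.215898228),
        "kube-proxy:v1.32.10":              (31_161_423, 2.90429375),
    }
    for name, (nbytes, secs) in pulls.items():
        print(f"{name}: {nbytes / secs / 1e6:.1f} MB/s")
    # ~8.8, ~15.8, ~16.0 and ~10.7 MB/s respectively
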
Dec 12 18:47:03.991970 containerd[1579]: time="2025-12-12T18:47:03.991893631Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:47:03.992746 containerd[1579]: time="2025-12-12T18:47:03.992697088Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Dec 12 18:47:03.994547 containerd[1579]: time="2025-12-12T18:47:03.994488467Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:47:03.997116 containerd[1579]: time="2025-12-12T18:47:03.997082400Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:47:03.998254 containerd[1579]: time="2025-12-12T18:47:03.998221356Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.573705077s"
Dec 12 18:47:03.998308 containerd[1579]: time="2025-12-12T18:47:03.998257534Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Dec 12 18:47:03.998732 containerd[1579]: time="2025-12-12T18:47:03.998701386Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Dec 12 18:47:04.599773 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3160964031.mount: Deactivated successfully.
Dec 12 18:47:04.606118 containerd[1579]: time="2025-12-12T18:47:04.606080632Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 12 18:47:04.606818 containerd[1579]: time="2025-12-12T18:47:04.606789952Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Dec 12 18:47:04.608049 containerd[1579]: time="2025-12-12T18:47:04.608008927Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 12 18:47:04.610008 containerd[1579]: time="2025-12-12T18:47:04.609972589Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 12 18:47:04.610570 containerd[1579]: time="2025-12-12T18:47:04.610539973Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 611.811997ms"
Dec 12 18:47:04.610570 containerd[1579]: time="2025-12-12T18:47:04.610565651Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Dec 12 18:47:04.611036 containerd[1579]: time="2025-12-12T18:47:04.611006979Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Dec 12 18:47:05.177834 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1202921394.mount: Deactivated successfully.
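Aside: the "Pulled image ... size ... in ..." records carry enough data to estimate effective registry throughput. A quick worked example with the sizes (bytes) and durations exactly as logged:

```python
# Effective pull throughput from the "Pulled image ... size ... in ..." records above.
pulls = {
    "kube-proxy:v1.32.10": (31_160_442, 2.90429375),
    "coredns:v1.11.3":     (18_562_039, 1.573705077),
    "pause:3.10":          (320_368,    0.611811997),
}
for image, (size_bytes, seconds) in pulls.items():
    print(f"{image}: {size_bytes / seconds / 1e6:.1f} MB/s")
# kube-proxy ~10.7 MB/s and coredns ~11.8 MB/s look bandwidth-bound; the tiny
# pause image (~0.5 MB/s) is dominated by per-request registry overhead.
```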
Dec 12 18:47:07.709082 containerd[1579]: time="2025-12-12T18:47:07.708916217Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:47:07.711685 containerd[1579]: time="2025-12-12T18:47:07.711632069Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056"
Dec 12 18:47:07.715069 containerd[1579]: time="2025-12-12T18:47:07.713878690Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:47:07.721288 containerd[1579]: time="2025-12-12T18:47:07.719516711Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:47:07.721288 containerd[1579]: time="2025-12-12T18:47:07.720784073Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.109748239s"
Dec 12 18:47:07.721288 containerd[1579]: time="2025-12-12T18:47:07.720818699Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Dec 12 18:47:10.637318 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 18:47:10.637943 systemd[1]: kubelet.service: Consumed 401ms CPU time, 110.8M memory peak.
Dec 12 18:47:10.649374 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 12 18:47:10.740815 systemd[1]: Reload requested from client PID 2298 ('systemctl') (unit session-9.scope)...
Dec 12 18:47:10.740841 systemd[1]: Reloading...
Dec 12 18:47:10.934685 zram_generator::config[2341]: No configuration found.
Dec 12 18:47:11.647064 systemd[1]: Reloading finished in 905 ms.
Dec 12 18:47:11.722714 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Dec 12 18:47:11.722832 systemd[1]: kubelet.service: Failed with result 'signal'.
Dec 12 18:47:11.723202 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 18:47:11.723269 systemd[1]: kubelet.service: Consumed 207ms CPU time, 98.2M memory peak.
Dec 12 18:47:11.725016 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 12 18:47:11.961041 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 18:47:11.972125 (kubelet)[2389]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 12 18:47:12.345902 kubelet[2389]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 12 18:47:12.345902 kubelet[2389]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Dec 12 18:47:12.345902 kubelet[2389]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 12 18:47:12.345902 kubelet[2389]: I1212 18:47:12.345875 2389 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 12 18:47:13.951356 kubelet[2389]: I1212 18:47:13.951290 2389 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Dec 12 18:47:13.951356 kubelet[2389]: I1212 18:47:13.951334 2389 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 12 18:47:13.951812 kubelet[2389]: I1212 18:47:13.951734 2389 server.go:954] "Client rotation is on, will bootstrap in background"
Dec 12 18:47:13.995750 kubelet[2389]: E1212 18:47:13.995692 2389 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.117:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError"
Dec 12 18:47:13.997014 kubelet[2389]: I1212 18:47:13.996985 2389 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 12 18:47:14.005340 kubelet[2389]: I1212 18:47:14.005310 2389 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 12 18:47:14.010653 kubelet[2389]: I1212 18:47:14.010185 2389 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 12 18:47:14.015298 kubelet[2389]: I1212 18:47:14.015214 2389 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 12 18:47:14.015522 kubelet[2389]: I1212 18:47:14.015281 2389 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 12 18:47:14.015522 kubelet[2389]: I1212 18:47:14.015519 2389 topology_manager.go:138] "Creating topology manager with none policy"
Dec 12 18:47:14.015733 kubelet[2389]: I1212 18:47:14.015531 2389 container_manager_linux.go:304] "Creating device plugin manager"
Dec 12 18:47:14.015813 kubelet[2389]: I1212 18:47:14.015788 2389 state_mem.go:36] "Initialized new in-memory state store"
Dec 12 18:47:14.020574 kubelet[2389]: I1212 18:47:14.020328 2389 kubelet.go:446] "Attempting to sync node with API server"
Dec 12 18:47:14.020574 kubelet[2389]: I1212 18:47:14.020543 2389 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 12 18:47:14.020892 kubelet[2389]: I1212 18:47:14.020861 2389 kubelet.go:352] "Adding apiserver pod source"
Dec 12 18:47:14.020892 kubelet[2389]: I1212 18:47:14.020884 2389 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 12 18:47:14.033532 kubelet[2389]: I1212 18:47:14.033482 2389 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Dec 12 18:47:14.034104 kubelet[2389]: I1212 18:47:14.034076 2389 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 12 18:47:14.034191 kubelet[2389]: W1212 18:47:14.034168 2389 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 12 18:47:14.034336 kubelet[2389]: W1212 18:47:14.034274 2389 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused
Dec 12 18:47:14.034383 kubelet[2389]: E1212 18:47:14.034356 2389 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError"
Dec 12 18:47:14.036050 kubelet[2389]: W1212 18:47:14.036005 2389 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.117:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused
Dec 12 18:47:14.036121 kubelet[2389]: E1212 18:47:14.036059 2389 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.117:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError"
Dec 12 18:47:14.036960 kubelet[2389]: I1212 18:47:14.036933 2389 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Dec 12 18:47:14.037017 kubelet[2389]: I1212 18:47:14.036983 2389 server.go:1287] "Started kubelet"
Dec 12 18:47:14.037733 kubelet[2389]: I1212 18:47:14.037675 2389 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Dec 12 18:47:14.041229 kubelet[2389]: I1212 18:47:14.041204 2389 server.go:479] "Adding debug handlers to kubelet server"
Dec 12 18:47:14.041977 kubelet[2389]: I1212 18:47:14.041949 2389 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 12 18:47:14.043345 kubelet[2389]: I1212 18:47:14.042680 2389 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 12 18:47:14.043345 kubelet[2389]: I1212 18:47:14.043094 2389 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 12 18:47:14.043491 kubelet[2389]: I1212 18:47:14.043460 2389 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Dec 12 18:47:14.046730 kubelet[2389]: I1212 18:47:14.046103 2389 volume_manager.go:297] "Starting Kubelet Volume Manager"
Dec 12 18:47:14.046730 kubelet[2389]: I1212 18:47:14.046399 2389 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Dec 12 18:47:14.046730 kubelet[2389]: I1212 18:47:14.046542 2389 reconciler.go:26] "Reconciler: start to sync state"
Dec 12 18:47:14.046730 kubelet[2389]: E1212 18:47:14.046608 2389 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 12 18:47:14.047025 kubelet[2389]: W1212 18:47:14.046975 2389 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused
Dec 12 18:47:14.047071 kubelet[2389]: E1212 18:47:14.047043 2389 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError"
Dec 12 18:47:14.047157 kubelet[2389]: E1212 18:47:14.047117 2389 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.117:6443: connect: connection refused" interval="200ms"
Dec 12 18:47:14.049236 kubelet[2389]: E1212 18:47:14.047676 2389 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.117:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.117:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18808c39da8d51e9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-12-12 18:47:14.036953577 +0000 UTC m=+2.059464299,LastTimestamp:2025-12-12 18:47:14.036953577 +0000 UTC m=+2.059464299,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Dec 12 18:47:14.050691 kubelet[2389]: I1212 18:47:14.049401 2389 factory.go:221] Registration of the systemd container factory successfully
Dec 12 18:47:14.050691 kubelet[2389]: I1212 18:47:14.049509 2389 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 12 18:47:14.050691 kubelet[2389]: E1212 18:47:14.049931 2389 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 12 18:47:14.050820 kubelet[2389]: I1212 18:47:14.050782 2389 factory.go:221] Registration of the containerd container factory successfully
Dec 12 18:47:14.074919 kubelet[2389]: I1212 18:47:14.073654 2389 cpu_manager.go:221] "Starting CPU manager" policy="none"
Dec 12 18:47:14.074919 kubelet[2389]: I1212 18:47:14.073678 2389 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Dec 12 18:47:14.074919 kubelet[2389]: I1212 18:47:14.073705 2389 state_mem.go:36] "Initialized new in-memory state store"
Dec 12 18:47:14.074919 kubelet[2389]: I1212 18:47:14.074833 2389 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 12 18:47:14.076784 kubelet[2389]: I1212 18:47:14.076746 2389 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 12 18:47:14.076863 kubelet[2389]: I1212 18:47:14.076795 2389 status_manager.go:227] "Starting to sync pod status with apiserver"
Dec 12 18:47:14.076863 kubelet[2389]: I1212 18:47:14.076827 2389 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Dec 12 18:47:14.076863 kubelet[2389]: I1212 18:47:14.076834 2389 kubelet.go:2382] "Starting kubelet main sync loop"
Dec 12 18:47:14.076960 kubelet[2389]: E1212 18:47:14.076887 2389 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 12 18:47:14.080223 kubelet[2389]: I1212 18:47:14.079900 2389 policy_none.go:49] "None policy: Start"
Dec 12 18:47:14.080223 kubelet[2389]: I1212 18:47:14.079925 2389 memory_manager.go:186] "Starting memorymanager" policy="None"
Dec 12 18:47:14.080223 kubelet[2389]: I1212 18:47:14.079940 2389 state_mem.go:35] "Initializing new in-memory state store"
Dec 12 18:47:14.081124 kubelet[2389]: W1212 18:47:14.081053 2389 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused
Dec 12 18:47:14.081260 kubelet[2389]: E1212 18:47:14.081237 2389 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError"
Dec 12 18:47:14.152879 kubelet[2389]: E1212 18:47:14.152801 2389 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 12 18:47:14.156335 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Dec 12 18:47:14.171309 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Dec 12 18:47:14.174727 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
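All the "connection refused" noise against https://10.0.0.117:6443 is expected at this point: this kubelet is about to launch the very kube-apiserver it is trying to talk to, so its clients simply retry. A minimal sketch of that retry shape, mirroring the lease controller's logged doubling intervals (200ms, then 400ms, then 800ms); the endpoint, cap, and helper are illustrative, not kubelet code:

```python
# Sketch: retry with doubling backoff against a not-yet-started API server.
import time
import urllib.error
import urllib.request

def wait_for_apiserver(url: str = "https://10.0.0.117:6443/healthz",
                       interval: float = 0.2,
                       max_interval: float = 7.0) -> None:
    while True:
        try:
            with urllib.request.urlopen(url, timeout=2):
                return  # the static-pod apiserver is finally answering
        except (urllib.error.URLError, OSError) as err:
            print(f"retrying in {interval:.1f}s: {err}")
            time.sleep(interval)
            interval = min(interval * 2, max_interval)  # 0.2 -> 0.4 -> 0.8 ... as in the log
```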
Dec 12 18:47:14.177578 kubelet[2389]: E1212 18:47:14.177536 2389 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Dec 12 18:47:14.189935 kubelet[2389]: I1212 18:47:14.189883 2389 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 12 18:47:14.190175 kubelet[2389]: I1212 18:47:14.190158 2389 eviction_manager.go:189] "Eviction manager: starting control loop"
Dec 12 18:47:14.190239 kubelet[2389]: I1212 18:47:14.190175 2389 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 12 18:47:14.190502 kubelet[2389]: I1212 18:47:14.190482 2389 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 12 18:47:14.191494 kubelet[2389]: E1212 18:47:14.191450 2389 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Dec 12 18:47:14.191574 kubelet[2389]: E1212 18:47:14.191531 2389 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Dec 12 18:47:14.248116 kubelet[2389]: E1212 18:47:14.247975 2389 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.117:6443: connect: connection refused" interval="400ms"
Dec 12 18:47:14.291949 kubelet[2389]: I1212 18:47:14.291909 2389 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Dec 12 18:47:14.292344 kubelet[2389]: E1212 18:47:14.292301 2389 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.117:6443/api/v1/nodes\": dial tcp 10.0.0.117:6443: connect: connection refused" node="localhost"
Dec 12 18:47:14.388247 systemd[1]: Created slice kubepods-burstable-pod0f6522d0fc18d0f969e2a501af848af8.slice - libcontainer container kubepods-burstable-pod0f6522d0fc18d0f969e2a501af848af8.slice.
Dec 12 18:47:14.403790 kubelet[2389]: E1212 18:47:14.403740 2389 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Dec 12 18:47:14.406219 systemd[1]: Created slice kubepods-burstable-pod55d9ac750f8c9141f337af8b08cf5c9d.slice - libcontainer container kubepods-burstable-pod55d9ac750f8c9141f337af8b08cf5c9d.slice.
Dec 12 18:47:14.409613 kubelet[2389]: E1212 18:47:14.409544 2389 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Dec 12 18:47:14.413073 systemd[1]: Created slice kubepods-burstable-pod0a68423804124305a9de061f38780871.slice - libcontainer container kubepods-burstable-pod0a68423804124305a9de061f38780871.slice.
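The "Created slice kubepods-burstable-pod<hash>.slice" lines show the systemd cgroup driver at work: each pod gets a slice under its QoS class, named from the pod UID. A small sketch of that naming convention; the helper is illustrative rather than the kubelet's own code, and note the kubelet also rewrites "-" in dashed UIDs (the hex hashes above contain none):

```python
# Sketch: derive a pod's systemd slice name, as in the "Created slice" records.
def pod_slice(pod_uid: str, qos_class: str = "burstable") -> str:
    # Guaranteed pods sit directly under kubepods.slice; burstable and
    # besteffort pods get an intermediate kubepods-<qos>.slice parent.
    parent = "kubepods" if qos_class == "guaranteed" else f"kubepods-{qos_class}"
    uid = pod_uid.replace("-", "_")  # dashes encode hierarchy in slice names
    return f"{parent}-pod{uid}.slice"

print(pod_slice("0f6522d0fc18d0f969e2a501af848af8"))
# -> kubepods-burstable-pod0f6522d0fc18d0f969e2a501af848af8.slice
```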
Dec 12 18:47:14.415113 kubelet[2389]: E1212 18:47:14.415067 2389 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Dec 12 18:47:14.454687 kubelet[2389]: I1212 18:47:14.454611 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0f6522d0fc18d0f969e2a501af848af8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0f6522d0fc18d0f969e2a501af848af8\") " pod="kube-system/kube-apiserver-localhost"
Dec 12 18:47:14.454687 kubelet[2389]: I1212 18:47:14.454687 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0f6522d0fc18d0f969e2a501af848af8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0f6522d0fc18d0f969e2a501af848af8\") " pod="kube-system/kube-apiserver-localhost"
Dec 12 18:47:14.454894 kubelet[2389]: I1212 18:47:14.454752 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost"
Dec 12 18:47:14.454894 kubelet[2389]: I1212 18:47:14.454778 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost"
Dec 12 18:47:14.454894 kubelet[2389]: I1212 18:47:14.454801 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost"
Dec 12 18:47:14.454894 kubelet[2389]: I1212 18:47:14.454818 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0f6522d0fc18d0f969e2a501af848af8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0f6522d0fc18d0f969e2a501af848af8\") " pod="kube-system/kube-apiserver-localhost"
Dec 12 18:47:14.454894 kubelet[2389]: I1212 18:47:14.454858 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost"
Dec 12 18:47:14.455060 kubelet[2389]: I1212 18:47:14.454883 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost"
Dec 12 18:47:14.455060 kubelet[2389]: I1212 18:47:14.454921 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0a68423804124305a9de061f38780871-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0a68423804124305a9de061f38780871\") " pod="kube-system/kube-scheduler-localhost"
Dec 12 18:47:14.494179 kubelet[2389]: I1212 18:47:14.493796 2389 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Dec 12 18:47:14.494179 kubelet[2389]: E1212 18:47:14.494150 2389 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.117:6443/api/v1/nodes\": dial tcp 10.0.0.117:6443: connect: connection refused" node="localhost"
Dec 12 18:47:14.648851 kubelet[2389]: E1212 18:47:14.648708 2389 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.117:6443: connect: connection refused" interval="800ms"
Dec 12 18:47:14.705250 kubelet[2389]: E1212 18:47:14.705193 2389 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 12 18:47:14.706082 containerd[1579]: time="2025-12-12T18:47:14.706032709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0f6522d0fc18d0f969e2a501af848af8,Namespace:kube-system,Attempt:0,}"
Dec 12 18:47:14.710235 kubelet[2389]: E1212 18:47:14.710212 2389 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 12 18:47:14.710656 containerd[1579]: time="2025-12-12T18:47:14.710604339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:55d9ac750f8c9141f337af8b08cf5c9d,Namespace:kube-system,Attempt:0,}"
Dec 12 18:47:14.715866 kubelet[2389]: E1212 18:47:14.715832 2389 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 12 18:47:14.716255 containerd[1579]: time="2025-12-12T18:47:14.716199226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0a68423804124305a9de061f38780871,Namespace:kube-system,Attempt:0,}"
Dec 12 18:47:14.739667 containerd[1579]: time="2025-12-12T18:47:14.739613967Z" level=info msg="connecting to shim edddfbaacff282c33f330b10133eb6ad8e7c7828d15c7447be35cdfa6cd8daf8" address="unix:///run/containerd/s/b5bb924f27e921f35da34e95f6f32fdd160b4f93981d976b4b0fff9a3872bb09" namespace=k8s.io protocol=ttrpc version=3
Dec 12 18:47:14.748696 containerd[1579]: time="2025-12-12T18:47:14.748644422Z" level=info msg="connecting to shim becf4c7812900cd830a42bb376f069eeb2102979325849477b30db624abda525" address="unix:///run/containerd/s/3006f48a9d0ed1dde55784bc3c8ab7dcafd2bb76f61543c5eee82a26c21c2f8d" namespace=k8s.io protocol=ttrpc version=3
Dec 12 18:47:14.760852 containerd[1579]: time="2025-12-12T18:47:14.760759799Z" level=info msg="connecting to shim f17a4c668ad60ed1ef3eb6b4ba0e757e8d1ce7a92a4e6fca3400be2ca819066c" address="unix:///run/containerd/s/22a7caf14afedf5de6cf4dd4e0c60031118224044567c6db841dc096fdfda217" namespace=k8s.io protocol=ttrpc version=3
Dec 12 18:47:14.772807 systemd[1]: Started cri-containerd-edddfbaacff282c33f330b10133eb6ad8e7c7828d15c7447be35cdfa6cd8daf8.scope - libcontainer container edddfbaacff282c33f330b10133eb6ad8e7c7828d15c7447be35cdfa6cd8daf8.
Dec 12 18:47:14.779245 systemd[1]: Started cri-containerd-becf4c7812900cd830a42bb376f069eeb2102979325849477b30db624abda525.scope - libcontainer container becf4c7812900cd830a42bb376f069eeb2102979325849477b30db624abda525.
Dec 12 18:47:14.787812 systemd[1]: Started cri-containerd-f17a4c668ad60ed1ef3eb6b4ba0e757e8d1ce7a92a4e6fca3400be2ca819066c.scope - libcontainer container f17a4c668ad60ed1ef3eb6b4ba0e757e8d1ce7a92a4e6fca3400be2ca819066c.
Dec 12 18:47:14.837244 containerd[1579]: time="2025-12-12T18:47:14.837173560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0f6522d0fc18d0f969e2a501af848af8,Namespace:kube-system,Attempt:0,} returns sandbox id \"edddfbaacff282c33f330b10133eb6ad8e7c7828d15c7447be35cdfa6cd8daf8\""
Dec 12 18:47:14.838492 kubelet[2389]: E1212 18:47:14.838448 2389 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 12 18:47:14.842366 containerd[1579]: time="2025-12-12T18:47:14.841973254Z" level=info msg="CreateContainer within sandbox \"edddfbaacff282c33f330b10133eb6ad8e7c7828d15c7447be35cdfa6cd8daf8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Dec 12 18:47:14.845488 containerd[1579]: time="2025-12-12T18:47:14.845451033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:55d9ac750f8c9141f337af8b08cf5c9d,Namespace:kube-system,Attempt:0,} returns sandbox id \"becf4c7812900cd830a42bb376f069eeb2102979325849477b30db624abda525\""
Dec 12 18:47:14.846129 kubelet[2389]: E1212 18:47:14.846080 2389 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 12 18:47:14.846784 kubelet[2389]: W1212 18:47:14.846739 2389 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused
Dec 12 18:47:14.846826 kubelet[2389]: E1212 18:47:14.846795 2389 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError"
Dec 12 18:47:14.848939 containerd[1579]: time="2025-12-12T18:47:14.848909705Z" level=info msg="CreateContainer within sandbox \"becf4c7812900cd830a42bb376f069eeb2102979325849477b30db624abda525\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Dec 12 18:47:14.850773 containerd[1579]: time="2025-12-12T18:47:14.850734859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0a68423804124305a9de061f38780871,Namespace:kube-system,Attempt:0,} returns sandbox id \"f17a4c668ad60ed1ef3eb6b4ba0e757e8d1ce7a92a4e6fca3400be2ca819066c\""
Dec 12 18:47:14.851325 kubelet[2389]: E1212 18:47:14.851290 2389 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 12 18:47:14.852713 containerd[1579]: time="2025-12-12T18:47:14.852684790Z" level=info msg="CreateContainer within sandbox \"f17a4c668ad60ed1ef3eb6b4ba0e757e8d1ce7a92a4e6fca3400be2ca819066c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Dec 12 18:47:14.861574 containerd[1579]: time="2025-12-12T18:47:14.861509003Z" level=info msg="Container 916020e4c4682b45f2520b967b8c9061675bfab22dd56f3920fa19a9e66080c8: CDI devices from CRI Config.CDIDevices: []"
Dec 12 18:47:14.865348 containerd[1579]: time="2025-12-12T18:47:14.865293846Z" level=info msg="Container 51dac74f22b44d1cad0375cbd62189b9b1a8a7d0f3ca504954f76778afc7db89: CDI devices from CRI Config.CDIDevices: []"
Dec 12 18:47:14.870237 containerd[1579]: time="2025-12-12T18:47:14.870201826Z" level=info msg="CreateContainer within sandbox \"edddfbaacff282c33f330b10133eb6ad8e7c7828d15c7447be35cdfa6cd8daf8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"916020e4c4682b45f2520b967b8c9061675bfab22dd56f3920fa19a9e66080c8\""
Dec 12 18:47:14.870935 containerd[1579]: time="2025-12-12T18:47:14.870901978Z" level=info msg="StartContainer for \"916020e4c4682b45f2520b967b8c9061675bfab22dd56f3920fa19a9e66080c8\""
Dec 12 18:47:14.872270 containerd[1579]: time="2025-12-12T18:47:14.872244092Z" level=info msg="connecting to shim 916020e4c4682b45f2520b967b8c9061675bfab22dd56f3920fa19a9e66080c8" address="unix:///run/containerd/s/b5bb924f27e921f35da34e95f6f32fdd160b4f93981d976b4b0fff9a3872bb09" protocol=ttrpc version=3
Dec 12 18:47:14.874792 containerd[1579]: time="2025-12-12T18:47:14.874758838Z" level=info msg="Container 4f94c7bcb4741bf0f0704226768eccc0d426d9475e8c2b20ea27b19f038619d7: CDI devices from CRI Config.CDIDevices: []"
Dec 12 18:47:14.878772 containerd[1579]: time="2025-12-12T18:47:14.878732581Z" level=info msg="CreateContainer within sandbox \"becf4c7812900cd830a42bb376f069eeb2102979325849477b30db624abda525\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"51dac74f22b44d1cad0375cbd62189b9b1a8a7d0f3ca504954f76778afc7db89\""
Dec 12 18:47:14.879518 containerd[1579]: time="2025-12-12T18:47:14.879494991Z" level=info msg="StartContainer for \"51dac74f22b44d1cad0375cbd62189b9b1a8a7d0f3ca504954f76778afc7db89\""
Dec 12 18:47:14.880970 containerd[1579]: time="2025-12-12T18:47:14.880896458Z" level=info msg="connecting to shim 51dac74f22b44d1cad0375cbd62189b9b1a8a7d0f3ca504954f76778afc7db89" address="unix:///run/containerd/s/3006f48a9d0ed1dde55784bc3c8ab7dcafd2bb76f61543c5eee82a26c21c2f8d" protocol=ttrpc version=3
Dec 12 18:47:14.885430 containerd[1579]: time="2025-12-12T18:47:14.885396623Z" level=info msg="CreateContainer within sandbox \"f17a4c668ad60ed1ef3eb6b4ba0e757e8d1ce7a92a4e6fca3400be2ca819066c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4f94c7bcb4741bf0f0704226768eccc0d426d9475e8c2b20ea27b19f038619d7\""
Dec 12 18:47:14.886062 containerd[1579]: time="2025-12-12T18:47:14.886035599Z" level=info msg="StartContainer for \"4f94c7bcb4741bf0f0704226768eccc0d426d9475e8c2b20ea27b19f038619d7\""
Dec 12 18:47:14.887326 containerd[1579]: time="2025-12-12T18:47:14.887304303Z" level=info msg="connecting to shim 4f94c7bcb4741bf0f0704226768eccc0d426d9475e8c2b20ea27b19f038619d7" address="unix:///run/containerd/s/22a7caf14afedf5de6cf4dd4e0c60031118224044567c6db841dc096fdfda217" protocol=ttrpc version=3
Dec 12 18:47:14.893875 systemd[1]: Started cri-containerd-916020e4c4682b45f2520b967b8c9061675bfab22dd56f3920fa19a9e66080c8.scope - libcontainer container 916020e4c4682b45f2520b967b8c9061675bfab22dd56f3920fa19a9e66080c8.
Dec 12 18:47:14.896801 kubelet[2389]: I1212 18:47:14.896772 2389 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Dec 12 18:47:14.897105 kubelet[2389]: E1212 18:47:14.897078 2389 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.117:6443/api/v1/nodes\": dial tcp 10.0.0.117:6443: connect: connection refused" node="localhost"
Dec 12 18:47:14.901084 kubelet[2389]: W1212 18:47:14.900674 2389 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.117:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused
Dec 12 18:47:14.901084 kubelet[2389]: E1212 18:47:14.900840 2389 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.117:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError"
Dec 12 18:47:14.906773 systemd[1]: Started cri-containerd-51dac74f22b44d1cad0375cbd62189b9b1a8a7d0f3ca504954f76778afc7db89.scope - libcontainer container 51dac74f22b44d1cad0375cbd62189b9b1a8a7d0f3ca504954f76778afc7db89.
Dec 12 18:47:14.910150 systemd[1]: Started cri-containerd-4f94c7bcb4741bf0f0704226768eccc0d426d9475e8c2b20ea27b19f038619d7.scope - libcontainer container 4f94c7bcb4741bf0f0704226768eccc0d426d9475e8c2b20ea27b19f038619d7.
Dec 12 18:47:14.994375 kubelet[2389]: W1212 18:47:14.994291 2389 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused
Dec 12 18:47:14.994375 kubelet[2389]: E1212 18:47:14.994381 2389 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError"
Dec 12 18:47:15.020845 containerd[1579]: time="2025-12-12T18:47:15.020760507Z" level=info msg="StartContainer for \"916020e4c4682b45f2520b967b8c9061675bfab22dd56f3920fa19a9e66080c8\" returns successfully"
Dec 12 18:47:15.023939 containerd[1579]: time="2025-12-12T18:47:15.023844841Z" level=info msg="StartContainer for \"51dac74f22b44d1cad0375cbd62189b9b1a8a7d0f3ca504954f76778afc7db89\" returns successfully"
Dec 12 18:47:15.024431 containerd[1579]: time="2025-12-12T18:47:15.024394697Z" level=info msg="StartContainer for \"4f94c7bcb4741bf0f0704226768eccc0d426d9475e8c2b20ea27b19f038619d7\" returns successfully"
Dec 12 18:47:15.087124 kubelet[2389]: E1212 18:47:15.087078 2389 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Dec 12 18:47:15.087321 kubelet[2389]: E1212 18:47:15.087297 2389 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 12 18:47:15.089106 kubelet[2389]: E1212 18:47:15.089076 2389 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Dec 12 18:47:15.089225 kubelet[2389]: E1212 18:47:15.089204 2389 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 12 18:47:15.092600 kubelet[2389]: E1212 18:47:15.091705 2389 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Dec 12 18:47:15.092600 kubelet[2389]: E1212 18:47:15.091833 2389 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 12 18:47:15.699224 kubelet[2389]: I1212 18:47:15.699186 2389 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Dec 12 18:47:16.094403 kubelet[2389]: E1212 18:47:16.094066 2389 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Dec 12 18:47:16.095264 kubelet[2389]: E1212 18:47:16.095114 2389 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Dec 12 18:47:16.095264 kubelet[2389]: E1212 18:47:16.095175 2389 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 12 18:47:16.095264 kubelet[2389]: E1212 18:47:16.095224 2389 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 12 18:47:16.544098 kubelet[2389]: E1212 18:47:16.542752 2389 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Dec 12 18:47:16.566618 kubelet[2389]: I1212 18:47:16.565646 2389 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Dec 12 18:47:16.566618 kubelet[2389]: E1212 18:47:16.565680 2389 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Dec 12 18:47:16.583013 kubelet[2389]: E1212 18:47:16.582964 2389 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 12 18:47:16.683503 kubelet[2389]: E1212 18:47:16.683430 2389 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 12 18:47:16.784237 kubelet[2389]: E1212 18:47:16.784179 2389 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 12 18:47:16.884942 kubelet[2389]: E1212 18:47:16.884812 2389 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 12 18:47:16.946972 kubelet[2389]: I1212 18:47:16.946911 2389 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Dec 12 18:47:16.952198 kubelet[2389]: E1212 18:47:16.952141 2389 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Dec 12 18:47:16.952198 kubelet[2389]: I1212 18:47:16.952171 2389 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Dec 12 18:47:16.953787 kubelet[2389]: E1212 18:47:16.953746 2389 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Dec 12 18:47:16.953787 kubelet[2389]: I1212 18:47:16.953781 2389 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Dec 12 18:47:16.954936 kubelet[2389]: E1212 18:47:16.954905 2389 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Dec 12 18:47:17.022822 kubelet[2389]: I1212 18:47:17.022754 2389 apiserver.go:52] "Watching apiserver"
Dec 12 18:47:17.046916 kubelet[2389]: I1212 18:47:17.046874 2389 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Dec 12 18:47:17.201260 kubelet[2389]: I1212 18:47:17.201212 2389 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Dec 12 18:47:17.203426 kubelet[2389]: E1212 18:47:17.203394 2389 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Dec 12 18:47:17.203580 kubelet[2389]: E1212 18:47:17.203553 2389 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 12 18:47:18.456472 systemd[1]: Reload requested from client PID 2665 ('systemctl') (unit session-9.scope)...
Dec 12 18:47:18.456489 systemd[1]: Reloading...
Dec 12 18:47:18.542629 zram_generator::config[2712]: No configuration found.
Dec 12 18:47:18.821731 systemd[1]: Reloading finished in 364 ms.
Dec 12 18:47:18.848659 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 12 18:47:18.868375 systemd[1]: kubelet.service: Deactivated successfully.
Dec 12 18:47:18.868765 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 18:47:18.868833 systemd[1]: kubelet.service: Consumed 1.508s CPU time, 131.8M memory peak.
Dec 12 18:47:18.870971 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 12 18:47:18.989881 update_engine[1566]: I20251212 18:47:18.989781 1566 update_attempter.cc:509] Updating boot flags...
Dec 12 18:47:19.360952 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 18:47:19.365702 (kubelet)[2765]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 12 18:47:19.413160 kubelet[2765]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 12 18:47:19.413160 kubelet[2765]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Dec 12 18:47:19.413160 kubelet[2765]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 12 18:47:19.413824 kubelet[2765]: I1212 18:47:19.413200 2765 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 12 18:47:19.426057 kubelet[2765]: I1212 18:47:19.426005 2765 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Dec 12 18:47:19.426057 kubelet[2765]: I1212 18:47:19.426041 2765 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 12 18:47:19.426401 kubelet[2765]: I1212 18:47:19.426372 2765 server.go:954] "Client rotation is on, will bootstrap in background"
Dec 12 18:47:19.427999 kubelet[2765]: I1212 18:47:19.427978 2765 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Dec 12 18:47:19.445835 kubelet[2765]: I1212 18:47:19.445790 2765 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 12 18:47:19.451871 kubelet[2765]: I1212 18:47:19.451833 2765 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 12 18:47:19.466929 kubelet[2765]: I1212 18:47:19.465363 2765 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 12 18:47:19.467347 kubelet[2765]: I1212 18:47:19.467105 2765 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 12 18:47:19.467347 kubelet[2765]: I1212 18:47:19.467144 2765 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 12 18:47:19.467347 kubelet[2765]: I1212 18:47:19.467324 2765 topology_manager.go:138] "Creating topology manager with none policy"
Dec 12 18:47:19.467347 kubelet[2765]: I1212 18:47:19.467333 2765 container_manager_linux.go:304] "Creating device plugin manager"
Dec 12 18:47:19.470656 kubelet[2765]: I1212 18:47:19.467387 2765 state_mem.go:36] "Initialized new in-memory state store"
Dec 12 18:47:19.470656 kubelet[2765]: I1212 18:47:19.467525 2765 kubelet.go:446] "Attempting to sync node with API server"
Dec 12 18:47:19.470656 kubelet[2765]: I1212 18:47:19.467549 2765 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 12 18:47:19.470656 kubelet[2765]: I1212 18:47:19.468967 2765 kubelet.go:352] "Adding apiserver pod source"
Dec 12 18:47:19.470656 kubelet[2765]: I1212 18:47:19.468991 2765 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 12 18:47:19.473609 kubelet[2765]: I1212 18:47:19.472730 2765 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Dec 12 18:47:19.473609 kubelet[2765]: I1212 18:47:19.473114 2765 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 12 18:47:19.473609 kubelet[2765]: I1212 18:47:19.473499 2765 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Dec 12 18:47:19.473609 kubelet[2765]: I1212 18:47:19.473523 2765 server.go:1287] "Started kubelet"
Dec 12 18:47:19.475551 kubelet[2765]: I1212 18:47:19.475504 2765 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 12 18:47:19.476495 kubelet[2765]: I1212 18:47:19.476481 2765 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 12 18:47:19.476639 kubelet[2765]: I1212 18:47:19.476620 2765 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Dec 12 18:47:19.477599 kubelet[2765]: I1212 18:47:19.477570 2765 server.go:479] "Adding debug handlers to kubelet server"
Dec 12 18:47:19.478341 kubelet[2765]: I1212 18:47:19.478309 2765 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 12 18:47:19.479404 kubelet[2765]: I1212 18:47:19.478452 2765 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Dec 12 18:47:19.479635 kubelet[2765]: I1212 18:47:19.479622 2765 volume_manager.go:297] "Starting Kubelet Volume Manager"
Dec 12 18:47:19.479967 kubelet[2765]: I1212 18:47:19.479955 2765 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Dec 12 18:47:19.480121 kubelet[2765]: I1212 18:47:19.480111 2765 reconciler.go:26] "Reconciler: start to sync state"
Dec 12 18:47:19.480463 kubelet[2765]: E1212 18:47:19.480436 2765 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 12 18:47:19.484806 kubelet[2765]: E1212 18:47:19.484221 2765 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 12 18:47:19.485348 kubelet[2765]: I1212 18:47:19.485324 2765 factory.go:221] Registration of the containerd container factory successfully
Dec 12 18:47:19.485348 kubelet[2765]: I1212 18:47:19.485344 2765 factory.go:221] Registration of the systemd container factory successfully
Dec 12 18:47:19.485462 kubelet[2765]: I1212 18:47:19.485434 2765 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 12 18:47:19.539919 kubelet[2765]: I1212 18:47:19.539630 2765 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 12 18:47:19.542428 kubelet[2765]: I1212 18:47:19.542397 2765 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 12 18:47:19.542628 kubelet[2765]: I1212 18:47:19.542543 2765 status_manager.go:227] "Starting to sync pod status with apiserver"
Dec 12 18:47:19.543094 kubelet[2765]: I1212 18:47:19.543071 2765 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Dec 12 18:47:19.543094 kubelet[2765]: I1212 18:47:19.543088 2765 kubelet.go:2382] "Starting kubelet main sync loop"
Dec 12 18:47:19.543152 kubelet[2765]: E1212 18:47:19.543140 2765 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 12 18:47:19.621422 kubelet[2765]: I1212 18:47:19.619890 2765 cpu_manager.go:221] "Starting CPU manager" policy="none"
Dec 12 18:47:19.621422 kubelet[2765]: I1212 18:47:19.619914 2765 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Dec 12 18:47:19.621422 kubelet[2765]: I1212 18:47:19.619932 2765 state_mem.go:36] "Initialized new in-memory state store"
Dec 12 18:47:19.621422 kubelet[2765]: I1212 18:47:19.620079 2765 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Dec 12 18:47:19.621422 kubelet[2765]: I1212 18:47:19.620089 2765 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Dec 12 18:47:19.621422 kubelet[2765]: I1212 18:47:19.620106 2765 policy_none.go:49] "None policy: Start"
Dec 12 18:47:19.621422 kubelet[2765]: I1212 18:47:19.620115 2765 memory_manager.go:186] "Starting memorymanager" policy="None"
Dec 12 18:47:19.621422 kubelet[2765]: I1212 18:47:19.620124 2765 state_mem.go:35] "Initializing new in-memory state store"
Dec 12 18:47:19.621422 kubelet[2765]: I1212 18:47:19.620216 2765 state_mem.go:75] "Updated machine memory state"
Dec 12 18:47:19.643722 kubelet[2765]: E1212 18:47:19.643662 2765 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Dec 12 18:47:19.649226 kubelet[2765]: I1212 18:47:19.649202 2765 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 12 18:47:19.649542 kubelet[2765]: I1212 18:47:19.649528 2765 eviction_manager.go:189] "Eviction manager: starting control loop"
Dec 12 18:47:19.649671 kubelet[2765]: I1212 18:47:19.649639 2765 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 12 18:47:19.649958 kubelet[2765]: I1212 18:47:19.649943 2765 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 12 18:47:19.651316 kubelet[2765]: E1212 18:47:19.650986 2765 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
err="no imagefs label for configured runtime" Dec 12 18:47:19.768669 kubelet[2765]: I1212 18:47:19.768340 2765 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 12 18:47:19.776901 kubelet[2765]: I1212 18:47:19.776861 2765 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Dec 12 18:47:19.777060 kubelet[2765]: I1212 18:47:19.776955 2765 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 12 18:47:19.845306 kubelet[2765]: I1212 18:47:19.845247 2765 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 12 18:47:19.845468 kubelet[2765]: I1212 18:47:19.845344 2765 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 12 18:47:19.848618 kubelet[2765]: I1212 18:47:19.845542 2765 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 12 18:47:19.981726 kubelet[2765]: I1212 18:47:19.981477 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0f6522d0fc18d0f969e2a501af848af8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0f6522d0fc18d0f969e2a501af848af8\") " pod="kube-system/kube-apiserver-localhost" Dec 12 18:47:19.981726 kubelet[2765]: I1212 18:47:19.981544 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0f6522d0fc18d0f969e2a501af848af8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0f6522d0fc18d0f969e2a501af848af8\") " pod="kube-system/kube-apiserver-localhost" Dec 12 18:47:19.981726 kubelet[2765]: I1212 18:47:19.981578 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 18:47:19.981726 kubelet[2765]: I1212 18:47:19.981633 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 18:47:19.981726 kubelet[2765]: I1212 18:47:19.981658 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0a68423804124305a9de061f38780871-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0a68423804124305a9de061f38780871\") " pod="kube-system/kube-scheduler-localhost" Dec 12 18:47:19.981981 kubelet[2765]: I1212 18:47:19.981675 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0f6522d0fc18d0f969e2a501af848af8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0f6522d0fc18d0f969e2a501af848af8\") " pod="kube-system/kube-apiserver-localhost" Dec 12 18:47:19.981981 kubelet[2765]: I1212 18:47:19.981691 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 18:47:19.982081 kubelet[2765]: I1212 18:47:19.982048 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 18:47:19.982081 kubelet[2765]: I1212 18:47:19.982077 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 18:47:20.152402 kubelet[2765]: E1212 18:47:20.152328 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:47:20.157613 kubelet[2765]: E1212 18:47:20.156970 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:47:20.157613 kubelet[2765]: E1212 18:47:20.157003 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:47:20.470428 kubelet[2765]: I1212 18:47:20.470365 2765 apiserver.go:52] "Watching apiserver" Dec 12 18:47:20.480952 kubelet[2765]: I1212 18:47:20.480905 2765 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 12 18:47:20.574100 kubelet[2765]: I1212 18:47:20.574070 2765 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 12 18:47:20.574454 kubelet[2765]: I1212 18:47:20.574403 2765 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 12 18:47:20.574814 kubelet[2765]: I1212 18:47:20.574797 2765 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 12 18:47:20.658500 kubelet[2765]: E1212 18:47:20.657578 2765 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 12 18:47:20.658500 kubelet[2765]: E1212 18:47:20.657816 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:47:20.659600 kubelet[2765]: E1212 18:47:20.658534 2765 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Dec 12 18:47:20.659600 kubelet[2765]: E1212 18:47:20.658750 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:47:20.659600 kubelet[2765]: E1212 18:47:20.658847 2765 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" 
already exists" pod="kube-system/kube-controller-manager-localhost" Dec 12 18:47:20.659600 kubelet[2765]: E1212 18:47:20.658957 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:47:20.659600 kubelet[2765]: I1212 18:47:20.658927 2765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.658906577 podStartE2EDuration="1.658906577s" podCreationTimestamp="2025-12-12 18:47:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:47:20.657374906 +0000 UTC m=+1.287849234" watchObservedRunningTime="2025-12-12 18:47:20.658906577 +0000 UTC m=+1.289380905" Dec 12 18:47:20.671073 kubelet[2765]: I1212 18:47:20.670993 2765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.670972772 podStartE2EDuration="1.670972772s" podCreationTimestamp="2025-12-12 18:47:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:47:20.670798793 +0000 UTC m=+1.301273121" watchObservedRunningTime="2025-12-12 18:47:20.670972772 +0000 UTC m=+1.301447090" Dec 12 18:47:20.712766 kubelet[2765]: I1212 18:47:20.712516 2765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.712494597 podStartE2EDuration="1.712494597s" podCreationTimestamp="2025-12-12 18:47:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:47:20.697939458 +0000 UTC m=+1.328413797" watchObservedRunningTime="2025-12-12 18:47:20.712494597 +0000 UTC m=+1.342968925" Dec 12 18:47:21.575084 kubelet[2765]: E1212 18:47:21.575054 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:47:21.575623 kubelet[2765]: E1212 18:47:21.575465 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:47:21.575789 kubelet[2765]: E1212 18:47:21.575719 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:47:22.577343 kubelet[2765]: E1212 18:47:22.577301 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:47:24.881255 kubelet[2765]: E1212 18:47:24.881180 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:47:25.167179 kubelet[2765]: I1212 18:47:25.167146 2765 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 12 18:47:25.167510 containerd[1579]: time="2025-12-12T18:47:25.167466283Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Dec 12 18:47:25.167925 kubelet[2765]: I1212 18:47:25.167747 2765 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 12 18:47:25.547510 systemd[1]: Created slice kubepods-besteffort-podbf78418a_75a5_468b_9c7d_8208c34a12aa.slice - libcontainer container kubepods-besteffort-podbf78418a_75a5_468b_9c7d_8208c34a12aa.slice. Dec 12 18:47:25.593634 kubelet[2765]: E1212 18:47:25.593239 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:47:25.622776 kubelet[2765]: I1212 18:47:25.622687 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bf78418a-75a5-468b-9c7d-8208c34a12aa-kube-proxy\") pod \"kube-proxy-vm85d\" (UID: \"bf78418a-75a5-468b-9c7d-8208c34a12aa\") " pod="kube-system/kube-proxy-vm85d" Dec 12 18:47:25.622776 kubelet[2765]: I1212 18:47:25.622745 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bf78418a-75a5-468b-9c7d-8208c34a12aa-xtables-lock\") pod \"kube-proxy-vm85d\" (UID: \"bf78418a-75a5-468b-9c7d-8208c34a12aa\") " pod="kube-system/kube-proxy-vm85d" Dec 12 18:47:25.622776 kubelet[2765]: I1212 18:47:25.622775 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhs4g\" (UniqueName: \"kubernetes.io/projected/bf78418a-75a5-468b-9c7d-8208c34a12aa-kube-api-access-hhs4g\") pod \"kube-proxy-vm85d\" (UID: \"bf78418a-75a5-468b-9c7d-8208c34a12aa\") " pod="kube-system/kube-proxy-vm85d" Dec 12 18:47:25.623060 kubelet[2765]: I1212 18:47:25.622801 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bf78418a-75a5-468b-9c7d-8208c34a12aa-lib-modules\") pod \"kube-proxy-vm85d\" (UID: \"bf78418a-75a5-468b-9c7d-8208c34a12aa\") " pod="kube-system/kube-proxy-vm85d" Dec 12 18:47:25.859775 kubelet[2765]: E1212 18:47:25.858335 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:47:25.863701 containerd[1579]: time="2025-12-12T18:47:25.863325395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vm85d,Uid:bf78418a-75a5-468b-9c7d-8208c34a12aa,Namespace:kube-system,Attempt:0,}" Dec 12 18:47:25.949979 containerd[1579]: time="2025-12-12T18:47:25.946423993Z" level=info msg="connecting to shim 4037ceba66ccd50af04007b7da92a00270fa6ddc3da528f2bb3dd151ee78daeb" address="unix:///run/containerd/s/6831e0bf48d2fc6d6eb0bbce683b435473283959e92f72af4facf7e4731aab5c" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:47:26.043335 systemd[1]: Started cri-containerd-4037ceba66ccd50af04007b7da92a00270fa6ddc3da528f2bb3dd151ee78daeb.scope - libcontainer container 4037ceba66ccd50af04007b7da92a00270fa6ddc3da528f2bb3dd151ee78daeb. Dec 12 18:47:26.059026 systemd[1]: Created slice kubepods-besteffort-pod09ff3ba8_bd7a_4c4b_b6a0_2acee1533fa3.slice - libcontainer container kubepods-besteffort-pod09ff3ba8_bd7a_4c4b_b6a0_2acee1533fa3.slice. 
Dec 12 18:47:26.109535 containerd[1579]: time="2025-12-12T18:47:26.109418605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vm85d,Uid:bf78418a-75a5-468b-9c7d-8208c34a12aa,Namespace:kube-system,Attempt:0,} returns sandbox id \"4037ceba66ccd50af04007b7da92a00270fa6ddc3da528f2bb3dd151ee78daeb\"" Dec 12 18:47:26.119278 kubelet[2765]: E1212 18:47:26.117860 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:47:26.134912 containerd[1579]: time="2025-12-12T18:47:26.134844818Z" level=info msg="CreateContainer within sandbox \"4037ceba66ccd50af04007b7da92a00270fa6ddc3da528f2bb3dd151ee78daeb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 12 18:47:26.181100 containerd[1579]: time="2025-12-12T18:47:26.180968998Z" level=info msg="Container e7c5a3c4d8503be4ac5b4d1dba7078758df04841f0feecf74b01a9a274ec878b: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:47:26.202306 containerd[1579]: time="2025-12-12T18:47:26.201298740Z" level=info msg="CreateContainer within sandbox \"4037ceba66ccd50af04007b7da92a00270fa6ddc3da528f2bb3dd151ee78daeb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e7c5a3c4d8503be4ac5b4d1dba7078758df04841f0feecf74b01a9a274ec878b\"" Dec 12 18:47:26.202756 containerd[1579]: time="2025-12-12T18:47:26.202728128Z" level=info msg="StartContainer for \"e7c5a3c4d8503be4ac5b4d1dba7078758df04841f0feecf74b01a9a274ec878b\"" Dec 12 18:47:26.205044 containerd[1579]: time="2025-12-12T18:47:26.205001741Z" level=info msg="connecting to shim e7c5a3c4d8503be4ac5b4d1dba7078758df04841f0feecf74b01a9a274ec878b" address="unix:///run/containerd/s/6831e0bf48d2fc6d6eb0bbce683b435473283959e92f72af4facf7e4731aab5c" protocol=ttrpc version=3 Dec 12 18:47:26.230819 kubelet[2765]: I1212 18:47:26.230699 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/09ff3ba8-bd7a-4c4b-b6a0-2acee1533fa3-var-lib-calico\") pod \"tigera-operator-7dcd859c48-slrnd\" (UID: \"09ff3ba8-bd7a-4c4b-b6a0-2acee1533fa3\") " pod="tigera-operator/tigera-operator-7dcd859c48-slrnd" Dec 12 18:47:26.230819 kubelet[2765]: I1212 18:47:26.230754 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjhwm\" (UniqueName: \"kubernetes.io/projected/09ff3ba8-bd7a-4c4b-b6a0-2acee1533fa3-kube-api-access-vjhwm\") pod \"tigera-operator-7dcd859c48-slrnd\" (UID: \"09ff3ba8-bd7a-4c4b-b6a0-2acee1533fa3\") " pod="tigera-operator/tigera-operator-7dcd859c48-slrnd" Dec 12 18:47:26.235195 systemd[1]: Started cri-containerd-e7c5a3c4d8503be4ac5b4d1dba7078758df04841f0feecf74b01a9a274ec878b.scope - libcontainer container e7c5a3c4d8503be4ac5b4d1dba7078758df04841f0feecf74b01a9a274ec878b. 
Dec 12 18:47:26.481443 containerd[1579]: time="2025-12-12T18:47:26.481293683Z" level=info msg="StartContainer for \"e7c5a3c4d8503be4ac5b4d1dba7078758df04841f0feecf74b01a9a274ec878b\" returns successfully" Dec 12 18:47:26.601858 kubelet[2765]: E1212 18:47:26.597522 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:47:26.601858 kubelet[2765]: E1212 18:47:26.598126 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:47:26.666075 containerd[1579]: time="2025-12-12T18:47:26.665613139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-slrnd,Uid:09ff3ba8-bd7a-4c4b-b6a0-2acee1533fa3,Namespace:tigera-operator,Attempt:0,}" Dec 12 18:47:26.964403 kubelet[2765]: I1212 18:47:26.964149 2765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vm85d" podStartSLOduration=1.9640865189999999 podStartE2EDuration="1.964086519s" podCreationTimestamp="2025-12-12 18:47:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:47:26.96383808 +0000 UTC m=+7.594312418" watchObservedRunningTime="2025-12-12 18:47:26.964086519 +0000 UTC m=+7.594560867" Dec 12 18:47:27.045606 containerd[1579]: time="2025-12-12T18:47:27.045535655Z" level=info msg="connecting to shim f8f4ada15422224b00e137fa04b0c408f17d551ccf15d29bf8ccff29aa66c10a" address="unix:///run/containerd/s/9a69608cdde24e2d389e90181dd2ba1be19c582378ba1a4b4dc886cf71ce92b9" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:47:27.106942 systemd[1]: Started cri-containerd-f8f4ada15422224b00e137fa04b0c408f17d551ccf15d29bf8ccff29aa66c10a.scope - libcontainer container f8f4ada15422224b00e137fa04b0c408f17d551ccf15d29bf8ccff29aa66c10a. Dec 12 18:47:27.216767 containerd[1579]: time="2025-12-12T18:47:27.214894248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-slrnd,Uid:09ff3ba8-bd7a-4c4b-b6a0-2acee1533fa3,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f8f4ada15422224b00e137fa04b0c408f17d551ccf15d29bf8ccff29aa66c10a\"" Dec 12 18:47:27.217623 containerd[1579]: time="2025-12-12T18:47:27.217478185Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 12 18:47:29.382172 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1011310785.mount: Deactivated successfully. 
Dec 12 18:47:29.388694 kubelet[2765]: E1212 18:47:29.388660 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:47:29.614223 kubelet[2765]: E1212 18:47:29.614149 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:47:30.617150 kubelet[2765]: E1212 18:47:30.617116 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:47:31.282350 kubelet[2765]: E1212 18:47:31.282302 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:47:32.041969 containerd[1579]: time="2025-12-12T18:47:32.041670115Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:47:32.042755 containerd[1579]: time="2025-12-12T18:47:32.042711957Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Dec 12 18:47:32.044114 containerd[1579]: time="2025-12-12T18:47:32.044070326Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:47:32.047014 containerd[1579]: time="2025-12-12T18:47:32.046956763Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:47:32.047641 containerd[1579]: time="2025-12-12T18:47:32.047570159Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 4.830051098s" Dec 12 18:47:32.047641 containerd[1579]: time="2025-12-12T18:47:32.047627928Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Dec 12 18:47:32.050596 containerd[1579]: time="2025-12-12T18:47:32.050559400Z" level=info msg="CreateContainer within sandbox \"f8f4ada15422224b00e137fa04b0c408f17d551ccf15d29bf8ccff29aa66c10a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 12 18:47:32.061721 containerd[1579]: time="2025-12-12T18:47:32.061662136Z" level=info msg="Container 75cb2d5fcee8bc76752e640e6d1bc29091b97593556d7b207d477722a3e60505: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:47:32.236361 containerd[1579]: time="2025-12-12T18:47:32.236307339Z" level=info msg="CreateContainer within sandbox \"f8f4ada15422224b00e137fa04b0c408f17d551ccf15d29bf8ccff29aa66c10a\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"75cb2d5fcee8bc76752e640e6d1bc29091b97593556d7b207d477722a3e60505\"" Dec 12 18:47:32.236695 containerd[1579]: time="2025-12-12T18:47:32.236660545Z" level=info msg="StartContainer for \"75cb2d5fcee8bc76752e640e6d1bc29091b97593556d7b207d477722a3e60505\"" 
Dec 12 18:47:32.237644 containerd[1579]: time="2025-12-12T18:47:32.237615113Z" level=info msg="connecting to shim 75cb2d5fcee8bc76752e640e6d1bc29091b97593556d7b207d477722a3e60505" address="unix:///run/containerd/s/9a69608cdde24e2d389e90181dd2ba1be19c582378ba1a4b4dc886cf71ce92b9" protocol=ttrpc version=3 Dec 12 18:47:32.295793 systemd[1]: Started cri-containerd-75cb2d5fcee8bc76752e640e6d1bc29091b97593556d7b207d477722a3e60505.scope - libcontainer container 75cb2d5fcee8bc76752e640e6d1bc29091b97593556d7b207d477722a3e60505. Dec 12 18:47:32.339150 containerd[1579]: time="2025-12-12T18:47:32.339058120Z" level=info msg="StartContainer for \"75cb2d5fcee8bc76752e640e6d1bc29091b97593556d7b207d477722a3e60505\" returns successfully" Dec 12 18:47:37.626708 sudo[1815]: pam_unix(sudo:session): session closed for user root Dec 12 18:47:37.628742 sshd[1814]: Connection closed by 10.0.0.1 port 53018 Dec 12 18:47:37.629886 sshd-session[1806]: pam_unix(sshd:session): session closed for user core Dec 12 18:47:37.634940 systemd[1]: sshd@8-10.0.0.117:22-10.0.0.1:53018.service: Deactivated successfully. Dec 12 18:47:37.635252 systemd-logind[1562]: Session 9 logged out. Waiting for processes to exit. Dec 12 18:47:37.637555 systemd[1]: session-9.scope: Deactivated successfully. Dec 12 18:47:37.638188 systemd[1]: session-9.scope: Consumed 5.894s CPU time, 227.2M memory peak. Dec 12 18:47:37.641320 systemd-logind[1562]: Removed session 9. Dec 12 18:47:43.138913 kubelet[2765]: I1212 18:47:43.138830 2765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-slrnd" podStartSLOduration=13.306768163 podStartE2EDuration="18.138780348s" podCreationTimestamp="2025-12-12 18:47:25 +0000 UTC" firstStartedPulling="2025-12-12 18:47:27.216555163 +0000 UTC m=+7.847029491" lastFinishedPulling="2025-12-12 18:47:32.048567348 +0000 UTC m=+12.679041676" observedRunningTime="2025-12-12 18:47:32.632152117 +0000 UTC m=+13.262626475" watchObservedRunningTime="2025-12-12 18:47:43.138780348 +0000 UTC m=+23.769254676" Dec 12 18:47:43.152534 systemd[1]: Created slice kubepods-besteffort-pod6f51955b_f9fd_4a70_a788_9d2c51bc7af7.slice - libcontainer container kubepods-besteffort-pod6f51955b_f9fd_4a70_a788_9d2c51bc7af7.slice. 
Dec 12 18:47:43.249738 kubelet[2765]: I1212 18:47:43.249669 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6f51955b-f9fd-4a70-a788-9d2c51bc7af7-tigera-ca-bundle\") pod \"calico-typha-784f99b76-sgxrd\" (UID: \"6f51955b-f9fd-4a70-a788-9d2c51bc7af7\") " pod="calico-system/calico-typha-784f99b76-sgxrd" Dec 12 18:47:43.249738 kubelet[2765]: I1212 18:47:43.249738 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mc8hk\" (UniqueName: \"kubernetes.io/projected/6f51955b-f9fd-4a70-a788-9d2c51bc7af7-kube-api-access-mc8hk\") pod \"calico-typha-784f99b76-sgxrd\" (UID: \"6f51955b-f9fd-4a70-a788-9d2c51bc7af7\") " pod="calico-system/calico-typha-784f99b76-sgxrd" Dec 12 18:47:43.249959 kubelet[2765]: I1212 18:47:43.249766 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/6f51955b-f9fd-4a70-a788-9d2c51bc7af7-typha-certs\") pod \"calico-typha-784f99b76-sgxrd\" (UID: \"6f51955b-f9fd-4a70-a788-9d2c51bc7af7\") " pod="calico-system/calico-typha-784f99b76-sgxrd" Dec 12 18:47:43.364004 systemd[1]: Created slice kubepods-besteffort-pod8a467998_25f7_4811_a9df_718ac746da12.slice - libcontainer container kubepods-besteffort-pod8a467998_25f7_4811_a9df_718ac746da12.slice. Dec 12 18:47:43.450979 kubelet[2765]: I1212 18:47:43.450908 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8a467998-25f7-4811-a9df-718ac746da12-flexvol-driver-host\") pod \"calico-node-2vv82\" (UID: \"8a467998-25f7-4811-a9df-718ac746da12\") " pod="calico-system/calico-node-2vv82" Dec 12 18:47:43.450979 kubelet[2765]: I1212 18:47:43.450959 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmnnj\" (UniqueName: \"kubernetes.io/projected/8a467998-25f7-4811-a9df-718ac746da12-kube-api-access-qmnnj\") pod \"calico-node-2vv82\" (UID: \"8a467998-25f7-4811-a9df-718ac746da12\") " pod="calico-system/calico-node-2vv82" Dec 12 18:47:43.450979 kubelet[2765]: I1212 18:47:43.450984 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8a467998-25f7-4811-a9df-718ac746da12-policysync\") pod \"calico-node-2vv82\" (UID: \"8a467998-25f7-4811-a9df-718ac746da12\") " pod="calico-system/calico-node-2vv82" Dec 12 18:47:43.451207 kubelet[2765]: I1212 18:47:43.451004 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8a467998-25f7-4811-a9df-718ac746da12-xtables-lock\") pod \"calico-node-2vv82\" (UID: \"8a467998-25f7-4811-a9df-718ac746da12\") " pod="calico-system/calico-node-2vv82" Dec 12 18:47:43.451207 kubelet[2765]: I1212 18:47:43.451026 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8a467998-25f7-4811-a9df-718ac746da12-cni-bin-dir\") pod \"calico-node-2vv82\" (UID: \"8a467998-25f7-4811-a9df-718ac746da12\") " pod="calico-system/calico-node-2vv82" Dec 12 18:47:43.451207 kubelet[2765]: I1212 18:47:43.451051 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8a467998-25f7-4811-a9df-718ac746da12-lib-modules\") pod \"calico-node-2vv82\" (UID: \"8a467998-25f7-4811-a9df-718ac746da12\") " pod="calico-system/calico-node-2vv82" Dec 12 18:47:43.451207 kubelet[2765]: I1212 18:47:43.451068 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8a467998-25f7-4811-a9df-718ac746da12-node-certs\") pod \"calico-node-2vv82\" (UID: \"8a467998-25f7-4811-a9df-718ac746da12\") " pod="calico-system/calico-node-2vv82" Dec 12 18:47:43.451207 kubelet[2765]: I1212 18:47:43.451088 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/8a467998-25f7-4811-a9df-718ac746da12-cni-net-dir\") pod \"calico-node-2vv82\" (UID: \"8a467998-25f7-4811-a9df-718ac746da12\") " pod="calico-system/calico-node-2vv82" Dec 12 18:47:43.451377 kubelet[2765]: I1212 18:47:43.451104 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a467998-25f7-4811-a9df-718ac746da12-tigera-ca-bundle\") pod \"calico-node-2vv82\" (UID: \"8a467998-25f7-4811-a9df-718ac746da12\") " pod="calico-system/calico-node-2vv82" Dec 12 18:47:43.451377 kubelet[2765]: I1212 18:47:43.451121 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8a467998-25f7-4811-a9df-718ac746da12-var-run-calico\") pod \"calico-node-2vv82\" (UID: \"8a467998-25f7-4811-a9df-718ac746da12\") " pod="calico-system/calico-node-2vv82" Dec 12 18:47:43.451377 kubelet[2765]: I1212 18:47:43.451150 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/8a467998-25f7-4811-a9df-718ac746da12-cni-log-dir\") pod \"calico-node-2vv82\" (UID: \"8a467998-25f7-4811-a9df-718ac746da12\") " pod="calico-system/calico-node-2vv82" Dec 12 18:47:43.451377 kubelet[2765]: I1212 18:47:43.451171 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8a467998-25f7-4811-a9df-718ac746da12-var-lib-calico\") pod \"calico-node-2vv82\" (UID: \"8a467998-25f7-4811-a9df-718ac746da12\") " pod="calico-system/calico-node-2vv82" Dec 12 18:47:43.459264 kubelet[2765]: E1212 18:47:43.459216 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:47:43.460051 containerd[1579]: time="2025-12-12T18:47:43.459980656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-784f99b76-sgxrd,Uid:6f51955b-f9fd-4a70-a788-9d2c51bc7af7,Namespace:calico-system,Attempt:0,}" Dec 12 18:47:43.496448 containerd[1579]: time="2025-12-12T18:47:43.496393938Z" level=info msg="connecting to shim 876bfbe7a2bdeee09f2190499fa3a4d88a730f5bd815ead30c47ea045b7cff6f" address="unix:///run/containerd/s/12a6c5b5c3db261c5c6f515439c05af54215ace98e213b83f4a959c4713cf69a" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:47:43.523952 systemd[1]: Started cri-containerd-876bfbe7a2bdeee09f2190499fa3a4d88a730f5bd815ead30c47ea045b7cff6f.scope - libcontainer container 876bfbe7a2bdeee09f2190499fa3a4d88a730f5bd815ead30c47ea045b7cff6f. 
Dec 12 18:47:43.553285 kubelet[2765]: E1212 18:47:43.552730 2765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vxcm2" podUID="5fa2bd70-6779-4823-84fb-43f19b5a18cb" Dec 12 18:47:43.553777 kubelet[2765]: E1212 18:47:43.553757 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:47:43.553972 kubelet[2765]: W1212 18:47:43.553860 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:47:43.553972 kubelet[2765]: E1212 18:47:43.553905 2765 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:47:43.554516 kubelet[2765]: E1212 18:47:43.554485 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:47:43.554692 kubelet[2765]: W1212 18:47:43.554670 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:47:43.555026 kubelet[2765]: E1212 18:47:43.554990 2765 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:47:43.557008 kubelet[2765]: E1212 18:47:43.556979 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:47:43.557008 kubelet[2765]: W1212 18:47:43.556992 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:47:43.557313 kubelet[2765]: E1212 18:47:43.557281 2765 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:47:43.557854 kubelet[2765]: E1212 18:47:43.557771 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:47:43.557854 kubelet[2765]: W1212 18:47:43.557781 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:47:43.557971 kubelet[2765]: E1212 18:47:43.557954 2765 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:47:43.558147 kubelet[2765]: E1212 18:47:43.558063 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:47:43.558147 kubelet[2765]: W1212 18:47:43.558074 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:47:43.558262 kubelet[2765]: E1212 18:47:43.558248 2765 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:47:43.558375 kubelet[2765]: E1212 18:47:43.558353 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:47:43.558375 kubelet[2765]: W1212 18:47:43.558362 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:47:43.558484 kubelet[2765]: E1212 18:47:43.558473 2765 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:47:43.559157 kubelet[2765]: E1212 18:47:43.559131 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:47:43.559157 kubelet[2765]: W1212 18:47:43.559142 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:47:43.559411 kubelet[2765]: E1212 18:47:43.559375 2765 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:47:43.559545 kubelet[2765]: E1212 18:47:43.559516 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:47:43.559545 kubelet[2765]: W1212 18:47:43.559529 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:47:43.559742 kubelet[2765]: E1212 18:47:43.559727 2765 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:47:43.560085 kubelet[2765]: E1212 18:47:43.560055 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:47:43.560085 kubelet[2765]: W1212 18:47:43.560068 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:47:43.560263 kubelet[2765]: E1212 18:47:43.560248 2765 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:47:43.560504 kubelet[2765]: E1212 18:47:43.560476 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:47:43.560504 kubelet[2765]: W1212 18:47:43.560489 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:47:43.560695 kubelet[2765]: E1212 18:47:43.560681 2765 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:47:43.561046 kubelet[2765]: E1212 18:47:43.561021 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:47:43.561159 kubelet[2765]: W1212 18:47:43.561125 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:47:43.561541 kubelet[2765]: E1212 18:47:43.561524 2765 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:47:43.561715 kubelet[2765]: E1212 18:47:43.561687 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:47:43.561715 kubelet[2765]: W1212 18:47:43.561700 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:47:43.561955 kubelet[2765]: E1212 18:47:43.561868 2765 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:47:43.562291 kubelet[2765]: E1212 18:47:43.562243 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:47:43.562291 kubelet[2765]: W1212 18:47:43.562267 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:47:43.563179 kubelet[2765]: E1212 18:47:43.563154 2765 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:47:43.563331 kubelet[2765]: E1212 18:47:43.563310 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:47:43.563411 kubelet[2765]: W1212 18:47:43.563396 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:47:43.563641 kubelet[2765]: E1212 18:47:43.563618 2765 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:47:43.564013 kubelet[2765]: E1212 18:47:43.563903 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:47:43.564132 kubelet[2765]: W1212 18:47:43.564091 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:47:43.564753 kubelet[2765]: E1212 18:47:43.564728 2765 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:47:43.567875 kubelet[2765]: E1212 18:47:43.567819 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:47:43.567875 kubelet[2765]: W1212 18:47:43.567840 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:47:43.568868 kubelet[2765]: E1212 18:47:43.568833 2765 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:47:43.569030 kubelet[2765]: E1212 18:47:43.568941 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:47:43.569030 kubelet[2765]: W1212 18:47:43.568957 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:47:43.569030 kubelet[2765]: E1212 18:47:43.568988 2765 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:47:43.569140 kubelet[2765]: E1212 18:47:43.569133 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:47:43.569706 kubelet[2765]: W1212 18:47:43.569142 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:47:43.569706 kubelet[2765]: E1212 18:47:43.569350 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:47:43.569706 kubelet[2765]: W1212 18:47:43.569359 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:47:43.569706 kubelet[2765]: E1212 18:47:43.569560 2765 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:47:43.569706 kubelet[2765]: E1212 18:47:43.569671 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:47:43.569706 kubelet[2765]: W1212 18:47:43.569681 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:47:43.570080 kubelet[2765]: E1212 18:47:43.569980 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:47:43.570080 kubelet[2765]: W1212 18:47:43.569990 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:47:43.570251 kubelet[2765]: E1212 18:47:43.570125 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:47:43.570251 kubelet[2765]: W1212 18:47:43.570132 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:47:43.570691 kubelet[2765]: E1212 18:47:43.570311 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:47:43.570691 kubelet[2765]: W1212 18:47:43.570318 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:47:43.570691 kubelet[2765]: E1212 18:47:43.570473 2765 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:47:43.570691 kubelet[2765]: E1212 18:47:43.570505 2765 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:47:43.570691 kubelet[2765]: E1212 18:47:43.570537 2765 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:47:43.570691 kubelet[2765]: E1212 18:47:43.570565 2765 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:47:43.570691 kubelet[2765]: E1212 18:47:43.570608 2765 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:47:43.570691 kubelet[2765]: E1212 18:47:43.570677 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:47:43.570691 kubelet[2765]: W1212 18:47:43.570692 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:47:43.571336 kubelet[2765]: E1212 18:47:43.570727 2765 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:47:43.571336 kubelet[2765]: E1212 18:47:43.571239 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:47:43.571336 kubelet[2765]: W1212 18:47:43.571250 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:47:43.571336 kubelet[2765]: E1212 18:47:43.571284 2765 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:47:43.571996 kubelet[2765]: E1212 18:47:43.571656 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:47:43.571996 kubelet[2765]: W1212 18:47:43.571668 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:47:43.571996 kubelet[2765]: E1212 18:47:43.571707 2765 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:47:43.571996 kubelet[2765]: E1212 18:47:43.571955 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:47:43.571996 kubelet[2765]: W1212 18:47:43.571966 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:47:43.572199 kubelet[2765]: E1212 18:47:43.572003 2765 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:47:43.572241 kubelet[2765]: E1212 18:47:43.572217 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:47:43.572285 kubelet[2765]: W1212 18:47:43.572241 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:47:43.572285 kubelet[2765]: E1212 18:47:43.572268 2765 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:47:43.572714 kubelet[2765]: E1212 18:47:43.572448 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:47:43.572714 kubelet[2765]: W1212 18:47:43.572462 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:47:43.572714 kubelet[2765]: E1212 18:47:43.572489 2765 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:47:43.572853 kubelet[2765]: E1212 18:47:43.572737 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:47:43.572853 kubelet[2765]: W1212 18:47:43.572747 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:47:43.572853 kubelet[2765]: E1212 18:47:43.572782 2765 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:47:43.573023 kubelet[2765]: E1212 18:47:43.572965 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:47:43.573023 kubelet[2765]: W1212 18:47:43.572975 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:47:43.573023 kubelet[2765]: E1212 18:47:43.572999 2765 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:47:43.573645 kubelet[2765]: E1212 18:47:43.573183 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:47:43.573645 kubelet[2765]: W1212 18:47:43.573196 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:47:43.573645 kubelet[2765]: E1212 18:47:43.573285 2765 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:47:43.573832 kubelet[2765]: E1212 18:47:43.573813 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:47:43.573832 kubelet[2765]: W1212 18:47:43.573829 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:47:43.573897 kubelet[2765]: E1212 18:47:43.573840 2765 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Dec 12 18:47:43.574014 kubelet[2765]: E1212 18:47:43.573997 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:47:43.574014 kubelet[2765]: W1212 18:47:43.574007 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:47:43.574093 kubelet[2765]: E1212 18:47:43.574019 2765 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
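The three-line burst above repeats throughout this log and records a single failure three ways: kubelet probes /opt/libexec/kubernetes/kubelet-plugins/volume/exec for FlexVolume drivers, finds the directory nodeagent~uds but no working uds executable inside it, so the driver "init" call produces empty output, which then fails JSON unmarshalling. A minimal Go sketch of that sequence, under the assumption that the driverStatus shape below is a simplified stand-in for kubelet's actual FlexVolume status type:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus is an assumed minimal subset of the JSON a FlexVolume
// driver is expected to print in response to "init".
type driverStatus struct {
	Status string `json:"status"`
}

func main() {
	// Driver path copied from the log lines above; on this node the
	// binary does not exist, so the exec itself fails.
	out, err := exec.Command(
		"/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds",
		"init",
	).Output()
	if err != nil {
		// kubelet reports this case as the W...driver-call.go:149 lines.
		fmt.Println("driver call failed:", err)
	}
	var st driverStatus
	// With empty output, json.Unmarshal returns "unexpected end of JSON
	// input", which is exactly the E...driver-call.go:262 lines.
	if err := json.Unmarshal(out, &st); err != nil {
		fmt.Println("failed to unmarshal output for command: init:", err)
	}
}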
Dec 12 18:47:43.606812 containerd[1579]: time="2025-12-12T18:47:43.606763669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-784f99b76-sgxrd,Uid:6f51955b-f9fd-4a70-a788-9d2c51bc7af7,Namespace:calico-system,Attempt:0,} returns sandbox id \"876bfbe7a2bdeee09f2190499fa3a4d88a730f5bd815ead30c47ea045b7cff6f\""
Dec 12 18:47:43.607365 kubelet[2765]: E1212 18:47:43.607344 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 12 18:47:43.608190 containerd[1579]: time="2025-12-12T18:47:43.608155114Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Dec 12 18:47:43.652601 kubelet[2765]: E1212 18:47:43.652567 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:47:43.652601 kubelet[2765]: W1212 18:47:43.652609 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:47:43.652818 kubelet[2765]: E1212 18:47:43.652627 2765 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:47:43.652818 kubelet[2765]: I1212 18:47:43.652651 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5fa2bd70-6779-4823-84fb-43f19b5a18cb-kubelet-dir\") pod \"csi-node-driver-vxcm2\" (UID: \"5fa2bd70-6779-4823-84fb-43f19b5a18cb\") " pod="calico-system/csi-node-driver-vxcm2"
Dec 12 18:47:43.652920 kubelet[2765]: I1212 18:47:43.652893 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5fa2bd70-6779-4823-84fb-43f19b5a18cb-socket-dir\") pod \"csi-node-driver-vxcm2\" (UID: \"5fa2bd70-6779-4823-84fb-43f19b5a18cb\") " pod="calico-system/csi-node-driver-vxcm2"
Dec 12 18:47:43.653638 kubelet[2765]: E1212 18:47:43.653599 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:47:43.653638 kubelet[2765]: W1212 18:47:43.653613 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:47:43.653638 kubelet[2765]: E1212 18:47:43.653626 2765 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:47:43.653638 kubelet[2765]: I1212 18:47:43.653645 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5fa2bd70-6779-4823-84fb-43f19b5a18cb-registration-dir\") pod \"csi-node-driver-vxcm2\" (UID: \"5fa2bd70-6779-4823-84fb-43f19b5a18cb\") " pod="calico-system/csi-node-driver-vxcm2"
Dec 12 18:47:43.653868 kubelet[2765]: I1212 18:47:43.653864 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hbld\" (UniqueName: \"kubernetes.io/projected/5fa2bd70-6779-4823-84fb-43f19b5a18cb-kube-api-access-7hbld\") pod \"csi-node-driver-vxcm2\" (UID: \"5fa2bd70-6779-4823-84fb-43f19b5a18cb\") " pod="calico-system/csi-node-driver-vxcm2"
Dec 12 18:47:43.654095 kubelet[2765]: I1212 18:47:43.654058 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/5fa2bd70-6779-4823-84fb-43f19b5a18cb-varrun\") pod \"csi-node-driver-vxcm2\" (UID: \"5fa2bd70-6779-4823-84fb-43f19b5a18cb\") " pod="calico-system/csi-node-driver-vxcm2"
Dec 12 18:47:43.677378 kubelet[2765]: E1212 18:47:43.677345 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 12 18:47:43.677832 containerd[1579]: time="2025-12-12T18:47:43.677794468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2vv82,Uid:8a467998-25f7-4811-a9df-718ac746da12,Namespace:calico-system,Attempt:0,}"
Dec 12 18:47:43.708644 containerd[1579]: time="2025-12-12T18:47:43.707503020Z" level=info msg="connecting to shim d548e172e6312dab6a22e14de1823433bae719b20b31ea01bd939fbdde41dbd2" address="unix:///run/containerd/s/0e89c87c35d611f2ca1e6b7837760c34e2da5f90e6827d15f23c333829c117f2" namespace=k8s.io protocol=ttrpc version=3
Dec 12 18:47:43.755510 kubelet[2765]: E1212 18:47:43.755445 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:47:43.755714 kubelet[2765]: W1212 18:47:43.755544 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:47:43.755714 kubelet[2765]: E1212 18:47:43.755618 2765 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:47:43.756926 systemd[1]: Started cri-containerd-d548e172e6312dab6a22e14de1823433bae719b20b31ea01bd939fbdde41dbd2.scope - libcontainer container d548e172e6312dab6a22e14de1823433bae719b20b31ea01bd939fbdde41dbd2.
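The "connecting to shim" entry shows how containerd reaches the per-sandbox shim process it just spawned: over a unix socket under /run/containerd/s/, speaking ttrpc (a lightweight gRPC variant) version 3. A minimal Go sketch that only dials that socket path, copied from the entry above; this is a hypothetical probe, the real client would speak ttrpc over the connection rather than just connect:

package main

import (
	"fmt"
	"net"
)

func main() {
	// Socket path copied from the containerd log entry above; the shim
	// listens here for ttrpc calls from the containerd daemon.
	conn, err := net.Dial("unix", "/run/containerd/s/0e89c87c35d611f2ca1e6b7837760c34e2da5f90e6827d15f23c333829c117f2")
	if err != nil {
		fmt.Println("dial shim socket:", err)
		return
	}
	defer conn.Close()
	fmt.Println("connected to shim at", conn.RemoteAddr())
}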
Dec 12 18:47:43.758512 kubelet[2765]: E1212 18:47:43.758479 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:47:43.758512 kubelet[2765]: W1212 18:47:43.758495 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:47:43.758512 kubelet[2765]: E1212 18:47:43.758514 2765 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:47:43.799089 kubelet[2765]: E1212 18:47:43.799040 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:47:43.799089 kubelet[2765]: W1212 18:47:43.799067 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:47:43.799292 kubelet[2765]: E1212 18:47:43.799254 2765 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:47:43.810398 containerd[1579]: time="2025-12-12T18:47:43.810355182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2vv82,Uid:8a467998-25f7-4811-a9df-718ac746da12,Namespace:calico-system,Attempt:0,} returns sandbox id \"d548e172e6312dab6a22e14de1823433bae719b20b31ea01bd939fbdde41dbd2\""
Dec 12 18:47:43.811269 kubelet[2765]: E1212 18:47:43.811242 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 12 18:47:45.089453 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount161992757.mount: Deactivated successfully.
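The recurring dns.go:153 "Nameserver limits exceeded" entries record kubelet truncating the node's resolver list: it propagates at most three nameservers to pods (the classic glibc resolv.conf limit), so any servers beyond the applied 1.1.1.1, 1.0.0.1 and 8.8.8.8 are dropped. A sketch of that truncation under those assumptions; the fourth server below is hypothetical, added only to trigger the same condition:

package main

import "fmt"

func main() {
	// The first three entries are the ones the log shows being applied;
	// 8.8.4.4 is a hypothetical stand-in for whatever extra server the
	// node's resolv.conf actually lists beyond the limit.
	nameservers := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"}
	const maxNameservers = 3 // resolv.conf nameserver limit kubelet enforces
	if len(nameservers) > maxNameservers {
		fmt.Println("Nameserver limits were exceeded, some nameservers have been omitted")
		nameservers = nameservers[:maxNameservers]
	}
	fmt.Println("applied nameserver line:", nameservers)
}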
Dec 12 18:47:45.545074 kubelet[2765]: E1212 18:47:45.545005 2765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vxcm2" podUID="5fa2bd70-6779-4823-84fb-43f19b5a18cb"
Dec 12 18:47:46.243886 containerd[1579]: time="2025-12-12T18:47:46.243812271Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:47:46.245008 containerd[1579]: time="2025-12-12T18:47:46.244970125Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628"
Dec 12 18:47:46.248047 containerd[1579]: time="2025-12-12T18:47:46.248019203Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:47:46.250639 containerd[1579]: time="2025-12-12T18:47:46.250580826Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:47:46.251278 containerd[1579]: time="2025-12-12T18:47:46.251247457Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.643061706s"
Dec 12 18:47:46.251397 containerd[1579]: time="2025-12-12T18:47:46.251278998Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\""
Dec 12 18:47:46.252308 containerd[1579]: time="2025-12-12T18:47:46.252279977Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Dec 12 18:47:46.261267 containerd[1579]: time="2025-12-12T18:47:46.261162344Z" level=info msg="CreateContainer within sandbox \"876bfbe7a2bdeee09f2190499fa3a4d88a730f5bd815ead30c47ea045b7cff6f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Dec 12 18:47:46.272967 containerd[1579]: time="2025-12-12T18:47:46.272916675Z" level=info msg="Container db553c77fa01a5ceeea13f43a923fd2d4964d1d132382ba16ecb0e80c37008f6: CDI devices from CRI Config.CDIDevices: []"
Dec 12 18:47:46.282134 containerd[1579]: time="2025-12-12T18:47:46.282079167Z" level=info msg="CreateContainer within sandbox \"876bfbe7a2bdeee09f2190499fa3a4d88a730f5bd815ead30c47ea045b7cff6f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"db553c77fa01a5ceeea13f43a923fd2d4964d1d132382ba16ecb0e80c37008f6\""
Dec 12 18:47:46.282633 containerd[1579]: time="2025-12-12T18:47:46.282600486Z" level=info msg="StartContainer for \"db553c77fa01a5ceeea13f43a923fd2d4964d1d132382ba16ecb0e80c37008f6\""
Dec 12 18:47:46.283865 containerd[1579]: time="2025-12-12T18:47:46.283832952Z" level=info msg="connecting to shim db553c77fa01a5ceeea13f43a923fd2d4964d1d132382ba16ecb0e80c37008f6" address="unix:///run/containerd/s/12a6c5b5c3db261c5c6f515439c05af54215ace98e213b83f4a959c4713cf69a" protocol=ttrpc version=3
Dec 12 18:47:46.312829 systemd[1]: Started cri-containerd-db553c77fa01a5ceeea13f43a923fd2d4964d1d132382ba16ecb0e80c37008f6.scope - libcontainer container db553c77fa01a5ceeea13f43a923fd2d4964d1d132382ba16ecb0e80c37008f6.
Dec 12 18:47:46.383797 containerd[1579]: time="2025-12-12T18:47:46.383747948Z" level=info msg="StartContainer for \"db553c77fa01a5ceeea13f43a923fd2d4964d1d132382ba16ecb0e80c37008f6\" returns successfully"
Dec 12 18:47:46.656445 kubelet[2765]: E1212 18:47:46.656174 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 12 18:47:46.695676 kubelet[2765]: E1212 18:47:46.695623 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:47:46.696635 kubelet[2765]: W1212 18:47:46.695662 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:47:46.696709 kubelet[2765]: E1212 18:47:46.696644 2765 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
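The typha pull bracketed above matches the 2.643061706s containerd reports: the PullImage request was logged at 18:47:43.608 and the "Pulled image" result at 18:47:46.251. A quick check of that arithmetic from the two containerd timestamps; the roughly 30ms gap against the reported figure is just the logging around the pull itself:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the two containerd entries above.
	start, err := time.Parse(time.RFC3339Nano, "2025-12-12T18:47:43.608155114Z")
	if err != nil {
		panic(err)
	}
	done, err := time.Parse(time.RFC3339Nano, "2025-12-12T18:47:46.251247457Z")
	if err != nil {
		panic(err)
	}
	// Prints 2.643092343s, a hair above the 2.643061706s containerd
	// measured internally before emitting the "Pulled image" entry.
	fmt.Println(done.Sub(start))
}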
Error: unexpected end of JSON input" Dec 12 18:47:47.544105 kubelet[2765]: E1212 18:47:47.544040 2765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vxcm2" podUID="5fa2bd70-6779-4823-84fb-43f19b5a18cb" Dec 12 18:47:47.658498 kubelet[2765]: I1212 18:47:47.658445 2765 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 18:47:47.660691 kubelet[2765]: E1212 18:47:47.658878 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:47:47.709328 kubelet[2765]: E1212 18:47:47.709286 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:47:47.709328 kubelet[2765]: W1212 18:47:47.709323 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:47:47.709579 kubelet[2765]: E1212 18:47:47.709352 2765 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:47:47.709579 kubelet[2765]: E1212 18:47:47.709559 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:47:47.709579 kubelet[2765]: W1212 18:47:47.709570 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:47:47.709579 kubelet[2765]: E1212 18:47:47.709607 2765 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:47:47.710009 kubelet[2765]: E1212 18:47:47.709936 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:47:47.710009 kubelet[2765]: W1212 18:47:47.709950 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:47:47.710009 kubelet[2765]: E1212 18:47:47.709961 2765 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:47:47.710198 kubelet[2765]: E1212 18:47:47.710179 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:47:47.710198 kubelet[2765]: W1212 18:47:47.710192 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:47:47.710411 kubelet[2765]: E1212 18:47:47.710202 2765 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:47:47.710411 kubelet[2765]: E1212 18:47:47.710402 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:47:47.710411 kubelet[2765]: W1212 18:47:47.710411 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:47:47.710517 kubelet[2765]: E1212 18:47:47.710421 2765 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:47:47.710718 kubelet[2765]: E1212 18:47:47.710683 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:47:47.710762 kubelet[2765]: W1212 18:47:47.710714 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:47:47.710762 kubelet[2765]: E1212 18:47:47.710745 2765 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:47:47.711041 kubelet[2765]: E1212 18:47:47.711012 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:47:47.711041 kubelet[2765]: W1212 18:47:47.711025 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:47:47.711041 kubelet[2765]: E1212 18:47:47.711035 2765 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:47:47.711449 kubelet[2765]: E1212 18:47:47.711272 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:47:47.711449 kubelet[2765]: W1212 18:47:47.711284 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:47:47.711449 kubelet[2765]: E1212 18:47:47.711295 2765 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:47:47.711621 kubelet[2765]: E1212 18:47:47.711578 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:47:47.711621 kubelet[2765]: W1212 18:47:47.711604 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:47:47.711621 kubelet[2765]: E1212 18:47:47.711613 2765 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:47:47.711888 kubelet[2765]: E1212 18:47:47.711837 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:47:47.711888 kubelet[2765]: W1212 18:47:47.711850 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:47:47.711888 kubelet[2765]: E1212 18:47:47.711859 2765 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:47:47.712193 kubelet[2765]: E1212 18:47:47.712053 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:47:47.712193 kubelet[2765]: W1212 18:47:47.712066 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:47:47.712193 kubelet[2765]: E1212 18:47:47.712074 2765 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:47:47.712310 kubelet[2765]: E1212 18:47:47.712272 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:47:47.712310 kubelet[2765]: W1212 18:47:47.712280 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:47:47.712310 kubelet[2765]: E1212 18:47:47.712288 2765 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:47:47.712485 kubelet[2765]: E1212 18:47:47.712466 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:47:47.712485 kubelet[2765]: W1212 18:47:47.712478 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:47:47.712566 kubelet[2765]: E1212 18:47:47.712489 2765 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:47:47.712748 kubelet[2765]: E1212 18:47:47.712732 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:47:47.712777 kubelet[2765]: W1212 18:47:47.712747 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:47:47.712777 kubelet[2765]: E1212 18:47:47.712763 2765 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:47:47.712969 kubelet[2765]: E1212 18:47:47.712956 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:47:47.712969 kubelet[2765]: W1212 18:47:47.712967 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:47:47.713027 kubelet[2765]: E1212 18:47:47.712977 2765 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:47:47.792602 containerd[1579]: time="2025-12-12T18:47:47.792530084Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:47:47.793291 containerd[1579]: time="2025-12-12T18:47:47.793262210Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Dec 12 18:47:47.794623 containerd[1579]: time="2025-12-12T18:47:47.794463256Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:47:47.796821 containerd[1579]: time="2025-12-12T18:47:47.796793452Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:47:47.797619 containerd[1579]: time="2025-12-12T18:47:47.797569079Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.545257562s" Dec 12 18:47:47.797666 containerd[1579]: time="2025-12-12T18:47:47.797620806Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Dec 12 18:47:47.799428 containerd[1579]: time="2025-12-12T18:47:47.799387434Z" level=info msg="CreateContainer within sandbox \"d548e172e6312dab6a22e14de1823433bae719b20b31ea01bd939fbdde41dbd2\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 12 18:47:47.802369 kubelet[2765]: E1212 18:47:47.802113 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:47:47.802369 kubelet[2765]: W1212 18:47:47.802172 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:47:47.803924 kubelet[2765]: E1212 18:47:47.803876 2765 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:47:47.804273 kubelet[2765]: E1212 18:47:47.804256 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:47:47.804273 kubelet[2765]: W1212 18:47:47.804273 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:47:47.804347 kubelet[2765]: E1212 18:47:47.804300 2765 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:47:47.804582 kubelet[2765]: E1212 18:47:47.804564 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:47:47.804650 kubelet[2765]: W1212 18:47:47.804578 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:47:47.804650 kubelet[2765]: E1212 18:47:47.804636 2765 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:47:47.805001 kubelet[2765]: E1212 18:47:47.804981 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:47:47.805001 kubelet[2765]: W1212 18:47:47.804999 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:47:47.805065 kubelet[2765]: E1212 18:47:47.805020 2765 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:47:47.805260 kubelet[2765]: E1212 18:47:47.805243 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:47:47.805260 kubelet[2765]: W1212 18:47:47.805256 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:47:47.805321 kubelet[2765]: E1212 18:47:47.805285 2765 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:47:47.805571 kubelet[2765]: E1212 18:47:47.805553 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:47:47.805571 kubelet[2765]: W1212 18:47:47.805568 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:47:47.805708 kubelet[2765]: E1212 18:47:47.805613 2765 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:47:47.805975 kubelet[2765]: E1212 18:47:47.805897 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:47:47.805975 kubelet[2765]: W1212 18:47:47.805923 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:47:47.806042 kubelet[2765]: E1212 18:47:47.805974 2765 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:47:47.806301 kubelet[2765]: E1212 18:47:47.806264 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:47:47.806301 kubelet[2765]: W1212 18:47:47.806276 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:47:47.806361 kubelet[2765]: E1212 18:47:47.806300 2765 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:47:47.806633 kubelet[2765]: E1212 18:47:47.806596 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:47:47.806633 kubelet[2765]: W1212 18:47:47.806608 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:47:47.806797 kubelet[2765]: E1212 18:47:47.806713 2765 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:47:47.806890 kubelet[2765]: E1212 18:47:47.806868 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:47:47.806890 kubelet[2765]: W1212 18:47:47.806886 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:47:47.807177 kubelet[2765]: E1212 18:47:47.807085 2765 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:47:47.807251 kubelet[2765]: E1212 18:47:47.807224 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:47:47.807251 kubelet[2765]: W1212 18:47:47.807246 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:47:47.807320 kubelet[2765]: E1212 18:47:47.807287 2765 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:47:47.807672 kubelet[2765]: E1212 18:47:47.807533 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:47:47.807672 kubelet[2765]: W1212 18:47:47.807551 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:47:47.807672 kubelet[2765]: E1212 18:47:47.807564 2765 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:47:47.807836 kubelet[2765]: E1212 18:47:47.807817 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:47:47.807836 kubelet[2765]: W1212 18:47:47.807830 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:47:47.807916 kubelet[2765]: E1212 18:47:47.807845 2765 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:47:47.808193 kubelet[2765]: E1212 18:47:47.808170 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:47:47.808193 kubelet[2765]: W1212 18:47:47.808186 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:47:47.808279 kubelet[2765]: E1212 18:47:47.808205 2765 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:47:47.808452 kubelet[2765]: E1212 18:47:47.808428 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:47:47.808452 kubelet[2765]: W1212 18:47:47.808443 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:47:47.808554 kubelet[2765]: E1212 18:47:47.808462 2765 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:47:47.809480 kubelet[2765]: E1212 18:47:47.809439 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:47:47.809480 kubelet[2765]: W1212 18:47:47.809454 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:47:47.809480 kubelet[2765]: E1212 18:47:47.809468 2765 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:47:47.809847 kubelet[2765]: E1212 18:47:47.809827 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:47:47.809847 kubelet[2765]: W1212 18:47:47.809845 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:47:47.809924 kubelet[2765]: E1212 18:47:47.809866 2765 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:47:47.810107 kubelet[2765]: E1212 18:47:47.810086 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:47:47.810107 kubelet[2765]: W1212 18:47:47.810102 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:47:47.810204 kubelet[2765]: E1212 18:47:47.810113 2765 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:47:47.814508 containerd[1579]: time="2025-12-12T18:47:47.814463111Z" level=info msg="Container f595687e644ef4b99fecd44d4446fe5ef414ce138afb85f69e6795f6b8dd03e3: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:47:47.825963 containerd[1579]: time="2025-12-12T18:47:47.825911035Z" level=info msg="CreateContainer within sandbox \"d548e172e6312dab6a22e14de1823433bae719b20b31ea01bd939fbdde41dbd2\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f595687e644ef4b99fecd44d4446fe5ef414ce138afb85f69e6795f6b8dd03e3\"" Dec 12 18:47:47.826634 containerd[1579]: time="2025-12-12T18:47:47.826548693Z" level=info msg="StartContainer for \"f595687e644ef4b99fecd44d4446fe5ef414ce138afb85f69e6795f6b8dd03e3\"" Dec 12 18:47:47.829256 containerd[1579]: time="2025-12-12T18:47:47.829223587Z" level=info msg="connecting to shim f595687e644ef4b99fecd44d4446fe5ef414ce138afb85f69e6795f6b8dd03e3" address="unix:///run/containerd/s/0e89c87c35d611f2ca1e6b7837760c34e2da5f90e6827d15f23c333829c117f2" protocol=ttrpc version=3 Dec 12 18:47:47.855795 systemd[1]: Started cri-containerd-f595687e644ef4b99fecd44d4446fe5ef414ce138afb85f69e6795f6b8dd03e3.scope - libcontainer container f595687e644ef4b99fecd44d4446fe5ef414ce138afb85f69e6795f6b8dd03e3. Dec 12 18:47:47.960017 systemd[1]: cri-containerd-f595687e644ef4b99fecd44d4446fe5ef414ce138afb85f69e6795f6b8dd03e3.scope: Deactivated successfully. Dec 12 18:47:47.967683 containerd[1579]: time="2025-12-12T18:47:47.967469730Z" level=info msg="received container exit event container_id:\"f595687e644ef4b99fecd44d4446fe5ef414ce138afb85f69e6795f6b8dd03e3\" id:\"f595687e644ef4b99fecd44d4446fe5ef414ce138afb85f69e6795f6b8dd03e3\" pid:3519 exited_at:{seconds:1765565267 nanos:963220849}" Dec 12 18:47:47.969855 containerd[1579]: time="2025-12-12T18:47:47.969721730Z" level=info msg="StartContainer for \"f595687e644ef4b99fecd44d4446fe5ef414ce138afb85f69e6795f6b8dd03e3\" returns successfully" Dec 12 18:47:48.001986 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f595687e644ef4b99fecd44d4446fe5ef414ce138afb85f69e6795f6b8dd03e3-rootfs.mount: Deactivated successfully. 
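The repeating triplets above come from kubelet's FlexVolume dynamic plugin prober: every probe cycle it invokes each driver under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ with the init argument and expects a JSON status object on stdout. Here the nodeagent~uds/uds binary does not exist, so the call produces no output, and unmarshalling the empty string fails with "unexpected end of JSON input". A minimal sketch in Go of that call pattern (the DriverStatus and probeDriver names are illustrative, not kubelet's actual source):

    // flexprobe.go: minimal sketch of a FlexVolume "init" driver call.
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // DriverStatus mirrors the JSON a FlexVolume driver prints for "init".
    type DriverStatus struct {
    	Status       string          `json:"status"`
    	Message      string          `json:"message,omitempty"`
    	Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func probeDriver(path string) (*DriverStatus, error) {
    	// If the binary is absent, exec fails and output stays empty;
    	// kubelet logs this as the W1212 driver-call.go:149 line above.
    	out, err := exec.Command(path, "init").CombinedOutput()
    	if err != nil {
    		fmt.Printf("FlexVolume: driver call failed: executable: %s, args: [init], error: %v, output: %q\n", path, err, out)
    	}
    	var st DriverStatus
    	// Unmarshalling the empty output yields exactly
    	// "unexpected end of JSON input" (the E1212 driver-call.go:262 line).
    	if err := json.Unmarshal(out, &st); err != nil {
    		return nil, fmt.Errorf("failed to unmarshal output for command: init: %w", err)
    	}
    	return &st, nil
    }

    func main() {
    	if _, err := probeDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"); err != nil {
    		fmt.Println(err)
    	}
    }

The triplets are harmless here: the nodeagent~uds directory is a leftover plugin registration, and the prober simply skips it on every pass.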
Dec 12 18:47:48.663213 kubelet[2765]: E1212 18:47:48.663110 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 12 18:47:48.665176 containerd[1579]: time="2025-12-12T18:47:48.664819299Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\""
Dec 12 18:47:48.683979 kubelet[2765]: I1212 18:47:48.683902 2765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-784f99b76-sgxrd" podStartSLOduration=3.039637001 podStartE2EDuration="5.683876902s" podCreationTimestamp="2025-12-12 18:47:43 +0000 UTC" firstStartedPulling="2025-12-12 18:47:43.607946291 +0000 UTC m=+24.238420619" lastFinishedPulling="2025-12-12 18:47:46.252186162 +0000 UTC m=+26.882660520" observedRunningTime="2025-12-12 18:47:46.67727187 +0000 UTC m=+27.307746218" watchObservedRunningTime="2025-12-12 18:47:48.683876902 +0000 UTC m=+29.314351230"
Dec 12 18:47:49.544121 kubelet[2765]: E1212 18:47:49.544052 2765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vxcm2" podUID="5fa2bd70-6779-4823-84fb-43f19b5a18cb"
Dec 12 18:47:51.544409 kubelet[2765]: E1212 18:47:51.544351 2765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vxcm2" podUID="5fa2bd70-6779-4823-84fb-43f19b5a18cb"
Dec 12 18:47:53.544237 kubelet[2765]: E1212 18:47:53.544156 2765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vxcm2" podUID="5fa2bd70-6779-4823-84fb-43f19b5a18cb"
Dec 12 18:47:53.547729 kubelet[2765]: E1212 18:47:53.547702 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 12 18:47:53.744966 containerd[1579]: time="2025-12-12T18:47:53.744885318Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:47:53.745704 containerd[1579]: time="2025-12-12T18:47:53.745652148Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859"
Dec 12 18:47:53.746945 containerd[1579]: time="2025-12-12T18:47:53.746879641Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:47:53.749396 containerd[1579]: time="2025-12-12T18:47:53.749334300Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:47:53.750038 containerd[1579]: time="2025-12-12T18:47:53.749976395Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 5.085116269s"
Dec 12 18:47:53.750038 containerd[1579]: time="2025-12-12T18:47:53.750030718Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\""
Dec 12 18:47:53.753363 containerd[1579]: time="2025-12-12T18:47:53.753322647Z" level=info msg="CreateContainer within sandbox \"d548e172e6312dab6a22e14de1823433bae719b20b31ea01bd939fbdde41dbd2\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Dec 12 18:47:53.767637 containerd[1579]: time="2025-12-12T18:47:53.766765083Z" level=info msg="Container d0f203199c09c8822f473670222044f9caec67d3fdb14ebc5761fc3e0a63695f: CDI devices from CRI Config.CDIDevices: []"
Dec 12 18:47:53.778641 containerd[1579]: time="2025-12-12T18:47:53.778580283Z" level=info msg="CreateContainer within sandbox \"d548e172e6312dab6a22e14de1823433bae719b20b31ea01bd939fbdde41dbd2\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d0f203199c09c8822f473670222044f9caec67d3fdb14ebc5761fc3e0a63695f\""
Dec 12 18:47:53.779170 containerd[1579]: time="2025-12-12T18:47:53.779125496Z" level=info msg="StartContainer for \"d0f203199c09c8822f473670222044f9caec67d3fdb14ebc5761fc3e0a63695f\""
Dec 12 18:47:53.780553 containerd[1579]: time="2025-12-12T18:47:53.780527618Z" level=info msg="connecting to shim d0f203199c09c8822f473670222044f9caec67d3fdb14ebc5761fc3e0a63695f" address="unix:///run/containerd/s/0e89c87c35d611f2ca1e6b7837760c34e2da5f90e6827d15f23c333829c117f2" protocol=ttrpc version=3
Dec 12 18:47:53.806782 systemd[1]: Started cri-containerd-d0f203199c09c8822f473670222044f9caec67d3fdb14ebc5761fc3e0a63695f.scope - libcontainer container d0f203199c09c8822f473670222044f9caec67d3fdb14ebc5761fc3e0a63695f.
Dec 12 18:47:53.893649 containerd[1579]: time="2025-12-12T18:47:53.893605706Z" level=info msg="StartContainer for \"d0f203199c09c8822f473670222044f9caec67d3fdb14ebc5761fc3e0a63695f\" returns successfully"
Dec 12 18:47:54.678981 kubelet[2765]: E1212 18:47:54.678937 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 12 18:47:54.932862 containerd[1579]: time="2025-12-12T18:47:54.932727441Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 12 18:47:54.936084 systemd[1]: cri-containerd-d0f203199c09c8822f473670222044f9caec67d3fdb14ebc5761fc3e0a63695f.scope: Deactivated successfully.
Dec 12 18:47:54.936442 systemd[1]: cri-containerd-d0f203199c09c8822f473670222044f9caec67d3fdb14ebc5761fc3e0a63695f.scope: Consumed 679ms CPU time, 178.3M memory peak, 4.2M read from disk, 171.3M written to disk.
Dec 12 18:47:54.938744 containerd[1579]: time="2025-12-12T18:47:54.938625052Z" level=info msg="received container exit event container_id:\"d0f203199c09c8822f473670222044f9caec67d3fdb14ebc5761fc3e0a63695f\" id:\"d0f203199c09c8822f473670222044f9caec67d3fdb14ebc5761fc3e0a63695f\" pid:3580 exited_at:{seconds:1765565274 nanos:938421931}"
Dec 12 18:47:54.960370 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d0f203199c09c8822f473670222044f9caec67d3fdb14ebc5761fc3e0a63695f-rootfs.mount: Deactivated successfully.
Dec 12 18:47:55.009200 kubelet[2765]: I1212 18:47:55.009164 2765 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Dec 12 18:47:55.140188 systemd[1]: Created slice kubepods-besteffort-pod65ec234c_cc41_4edc_bf98_a4385884a7fa.slice - libcontainer container kubepods-besteffort-pod65ec234c_cc41_4edc_bf98_a4385884a7fa.slice.
Dec 12 18:47:55.146205 systemd[1]: Created slice kubepods-burstable-pod58c8cdb2_a8ab_4c32_9739_b6eb5595d4d2.slice - libcontainer container kubepods-burstable-pod58c8cdb2_a8ab_4c32_9739_b6eb5595d4d2.slice.
Dec 12 18:47:55.152758 systemd[1]: Created slice kubepods-besteffort-pod50a60128_897b_496b_b3a2_5d063bc81d6b.slice - libcontainer container kubepods-besteffort-pod50a60128_897b_496b_b3a2_5d063bc81d6b.slice.
Dec 12 18:47:55.158038 systemd[1]: Created slice kubepods-besteffort-pod25b4857d_c252_4226_a62a_0019d4b3cac2.slice - libcontainer container kubepods-besteffort-pod25b4857d_c252_4226_a62a_0019d4b3cac2.slice.
Dec 12 18:47:55.164253 systemd[1]: Created slice kubepods-besteffort-podbbf4b148_f944_43dd_959c_c1fed4f278a2.slice - libcontainer container kubepods-besteffort-podbbf4b148_f944_43dd_959c_c1fed4f278a2.slice.
Dec 12 18:47:55.169648 systemd[1]: Created slice kubepods-besteffort-pod13ccfabf_6529_4c53_843d_bc0433af6501.slice - libcontainer container kubepods-besteffort-pod13ccfabf_6529_4c53_843d_bc0433af6501.slice.
Dec 12 18:47:55.174178 systemd[1]: Created slice kubepods-burstable-pod654900fe_d2aa_4e5a_ada8_5cb594e19e6a.slice - libcontainer container kubepods-burstable-pod654900fe_d2aa_4e5a_ada8_5cb594e19e6a.slice.
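With the systemd cgroup driver, kubelet asks systemd for one slice per pod, and the unit name encodes the QoS class plus the pod UID with its dashes mapped to underscores, exactly as in the Created slice lines above. An illustrative Go reconstruction of that naming (not kubelet's actual code; the function name is made up):

    // podslice.go: illustrative reconstruction of the per-pod slice names
    // that systemd logs above when the systemd cgroup driver is in use.
    package main

    import (
    	"fmt"
    	"strings"
    )

    // sliceName builds e.g. kubepods-besteffort-pod<uid>.slice; dashes in
    // the pod UID become underscores to satisfy systemd unit-name rules.
    func sliceName(qos, uid string) string {
    	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
    }

    func main() {
    	fmt.Println(sliceName("besteffort", "65ec234c-cc41-4edc-bf98-a4385884a7fa"))
    	// kubepods-besteffort-pod65ec234c_cc41_4edc_bf98_a4385884a7fa.slice
    }

The burstable vs. besteffort segment reflects each pod's QoS class: the two coredns pods set resource requests (burstable), while the Calico pods here request nothing (besteffort).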
Dec 12 18:47:55.254308 kubelet[2765]: I1212 18:47:55.254159 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65ec234c-cc41-4edc-bf98-a4385884a7fa-whisker-ca-bundle\") pod \"whisker-84c69b5b7d-t2tzm\" (UID: \"65ec234c-cc41-4edc-bf98-a4385884a7fa\") " pod="calico-system/whisker-84c69b5b7d-t2tzm"
Dec 12 18:47:55.255907 kubelet[2765]: I1212 18:47:55.254344 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rw2h2\" (UniqueName: \"kubernetes.io/projected/65ec234c-cc41-4edc-bf98-a4385884a7fa-kube-api-access-rw2h2\") pod \"whisker-84c69b5b7d-t2tzm\" (UID: \"65ec234c-cc41-4edc-bf98-a4385884a7fa\") " pod="calico-system/whisker-84c69b5b7d-t2tzm"
Dec 12 18:47:55.255907 kubelet[2765]: I1212 18:47:55.254438 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74j97\" (UniqueName: \"kubernetes.io/projected/654900fe-d2aa-4e5a-ada8-5cb594e19e6a-kube-api-access-74j97\") pod \"coredns-668d6bf9bc-4f5zg\" (UID: \"654900fe-d2aa-4e5a-ada8-5cb594e19e6a\") " pod="kube-system/coredns-668d6bf9bc-4f5zg"
Dec 12 18:47:55.255907 kubelet[2765]: I1212 18:47:55.254519 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/25b4857d-c252-4226-a62a-0019d4b3cac2-calico-apiserver-certs\") pod \"calico-apiserver-846d75cf95-ph74m\" (UID: \"25b4857d-c252-4226-a62a-0019d4b3cac2\") " pod="calico-apiserver/calico-apiserver-846d75cf95-ph74m"
Dec 12 18:47:55.255907 kubelet[2765]: I1212 18:47:55.254537 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8rcn\" (UniqueName: \"kubernetes.io/projected/50a60128-897b-496b-b3a2-5d063bc81d6b-kube-api-access-k8rcn\") pod \"calico-apiserver-846d75cf95-jcxmc\" (UID: \"50a60128-897b-496b-b3a2-5d063bc81d6b\") " pod="calico-apiserver/calico-apiserver-846d75cf95-jcxmc"
Dec 12 18:47:55.255907 kubelet[2765]: I1212 18:47:55.254833 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcxpf\" (UniqueName: \"kubernetes.io/projected/13ccfabf-6529-4c53-843d-bc0433af6501-kube-api-access-vcxpf\") pod \"goldmane-666569f655-b9hfz\" (UID: \"13ccfabf-6529-4c53-843d-bc0433af6501\") " pod="calico-system/goldmane-666569f655-b9hfz"
Dec 12 18:47:55.256239 kubelet[2765]: I1212 18:47:55.254933 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/50a60128-897b-496b-b3a2-5d063bc81d6b-calico-apiserver-certs\") pod \"calico-apiserver-846d75cf95-jcxmc\" (UID: \"50a60128-897b-496b-b3a2-5d063bc81d6b\") " pod="calico-apiserver/calico-apiserver-846d75cf95-jcxmc"
Dec 12 18:47:55.256239 kubelet[2765]: I1212 18:47:55.254998 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fk5d\" (UniqueName: \"kubernetes.io/projected/25b4857d-c252-4226-a62a-0019d4b3cac2-kube-api-access-8fk5d\") pod \"calico-apiserver-846d75cf95-ph74m\" (UID: \"25b4857d-c252-4226-a62a-0019d4b3cac2\") " pod="calico-apiserver/calico-apiserver-846d75cf95-ph74m"
Dec 12 18:47:55.256239 kubelet[2765]: I1212 18:47:55.255068 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13ccfabf-6529-4c53-843d-bc0433af6501-config\") pod \"goldmane-666569f655-b9hfz\" (UID: \"13ccfabf-6529-4c53-843d-bc0433af6501\") " pod="calico-system/goldmane-666569f655-b9hfz"
Dec 12 18:47:55.256239 kubelet[2765]: I1212 18:47:55.255175 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/58c8cdb2-a8ab-4c32-9739-b6eb5595d4d2-config-volume\") pod \"coredns-668d6bf9bc-mln9t\" (UID: \"58c8cdb2-a8ab-4c32-9739-b6eb5595d4d2\") " pod="kube-system/coredns-668d6bf9bc-mln9t"
Dec 12 18:47:55.256239 kubelet[2765]: I1212 18:47:55.255214 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13ccfabf-6529-4c53-843d-bc0433af6501-goldmane-ca-bundle\") pod \"goldmane-666569f655-b9hfz\" (UID: \"13ccfabf-6529-4c53-843d-bc0433af6501\") " pod="calico-system/goldmane-666569f655-b9hfz"
Dec 12 18:47:55.256448 kubelet[2765]: I1212 18:47:55.255234 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r57lm\" (UniqueName: \"kubernetes.io/projected/bbf4b148-f944-43dd-959c-c1fed4f278a2-kube-api-access-r57lm\") pod \"calico-kube-controllers-7cdf74c998-54c4h\" (UID: \"bbf4b148-f944-43dd-959c-c1fed4f278a2\") " pod="calico-system/calico-kube-controllers-7cdf74c998-54c4h"
Dec 12 18:47:55.256448 kubelet[2765]: I1212 18:47:55.255251 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dsbl\" (UniqueName: \"kubernetes.io/projected/58c8cdb2-a8ab-4c32-9739-b6eb5595d4d2-kube-api-access-5dsbl\") pod \"coredns-668d6bf9bc-mln9t\" (UID: \"58c8cdb2-a8ab-4c32-9739-b6eb5595d4d2\") " pod="kube-system/coredns-668d6bf9bc-mln9t"
Dec 12 18:47:55.256448 kubelet[2765]: I1212 18:47:55.255267 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/65ec234c-cc41-4edc-bf98-a4385884a7fa-whisker-backend-key-pair\") pod \"whisker-84c69b5b7d-t2tzm\" (UID: \"65ec234c-cc41-4edc-bf98-a4385884a7fa\") " pod="calico-system/whisker-84c69b5b7d-t2tzm"
Dec 12 18:47:55.256448 kubelet[2765]: I1212 18:47:55.255282 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/13ccfabf-6529-4c53-843d-bc0433af6501-goldmane-key-pair\") pod \"goldmane-666569f655-b9hfz\" (UID: \"13ccfabf-6529-4c53-843d-bc0433af6501\") " pod="calico-system/goldmane-666569f655-b9hfz"
Dec 12 18:47:55.256448 kubelet[2765]: I1212 18:47:55.255297 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bbf4b148-f944-43dd-959c-c1fed4f278a2-tigera-ca-bundle\") pod \"calico-kube-controllers-7cdf74c998-54c4h\" (UID: \"bbf4b148-f944-43dd-959c-c1fed4f278a2\") " pod="calico-system/calico-kube-controllers-7cdf74c998-54c4h"
Dec 12 18:47:55.256637 kubelet[2765]: I1212 18:47:55.255396 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/654900fe-d2aa-4e5a-ada8-5cb594e19e6a-config-volume\") pod \"coredns-668d6bf9bc-4f5zg\" (UID: \"654900fe-d2aa-4e5a-ada8-5cb594e19e6a\") " pod="kube-system/coredns-668d6bf9bc-4f5zg"
Dec 12 18:47:55.445071 containerd[1579]: time="2025-12-12T18:47:55.444988090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-84c69b5b7d-t2tzm,Uid:65ec234c-cc41-4edc-bf98-a4385884a7fa,Namespace:calico-system,Attempt:0,}"
Dec 12 18:47:55.450775 kubelet[2765]: E1212 18:47:55.450707 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 12 18:47:55.451763 containerd[1579]: time="2025-12-12T18:47:55.451689277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mln9t,Uid:58c8cdb2-a8ab-4c32-9739-b6eb5595d4d2,Namespace:kube-system,Attempt:0,}"
Dec 12 18:47:55.455912 containerd[1579]: time="2025-12-12T18:47:55.455854505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-846d75cf95-jcxmc,Uid:50a60128-897b-496b-b3a2-5d063bc81d6b,Namespace:calico-apiserver,Attempt:0,}"
Dec 12 18:47:55.462034 containerd[1579]: time="2025-12-12T18:47:55.461986726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-846d75cf95-ph74m,Uid:25b4857d-c252-4226-a62a-0019d4b3cac2,Namespace:calico-apiserver,Attempt:0,}"
Dec 12 18:47:55.470814 containerd[1579]: time="2025-12-12T18:47:55.470750555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cdf74c998-54c4h,Uid:bbf4b148-f944-43dd-959c-c1fed4f278a2,Namespace:calico-system,Attempt:0,}"
Dec 12 18:47:55.477210 kubelet[2765]: E1212 18:47:55.477166 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 12 18:47:55.479393 containerd[1579]: time="2025-12-12T18:47:55.479328225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4f5zg,Uid:654900fe-d2aa-4e5a-ada8-5cb594e19e6a,Namespace:kube-system,Attempt:0,}"
Dec 12 18:47:55.480161 containerd[1579]: time="2025-12-12T18:47:55.480131874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-b9hfz,Uid:13ccfabf-6529-4c53-843d-bc0433af6501,Namespace:calico-system,Attempt:0,}"
Dec 12 18:47:55.552004 systemd[1]: Created slice kubepods-besteffort-pod5fa2bd70_6779_4823_84fb_43f19b5a18cb.slice - libcontainer container kubepods-besteffort-pod5fa2bd70_6779_4823_84fb_43f19b5a18cb.slice.
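Each RunPodSandbox line is containerd echoing the CRI request kubelet just sent over gRPC; the metadata quadruple (Name, Uid, Namespace, Attempt) identifies the sandbox. A sketch of issuing such a request with the published CRI types from k8s.io/cri-api (the client wiring and socket path are stock defaults assumed here, not taken from this log):

    // sandboxreq.go: sketch of the CRI call behind the RunPodSandbox lines.
    package main

    import (
    	"context"
    	"fmt"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	// Default containerd CRI socket; an assumption for this sketch.
    	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()

    	client := runtimeapi.NewRuntimeServiceClient(conn)
    	// Metadata quadruple copied from the whisker RunPodSandbox line above.
    	resp, err := client.RunPodSandbox(context.Background(), &runtimeapi.RunPodSandboxRequest{
    		Config: &runtimeapi.PodSandboxConfig{
    			Metadata: &runtimeapi.PodSandboxMetadata{
    				Name:      "whisker-84c69b5b7d-t2tzm",
    				Uid:       "65ec234c-cc41-4edc-bf98-a4385884a7fa",
    				Namespace: "calico-system",
    				Attempt:   0,
    			},
    		},
    	})
    	if err != nil {
    		// With the CNI plugin still uninitialized, this surfaces as the
    		// "rpc error: code = Unknown desc = failed to setup network..."
    		// lines that follow below.
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("sandbox id:", resp.PodSandboxId)
    }
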
Dec 12 18:47:55.560086 containerd[1579]: time="2025-12-12T18:47:55.558046639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vxcm2,Uid:5fa2bd70-6779-4823-84fb-43f19b5a18cb,Namespace:calico-system,Attempt:0,}"
Dec 12 18:47:55.613876 containerd[1579]: time="2025-12-12T18:47:55.613794978Z" level=error msg="Failed to destroy network for sandbox \"f04f9c9b0b1d6dd859927139b96ed60893a123869104579992ba1561b35d86e6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 12 18:47:55.616217 containerd[1579]: time="2025-12-12T18:47:55.616067935Z" level=error msg="Failed to destroy network for sandbox \"28b247ec89d6a58ffa5b009131acfda2ad2d3e88931c172f23deb6ed208144c4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 12 18:47:55.645872 containerd[1579]: time="2025-12-12T18:47:55.630289448Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-846d75cf95-ph74m,Uid:25b4857d-c252-4226-a62a-0019d4b3cac2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"28b247ec89d6a58ffa5b009131acfda2ad2d3e88931c172f23deb6ed208144c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 12 18:47:55.662527 kubelet[2765]: E1212 18:47:55.662473 2765 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28b247ec89d6a58ffa5b009131acfda2ad2d3e88931c172f23deb6ed208144c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 12 18:47:55.662527 kubelet[2765]: E1212 18:47:55.662541 2765 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28b247ec89d6a58ffa5b009131acfda2ad2d3e88931c172f23deb6ed208144c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-846d75cf95-ph74m"
Dec 12 18:47:55.662871 kubelet[2765]: E1212 18:47:55.662566 2765 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28b247ec89d6a58ffa5b009131acfda2ad2d3e88931c172f23deb6ed208144c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-846d75cf95-ph74m"
Dec 12 18:47:55.662871 kubelet[2765]: E1212 18:47:55.662637 2765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-846d75cf95-ph74m_calico-apiserver(25b4857d-c252-4226-a62a-0019d4b3cac2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-846d75cf95-ph74m_calico-apiserver(25b4857d-c252-4226-a62a-0019d4b3cac2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"28b247ec89d6a58ffa5b009131acfda2ad2d3e88931c172f23deb6ed208144c4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-846d75cf95-ph74m" podUID="25b4857d-c252-4226-a62a-0019d4b3cac2"
Dec 12 18:47:55.683465 kubelet[2765]: E1212 18:47:55.683426 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 12 18:47:55.712842 containerd[1579]: time="2025-12-12T18:47:55.630294978Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-84c69b5b7d-t2tzm,Uid:65ec234c-cc41-4edc-bf98-a4385884a7fa,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f04f9c9b0b1d6dd859927139b96ed60893a123869104579992ba1561b35d86e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 12 18:47:55.713054 containerd[1579]: time="2025-12-12T18:47:55.634151237Z" level=error msg="Failed to destroy network for sandbox \"a2e072dee3460a448892320a7f207d2f07c72c18946938b5c84b0e2582b287e9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 12 18:47:55.713096 kubelet[2765]: E1212 18:47:55.713019 2765 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f04f9c9b0b1d6dd859927139b96ed60893a123869104579992ba1561b35d86e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 12 18:47:55.713096 kubelet[2765]: E1212 18:47:55.713074 2765 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f04f9c9b0b1d6dd859927139b96ed60893a123869104579992ba1561b35d86e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-84c69b5b7d-t2tzm"
Dec 12 18:47:55.713172 containerd[1579]: time="2025-12-12T18:47:55.685179657Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\""
Dec 12 18:47:55.713172 containerd[1579]: time="2025-12-12T18:47:55.656783487Z" level=error msg="Failed to destroy network for sandbox \"0da8f791217424674bcb825aa565e9a133e429dc1d75fe7e3361d8b515ed6f1e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 12 18:47:55.713320 kubelet[2765]: E1212 18:47:55.713095 2765 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f04f9c9b0b1d6dd859927139b96ed60893a123869104579992ba1561b35d86e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-84c69b5b7d-t2tzm"
Dec 12 18:47:55.713320 kubelet[2765]: E1212 18:47:55.713139 2765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-84c69b5b7d-t2tzm_calico-system(65ec234c-cc41-4edc-bf98-a4385884a7fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-84c69b5b7d-t2tzm_calico-system(65ec234c-cc41-4edc-bf98-a4385884a7fa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f04f9c9b0b1d6dd859927139b96ed60893a123869104579992ba1561b35d86e6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-84c69b5b7d-t2tzm" podUID="65ec234c-cc41-4edc-bf98-a4385884a7fa"
Dec 12 18:47:55.713430 containerd[1579]: time="2025-12-12T18:47:55.646188921Z" level=error msg="Failed to destroy network for sandbox \"5722afd0a0ef0c32a3e98227ed2f318998d4eb097e136e35166f3c1df0f5a84a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 12 18:47:55.713617 containerd[1579]: time="2025-12-12T18:47:55.664607082Z" level=error msg="Failed to destroy network for sandbox \"dcbb224287923bf3303ba376c10e2f14e4482895c5b2f46fea7a5d055316ec34\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 12 18:47:55.713779 containerd[1579]: time="2025-12-12T18:47:55.661088638Z" level=error msg="Failed to destroy network for sandbox \"29785095b84b5db314f5e293d8e75e4c1bc64f306f55f1a2d221660240174e53\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 12 18:47:55.714040 containerd[1579]: time="2025-12-12T18:47:55.669019134Z" level=error msg="Failed to destroy network for sandbox \"57d7c46227345710c85c01027d411466cd3e2231218b00ca97896d0b496f824a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 12 18:47:55.716103 containerd[1579]: time="2025-12-12T18:47:55.715941366Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mln9t,Uid:58c8cdb2-a8ab-4c32-9739-b6eb5595d4d2,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0da8f791217424674bcb825aa565e9a133e429dc1d75fe7e3361d8b515ed6f1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 12 18:47:55.716384 kubelet[2765]: E1212 18:47:55.716340 2765 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0da8f791217424674bcb825aa565e9a133e429dc1d75fe7e3361d8b515ed6f1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 12 18:47:55.716454 kubelet[2765]: E1212 18:47:55.716419 2765 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0da8f791217424674bcb825aa565e9a133e429dc1d75fe7e3361d8b515ed6f1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-mln9t"
Dec 12 18:47:55.716497 kubelet[2765]: E1212 18:47:55.716452 2765 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0da8f791217424674bcb825aa565e9a133e429dc1d75fe7e3361d8b515ed6f1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-mln9t"
Dec 12 18:47:55.716545 kubelet[2765]: E1212 18:47:55.716513 2765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-mln9t_kube-system(58c8cdb2-a8ab-4c32-9739-b6eb5595d4d2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-mln9t_kube-system(58c8cdb2-a8ab-4c32-9739-b6eb5595d4d2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0da8f791217424674bcb825aa565e9a133e429dc1d75fe7e3361d8b515ed6f1e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-mln9t" podUID="58c8cdb2-a8ab-4c32-9739-b6eb5595d4d2"
Dec 12 18:47:55.717140 containerd[1579]: time="2025-12-12T18:47:55.717101674Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-846d75cf95-jcxmc,Uid:50a60128-897b-496b-b3a2-5d063bc81d6b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2e072dee3460a448892320a7f207d2f07c72c18946938b5c84b0e2582b287e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 12 18:47:55.717281 kubelet[2765]: E1212 18:47:55.717250 2765 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2e072dee3460a448892320a7f207d2f07c72c18946938b5c84b0e2582b287e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 12 18:47:55.717324 kubelet[2765]: E1212 18:47:55.717292 2765 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2e072dee3460a448892320a7f207d2f07c72c18946938b5c84b0e2582b287e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-846d75cf95-jcxmc"
Dec 12 18:47:55.717324 kubelet[2765]: E1212 18:47:55.717313 2765 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2e072dee3460a448892320a7f207d2f07c72c18946938b5c84b0e2582b287e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted
/var/lib/calico/" pod="calico-apiserver/calico-apiserver-846d75cf95-jcxmc" Dec 12 18:47:55.717377 kubelet[2765]: E1212 18:47:55.717349 2765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-846d75cf95-jcxmc_calico-apiserver(50a60128-897b-496b-b3a2-5d063bc81d6b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-846d75cf95-jcxmc_calico-apiserver(50a60128-897b-496b-b3a2-5d063bc81d6b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a2e072dee3460a448892320a7f207d2f07c72c18946938b5c84b0e2582b287e9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-846d75cf95-jcxmc" podUID="50a60128-897b-496b-b3a2-5d063bc81d6b" Dec 12 18:47:55.718169 containerd[1579]: time="2025-12-12T18:47:55.718126909Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4f5zg,Uid:654900fe-d2aa-4e5a-ada8-5cb594e19e6a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"29785095b84b5db314f5e293d8e75e4c1bc64f306f55f1a2d221660240174e53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:47:55.718389 kubelet[2765]: E1212 18:47:55.718321 2765 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29785095b84b5db314f5e293d8e75e4c1bc64f306f55f1a2d221660240174e53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:47:55.718389 kubelet[2765]: E1212 18:47:55.718377 2765 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29785095b84b5db314f5e293d8e75e4c1bc64f306f55f1a2d221660240174e53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-4f5zg" Dec 12 18:47:55.718497 kubelet[2765]: E1212 18:47:55.718406 2765 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29785095b84b5db314f5e293d8e75e4c1bc64f306f55f1a2d221660240174e53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-4f5zg" Dec 12 18:47:55.718497 kubelet[2765]: E1212 18:47:55.718452 2765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-4f5zg_kube-system(654900fe-d2aa-4e5a-ada8-5cb594e19e6a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-4f5zg_kube-system(654900fe-d2aa-4e5a-ada8-5cb594e19e6a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"29785095b84b5db314f5e293d8e75e4c1bc64f306f55f1a2d221660240174e53\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-4f5zg" podUID="654900fe-d2aa-4e5a-ada8-5cb594e19e6a" Dec 12 18:47:55.719290 containerd[1579]: time="2025-12-12T18:47:55.719230109Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cdf74c998-54c4h,Uid:bbf4b148-f944-43dd-959c-c1fed4f278a2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"57d7c46227345710c85c01027d411466cd3e2231218b00ca97896d0b496f824a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:47:55.719515 kubelet[2765]: E1212 18:47:55.719473 2765 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57d7c46227345710c85c01027d411466cd3e2231218b00ca97896d0b496f824a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:47:55.719604 kubelet[2765]: E1212 18:47:55.719524 2765 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57d7c46227345710c85c01027d411466cd3e2231218b00ca97896d0b496f824a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7cdf74c998-54c4h" Dec 12 18:47:55.719604 kubelet[2765]: E1212 18:47:55.719546 2765 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57d7c46227345710c85c01027d411466cd3e2231218b00ca97896d0b496f824a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7cdf74c998-54c4h" Dec 12 18:47:55.719671 kubelet[2765]: E1212 18:47:55.719581 2765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7cdf74c998-54c4h_calico-system(bbf4b148-f944-43dd-959c-c1fed4f278a2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7cdf74c998-54c4h_calico-system(bbf4b148-f944-43dd-959c-c1fed4f278a2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"57d7c46227345710c85c01027d411466cd3e2231218b00ca97896d0b496f824a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7cdf74c998-54c4h" podUID="bbf4b148-f944-43dd-959c-c1fed4f278a2" Dec 12 18:47:55.720272 containerd[1579]: time="2025-12-12T18:47:55.720184821Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-b9hfz,Uid:13ccfabf-6529-4c53-843d-bc0433af6501,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5722afd0a0ef0c32a3e98227ed2f318998d4eb097e136e35166f3c1df0f5a84a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:47:55.720456 kubelet[2765]: E1212 18:47:55.720410 2765 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5722afd0a0ef0c32a3e98227ed2f318998d4eb097e136e35166f3c1df0f5a84a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:47:55.720496 kubelet[2765]: E1212 18:47:55.720477 2765 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5722afd0a0ef0c32a3e98227ed2f318998d4eb097e136e35166f3c1df0f5a84a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-b9hfz" Dec 12 18:47:55.720523 kubelet[2765]: E1212 18:47:55.720500 2765 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5722afd0a0ef0c32a3e98227ed2f318998d4eb097e136e35166f3c1df0f5a84a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-b9hfz" Dec 12 18:47:55.720573 kubelet[2765]: E1212 18:47:55.720547 2765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-b9hfz_calico-system(13ccfabf-6529-4c53-843d-bc0433af6501)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-b9hfz_calico-system(13ccfabf-6529-4c53-843d-bc0433af6501)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5722afd0a0ef0c32a3e98227ed2f318998d4eb097e136e35166f3c1df0f5a84a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-b9hfz" podUID="13ccfabf-6529-4c53-843d-bc0433af6501" Dec 12 18:47:55.721579 containerd[1579]: time="2025-12-12T18:47:55.721507363Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vxcm2,Uid:5fa2bd70-6779-4823-84fb-43f19b5a18cb,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dcbb224287923bf3303ba376c10e2f14e4482895c5b2f46fea7a5d055316ec34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:47:55.721830 kubelet[2765]: E1212 18:47:55.721798 2765 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dcbb224287923bf3303ba376c10e2f14e4482895c5b2f46fea7a5d055316ec34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:47:55.721897 kubelet[2765]: E1212 18:47:55.721841 2765 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"dcbb224287923bf3303ba376c10e2f14e4482895c5b2f46fea7a5d055316ec34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vxcm2" Dec 12 18:47:55.721897 kubelet[2765]: E1212 18:47:55.721863 2765 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dcbb224287923bf3303ba376c10e2f14e4482895c5b2f46fea7a5d055316ec34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vxcm2" Dec 12 18:47:55.721987 kubelet[2765]: E1212 18:47:55.721896 2765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-vxcm2_calico-system(5fa2bd70-6779-4823-84fb-43f19b5a18cb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-vxcm2_calico-system(5fa2bd70-6779-4823-84fb-43f19b5a18cb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dcbb224287923bf3303ba376c10e2f14e4482895c5b2f46fea7a5d055316ec34\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-vxcm2" podUID="5fa2bd70-6779-4823-84fb-43f19b5a18cb" Dec 12 18:48:03.350632 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount177019339.mount: Deactivated successfully. Dec 12 18:48:05.941888 containerd[1579]: time="2025-12-12T18:48:05.941817686Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:48:06.284017 containerd[1579]: time="2025-12-12T18:48:06.283848602Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Dec 12 18:48:06.381516 containerd[1579]: time="2025-12-12T18:48:06.381441895Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:48:06.505426 containerd[1579]: time="2025-12-12T18:48:06.505369510Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:48:06.505954 containerd[1579]: time="2025-12-12T18:48:06.505904334Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 10.792820381s" Dec 12 18:48:06.505954 containerd[1579]: time="2025-12-12T18:48:06.505938418Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Dec 12 18:48:06.516080 containerd[1579]: time="2025-12-12T18:48:06.516034237Z" level=info msg="CreateContainer within sandbox \"d548e172e6312dab6a22e14de1823433bae719b20b31ea01bd939fbdde41dbd2\" for container 
&ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 12 18:48:07.056177 containerd[1579]: time="2025-12-12T18:48:07.056065180Z" level=info msg="Container 9d44a63ec3134362ec8263f54410d06f33d2bc42eb9409c28186b37f31fcb2ff: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:48:07.071810 containerd[1579]: time="2025-12-12T18:48:07.071747379Z" level=info msg="CreateContainer within sandbox \"d548e172e6312dab6a22e14de1823433bae719b20b31ea01bd939fbdde41dbd2\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"9d44a63ec3134362ec8263f54410d06f33d2bc42eb9409c28186b37f31fcb2ff\"" Dec 12 18:48:07.072367 containerd[1579]: time="2025-12-12T18:48:07.072331615Z" level=info msg="StartContainer for \"9d44a63ec3134362ec8263f54410d06f33d2bc42eb9409c28186b37f31fcb2ff\"" Dec 12 18:48:07.077426 containerd[1579]: time="2025-12-12T18:48:07.077376910Z" level=info msg="connecting to shim 9d44a63ec3134362ec8263f54410d06f33d2bc42eb9409c28186b37f31fcb2ff" address="unix:///run/containerd/s/0e89c87c35d611f2ca1e6b7837760c34e2da5f90e6827d15f23c333829c117f2" protocol=ttrpc version=3 Dec 12 18:48:07.101829 systemd[1]: Started cri-containerd-9d44a63ec3134362ec8263f54410d06f33d2bc42eb9409c28186b37f31fcb2ff.scope - libcontainer container 9d44a63ec3134362ec8263f54410d06f33d2bc42eb9409c28186b37f31fcb2ff. Dec 12 18:48:07.296847 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 12 18:48:07.297764 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Dec 12 18:48:07.362019 containerd[1579]: time="2025-12-12T18:48:07.361871829Z" level=info msg="StartContainer for \"9d44a63ec3134362ec8263f54410d06f33d2bc42eb9409c28186b37f31fcb2ff\" returns successfully" Dec 12 18:48:07.544750 kubelet[2765]: E1212 18:48:07.544706 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:48:07.545271 containerd[1579]: time="2025-12-12T18:48:07.544915123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-846d75cf95-ph74m,Uid:25b4857d-c252-4226-a62a-0019d4b3cac2,Namespace:calico-apiserver,Attempt:0,}" Dec 12 18:48:07.545908 containerd[1579]: time="2025-12-12T18:48:07.545856730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mln9t,Uid:58c8cdb2-a8ab-4c32-9739-b6eb5595d4d2,Namespace:kube-system,Attempt:0,}" Dec 12 18:48:07.712800 kubelet[2765]: E1212 18:48:07.712753 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:48:07.911610 kubelet[2765]: I1212 18:48:07.911135 2765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-2vv82" podStartSLOduration=2.216231475 podStartE2EDuration="24.911103943s" podCreationTimestamp="2025-12-12 18:47:43 +0000 UTC" firstStartedPulling="2025-12-12 18:47:43.811923369 +0000 UTC m=+24.442397697" lastFinishedPulling="2025-12-12 18:48:06.506795836 +0000 UTC m=+47.137270165" observedRunningTime="2025-12-12 18:48:07.907270181 +0000 UTC m=+48.537744519" watchObservedRunningTime="2025-12-12 18:48:07.911103943 +0000 UTC m=+48.541578271" Dec 12 18:48:08.044973 kubelet[2765]: I1212 18:48:08.044840 2765 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rw2h2\" (UniqueName: 
\"kubernetes.io/projected/65ec234c-cc41-4edc-bf98-a4385884a7fa-kube-api-access-rw2h2\") pod \"65ec234c-cc41-4edc-bf98-a4385884a7fa\" (UID: \"65ec234c-cc41-4edc-bf98-a4385884a7fa\") " Dec 12 18:48:08.044973 kubelet[2765]: I1212 18:48:08.044898 2765 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/65ec234c-cc41-4edc-bf98-a4385884a7fa-whisker-backend-key-pair\") pod \"65ec234c-cc41-4edc-bf98-a4385884a7fa\" (UID: \"65ec234c-cc41-4edc-bf98-a4385884a7fa\") " Dec 12 18:48:08.044973 kubelet[2765]: I1212 18:48:08.044921 2765 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65ec234c-cc41-4edc-bf98-a4385884a7fa-whisker-ca-bundle\") pod \"65ec234c-cc41-4edc-bf98-a4385884a7fa\" (UID: \"65ec234c-cc41-4edc-bf98-a4385884a7fa\") " Dec 12 18:48:08.045706 kubelet[2765]: I1212 18:48:08.045445 2765 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65ec234c-cc41-4edc-bf98-a4385884a7fa-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "65ec234c-cc41-4edc-bf98-a4385884a7fa" (UID: "65ec234c-cc41-4edc-bf98-a4385884a7fa"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 18:48:08.052834 kubelet[2765]: I1212 18:48:08.052768 2765 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65ec234c-cc41-4edc-bf98-a4385884a7fa-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "65ec234c-cc41-4edc-bf98-a4385884a7fa" (UID: "65ec234c-cc41-4edc-bf98-a4385884a7fa"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 18:48:08.053387 kubelet[2765]: I1212 18:48:08.053351 2765 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65ec234c-cc41-4edc-bf98-a4385884a7fa-kube-api-access-rw2h2" (OuterVolumeSpecName: "kube-api-access-rw2h2") pod "65ec234c-cc41-4edc-bf98-a4385884a7fa" (UID: "65ec234c-cc41-4edc-bf98-a4385884a7fa"). InnerVolumeSpecName "kube-api-access-rw2h2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 18:48:08.055383 systemd[1]: var-lib-kubelet-pods-65ec234c\x2dcc41\x2d4edc\x2dbf98\x2da4385884a7fa-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drw2h2.mount: Deactivated successfully. Dec 12 18:48:08.055519 systemd[1]: var-lib-kubelet-pods-65ec234c\x2dcc41\x2d4edc\x2dbf98\x2da4385884a7fa-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Dec 12 18:48:08.145982 kubelet[2765]: I1212 18:48:08.145901 2765 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rw2h2\" (UniqueName: \"kubernetes.io/projected/65ec234c-cc41-4edc-bf98-a4385884a7fa-kube-api-access-rw2h2\") on node \"localhost\" DevicePath \"\"" Dec 12 18:48:08.145982 kubelet[2765]: I1212 18:48:08.145944 2765 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/65ec234c-cc41-4edc-bf98-a4385884a7fa-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Dec 12 18:48:08.145982 kubelet[2765]: I1212 18:48:08.145954 2765 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65ec234c-cc41-4edc-bf98-a4385884a7fa-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Dec 12 18:48:08.544408 containerd[1579]: time="2025-12-12T18:48:08.544339249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cdf74c998-54c4h,Uid:bbf4b148-f944-43dd-959c-c1fed4f278a2,Namespace:calico-system,Attempt:0,}" Dec 12 18:48:08.544959 containerd[1579]: time="2025-12-12T18:48:08.544359878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vxcm2,Uid:5fa2bd70-6779-4823-84fb-43f19b5a18cb,Namespace:calico-system,Attempt:0,}" Dec 12 18:48:08.715618 kubelet[2765]: E1212 18:48:08.715488 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:48:08.722969 systemd[1]: Removed slice kubepods-besteffort-pod65ec234c_cc41_4edc_bf98_a4385884a7fa.slice - libcontainer container kubepods-besteffort-pod65ec234c_cc41_4edc_bf98_a4385884a7fa.slice. 
Dec 12 18:48:08.745793 systemd-networkd[1493]: cali528051c6475: Link UP Dec 12 18:48:08.747753 systemd-networkd[1493]: cali528051c6475: Gained carrier Dec 12 18:48:08.823290 containerd[1579]: 2025-12-12 18:48:07.770 [INFO][3948] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 18:48:08.823290 containerd[1579]: 2025-12-12 18:48:07.921 [INFO][3948] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--846d75cf95--ph74m-eth0 calico-apiserver-846d75cf95- calico-apiserver 25b4857d-c252-4226-a62a-0019d4b3cac2 827 0 2025-12-12 18:47:38 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:846d75cf95 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-846d75cf95-ph74m eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali528051c6475 [] [] }} ContainerID="b2982f9544778bf2ca9909716cb84fe398149cf367142cb6d935207882abeaf7" Namespace="calico-apiserver" Pod="calico-apiserver-846d75cf95-ph74m" WorkloadEndpoint="localhost-k8s-calico--apiserver--846d75cf95--ph74m-" Dec 12 18:48:08.823290 containerd[1579]: 2025-12-12 18:48:07.921 [INFO][3948] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b2982f9544778bf2ca9909716cb84fe398149cf367142cb6d935207882abeaf7" Namespace="calico-apiserver" Pod="calico-apiserver-846d75cf95-ph74m" WorkloadEndpoint="localhost-k8s-calico--apiserver--846d75cf95--ph74m-eth0" Dec 12 18:48:08.823290 containerd[1579]: 2025-12-12 18:48:08.034 [INFO][4001] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b2982f9544778bf2ca9909716cb84fe398149cf367142cb6d935207882abeaf7" HandleID="k8s-pod-network.b2982f9544778bf2ca9909716cb84fe398149cf367142cb6d935207882abeaf7" Workload="localhost-k8s-calico--apiserver--846d75cf95--ph74m-eth0" Dec 12 18:48:08.823725 containerd[1579]: 2025-12-12 18:48:08.034 [INFO][4001] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b2982f9544778bf2ca9909716cb84fe398149cf367142cb6d935207882abeaf7" HandleID="k8s-pod-network.b2982f9544778bf2ca9909716cb84fe398149cf367142cb6d935207882abeaf7" Workload="localhost-k8s-calico--apiserver--846d75cf95--ph74m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000219b00), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-846d75cf95-ph74m", "timestamp":"2025-12-12 18:48:08.034048505 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:48:08.823725 containerd[1579]: 2025-12-12 18:48:08.034 [INFO][4001] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:48:08.823725 containerd[1579]: 2025-12-12 18:48:08.036 [INFO][4001] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 18:48:08.823725 containerd[1579]: 2025-12-12 18:48:08.036 [INFO][4001] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 18:48:08.823725 containerd[1579]: 2025-12-12 18:48:08.278 [INFO][4001] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b2982f9544778bf2ca9909716cb84fe398149cf367142cb6d935207882abeaf7" host="localhost" Dec 12 18:48:08.823725 containerd[1579]: 2025-12-12 18:48:08.311 [INFO][4001] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 18:48:08.823725 containerd[1579]: 2025-12-12 18:48:08.315 [INFO][4001] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 18:48:08.823725 containerd[1579]: 2025-12-12 18:48:08.316 [INFO][4001] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 18:48:08.823725 containerd[1579]: 2025-12-12 18:48:08.318 [INFO][4001] ipam/ipam.go 163: The referenced block doesn't exist, trying to create it cidr=192.168.88.128/26 host="localhost" Dec 12 18:48:08.823725 containerd[1579]: 2025-12-12 18:48:08.321 [INFO][4001] ipam/ipam.go 170: Wrote affinity as pending cidr=192.168.88.128/26 host="localhost" Dec 12 18:48:08.823725 containerd[1579]: 2025-12-12 18:48:08.324 [INFO][4001] ipam/ipam.go 179: Attempting to claim the block cidr=192.168.88.128/26 host="localhost" Dec 12 18:48:08.824057 containerd[1579]: 2025-12-12 18:48:08.324 [INFO][4001] ipam/ipam_block_reader_writer.go 226: Attempting to create a new block affinityType="host" host="localhost" subnet=192.168.88.128/26 Dec 12 18:48:08.824057 containerd[1579]: 2025-12-12 18:48:08.333 [INFO][4001] ipam/ipam_block_reader_writer.go 231: The block already exists, getting it from data store affinityType="host" host="localhost" subnet=192.168.88.128/26 Dec 12 18:48:08.824057 containerd[1579]: 2025-12-12 18:48:08.337 [INFO][4001] ipam/ipam_block_reader_writer.go 247: Block is already claimed by this host, confirm the affinity affinityType="host" host="localhost" subnet=192.168.88.128/26 Dec 12 18:48:08.824057 containerd[1579]: 2025-12-12 18:48:08.337 [INFO][4001] ipam/ipam_block_reader_writer.go 283: Confirming affinity host="localhost" subnet=192.168.88.128/26 Dec 12 18:48:08.824057 containerd[1579]: 2025-12-12 18:48:08.341 [ERROR][4001] ipam/customresource.go 184: Error updating resource Key=BlockAffinity(localhost-192-168-88-128-26) Name="localhost-192-168-88-128-26" Resource="BlockAffinities" Value=&v3.BlockAffinity{TypeMeta:v1.TypeMeta{Kind:"BlockAffinity", APIVersion:"crd.projectcalico.org/v1"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-192-168-88-128-26", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.BlockAffinitySpec{State:"confirmed", Node:"localhost", Type:"host", CIDR:"192.168.88.128/26", Deleted:"false"}} error=Operation cannot be fulfilled on blockaffinities.crd.projectcalico.org "localhost-192-168-88-128-26": the object has been modified; please apply your changes to the latest version and try again Dec 12 18:48:08.824057 containerd[1579]: 2025-12-12 18:48:08.343 [INFO][4001] ipam/ipam_block_reader_writer.go 292: Affinity is already confirmed host="localhost" subnet=192.168.88.128/26 
Dec 12 18:48:08.824289 containerd[1579]: 2025-12-12 18:48:08.343 [INFO][4001] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b2982f9544778bf2ca9909716cb84fe398149cf367142cb6d935207882abeaf7" host="localhost" Dec 12 18:48:08.824289 containerd[1579]: 2025-12-12 18:48:08.345 [INFO][4001] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b2982f9544778bf2ca9909716cb84fe398149cf367142cb6d935207882abeaf7 Dec 12 18:48:08.824289 containerd[1579]: 2025-12-12 18:48:08.353 [INFO][4001] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b2982f9544778bf2ca9909716cb84fe398149cf367142cb6d935207882abeaf7" host="localhost" Dec 12 18:48:08.824391 containerd[1579]: 2025-12-12 18:48:08.356 [ERROR][4001] ipam/customresource.go 184: Error updating resource Key=IPAMBlock(192-168-88-128-26) Name="192-168-88-128-26" Resource="IPAMBlocks" Value=&v3.IPAMBlock{TypeMeta:v1.TypeMeta{Kind:"IPAMBlock", APIVersion:"crd.projectcalico.org/v1"}, ObjectMeta:v1.ObjectMeta{Name:"192-168-88-128-26", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.IPAMBlockSpec{CIDR:"192.168.88.128/26", Affinity:(*string)(0xc0005085b0), Allocations:[]*int{(*int)(0xc0005d4f08), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil)}, Unallocated:[]int{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63}, Attributes:[]v3.AllocationAttribute{v3.AllocationAttribute{AttrPrimary:(*string)(0xc000219b00), AttrSecondary:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-846d75cf95-ph74m", "timestamp":"2025-12-12 18:48:08.034048505 +0000 UTC"}}}, SequenceNumber:0x18808c467de6ec1e, SequenceNumberForAllocation:map[string]uint64{"0":0x18808c467de6ec1d}, Deleted:false, DeprecatedStrictAffinity:false}} error=Operation cannot be fulfilled on ipamblocks.crd.projectcalico.org "192-168-88-128-26": the object has been modified; please apply your changes to the latest version and try again Dec 12 18:48:08.824391 containerd[1579]: 2025-12-12 18:48:08.356 [INFO][4001] ipam/ipam.go 1250: Failed to update block block=192.168.88.128/26 error=update conflict: IPAMBlock(192-168-88-128-26) 
handle="k8s-pod-network.b2982f9544778bf2ca9909716cb84fe398149cf367142cb6d935207882abeaf7" host="localhost" Dec 12 18:48:08.824391 containerd[1579]: 2025-12-12 18:48:08.525 [INFO][4001] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b2982f9544778bf2ca9909716cb84fe398149cf367142cb6d935207882abeaf7" host="localhost" Dec 12 18:48:08.824391 containerd[1579]: 2025-12-12 18:48:08.529 [INFO][4001] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b2982f9544778bf2ca9909716cb84fe398149cf367142cb6d935207882abeaf7 Dec 12 18:48:08.824391 containerd[1579]: 2025-12-12 18:48:08.539 [INFO][4001] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b2982f9544778bf2ca9909716cb84fe398149cf367142cb6d935207882abeaf7" host="localhost" Dec 12 18:48:08.824391 containerd[1579]: 2025-12-12 18:48:08.546 [INFO][4001] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.b2982f9544778bf2ca9909716cb84fe398149cf367142cb6d935207882abeaf7" host="localhost" Dec 12 18:48:08.824391 containerd[1579]: 2025-12-12 18:48:08.546 [INFO][4001] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.b2982f9544778bf2ca9909716cb84fe398149cf367142cb6d935207882abeaf7" host="localhost" Dec 12 18:48:08.824391 containerd[1579]: 2025-12-12 18:48:08.546 [INFO][4001] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 12 18:48:08.824391 containerd[1579]: 2025-12-12 18:48:08.546 [INFO][4001] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="b2982f9544778bf2ca9909716cb84fe398149cf367142cb6d935207882abeaf7" HandleID="k8s-pod-network.b2982f9544778bf2ca9909716cb84fe398149cf367142cb6d935207882abeaf7" Workload="localhost-k8s-calico--apiserver--846d75cf95--ph74m-eth0" Dec 12 18:48:08.824391 containerd[1579]: 2025-12-12 18:48:08.552 [INFO][3948] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b2982f9544778bf2ca9909716cb84fe398149cf367142cb6d935207882abeaf7" Namespace="calico-apiserver" Pod="calico-apiserver-846d75cf95-ph74m" WorkloadEndpoint="localhost-k8s-calico--apiserver--846d75cf95--ph74m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--846d75cf95--ph74m-eth0", GenerateName:"calico-apiserver-846d75cf95-", Namespace:"calico-apiserver", SelfLink:"", UID:"25b4857d-c252-4226-a62a-0019d4b3cac2", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 47, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"846d75cf95", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-846d75cf95-ph74m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", 
"ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali528051c6475", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:48:08.825064 containerd[1579]: 2025-12-12 18:48:08.552 [INFO][3948] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="b2982f9544778bf2ca9909716cb84fe398149cf367142cb6d935207882abeaf7" Namespace="calico-apiserver" Pod="calico-apiserver-846d75cf95-ph74m" WorkloadEndpoint="localhost-k8s-calico--apiserver--846d75cf95--ph74m-eth0" Dec 12 18:48:08.825064 containerd[1579]: 2025-12-12 18:48:08.552 [INFO][3948] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali528051c6475 ContainerID="b2982f9544778bf2ca9909716cb84fe398149cf367142cb6d935207882abeaf7" Namespace="calico-apiserver" Pod="calico-apiserver-846d75cf95-ph74m" WorkloadEndpoint="localhost-k8s-calico--apiserver--846d75cf95--ph74m-eth0" Dec 12 18:48:08.825064 containerd[1579]: 2025-12-12 18:48:08.751 [INFO][3948] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b2982f9544778bf2ca9909716cb84fe398149cf367142cb6d935207882abeaf7" Namespace="calico-apiserver" Pod="calico-apiserver-846d75cf95-ph74m" WorkloadEndpoint="localhost-k8s-calico--apiserver--846d75cf95--ph74m-eth0" Dec 12 18:48:08.825064 containerd[1579]: 2025-12-12 18:48:08.761 [INFO][3948] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b2982f9544778bf2ca9909716cb84fe398149cf367142cb6d935207882abeaf7" Namespace="calico-apiserver" Pod="calico-apiserver-846d75cf95-ph74m" WorkloadEndpoint="localhost-k8s-calico--apiserver--846d75cf95--ph74m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--846d75cf95--ph74m-eth0", GenerateName:"calico-apiserver-846d75cf95-", Namespace:"calico-apiserver", SelfLink:"", UID:"25b4857d-c252-4226-a62a-0019d4b3cac2", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 47, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"846d75cf95", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b2982f9544778bf2ca9909716cb84fe398149cf367142cb6d935207882abeaf7", Pod:"calico-apiserver-846d75cf95-ph74m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali528051c6475", MAC:"fe:fc:44:3e:89:89", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:48:08.825064 containerd[1579]: 2025-12-12 18:48:08.806 [INFO][3948] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b2982f9544778bf2ca9909716cb84fe398149cf367142cb6d935207882abeaf7" Namespace="calico-apiserver" 
Pod="calico-apiserver-846d75cf95-ph74m" WorkloadEndpoint="localhost-k8s-calico--apiserver--846d75cf95--ph74m-eth0" Dec 12 18:48:08.932056 systemd[1]: Created slice kubepods-besteffort-pod1fb0cd74_e8a3_44e5_8349_27377736245d.slice - libcontainer container kubepods-besteffort-pod1fb0cd74_e8a3_44e5_8349_27377736245d.slice. Dec 12 18:48:08.950573 systemd-networkd[1493]: cali4403503d132: Link UP Dec 12 18:48:08.950906 systemd-networkd[1493]: cali4403503d132: Gained carrier Dec 12 18:48:08.980096 containerd[1579]: 2025-12-12 18:48:07.872 [INFO][3985] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 18:48:08.980096 containerd[1579]: 2025-12-12 18:48:07.920 [INFO][3985] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--mln9t-eth0 coredns-668d6bf9bc- kube-system 58c8cdb2-a8ab-4c32-9739-b6eb5595d4d2 834 0 2025-12-12 18:47:25 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-mln9t eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4403503d132 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="6bf06d829d4855bd35458bf8242795fde439548018bad583eea209ea6e89fe2b" Namespace="kube-system" Pod="coredns-668d6bf9bc-mln9t" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mln9t-" Dec 12 18:48:08.980096 containerd[1579]: 2025-12-12 18:48:07.921 [INFO][3985] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6bf06d829d4855bd35458bf8242795fde439548018bad583eea209ea6e89fe2b" Namespace="kube-system" Pod="coredns-668d6bf9bc-mln9t" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mln9t-eth0" Dec 12 18:48:08.980096 containerd[1579]: 2025-12-12 18:48:08.033 [INFO][4007] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6bf06d829d4855bd35458bf8242795fde439548018bad583eea209ea6e89fe2b" HandleID="k8s-pod-network.6bf06d829d4855bd35458bf8242795fde439548018bad583eea209ea6e89fe2b" Workload="localhost-k8s-coredns--668d6bf9bc--mln9t-eth0" Dec 12 18:48:08.980096 containerd[1579]: 2025-12-12 18:48:08.034 [INFO][4007] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6bf06d829d4855bd35458bf8242795fde439548018bad583eea209ea6e89fe2b" HandleID="k8s-pod-network.6bf06d829d4855bd35458bf8242795fde439548018bad583eea209ea6e89fe2b" Workload="localhost-k8s-coredns--668d6bf9bc--mln9t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000173b20), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-mln9t", "timestamp":"2025-12-12 18:48:08.033865732 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:48:08.980096 containerd[1579]: 2025-12-12 18:48:08.034 [INFO][4007] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:48:08.980096 containerd[1579]: 2025-12-12 18:48:08.546 [INFO][4007] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 18:48:08.980096 containerd[1579]: 2025-12-12 18:48:08.547 [INFO][4007] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 18:48:08.980096 containerd[1579]: 2025-12-12 18:48:08.735 [INFO][4007] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6bf06d829d4855bd35458bf8242795fde439548018bad583eea209ea6e89fe2b" host="localhost" Dec 12 18:48:08.980096 containerd[1579]: 2025-12-12 18:48:08.785 [INFO][4007] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 18:48:08.980096 containerd[1579]: 2025-12-12 18:48:08.816 [INFO][4007] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 18:48:08.980096 containerd[1579]: 2025-12-12 18:48:08.821 [INFO][4007] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 18:48:08.980096 containerd[1579]: 2025-12-12 18:48:08.838 [INFO][4007] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 18:48:08.980096 containerd[1579]: 2025-12-12 18:48:08.840 [INFO][4007] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6bf06d829d4855bd35458bf8242795fde439548018bad583eea209ea6e89fe2b" host="localhost" Dec 12 18:48:08.980096 containerd[1579]: 2025-12-12 18:48:08.847 [INFO][4007] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6bf06d829d4855bd35458bf8242795fde439548018bad583eea209ea6e89fe2b Dec 12 18:48:08.980096 containerd[1579]: 2025-12-12 18:48:08.918 [INFO][4007] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6bf06d829d4855bd35458bf8242795fde439548018bad583eea209ea6e89fe2b" host="localhost" Dec 12 18:48:08.980096 containerd[1579]: 2025-12-12 18:48:08.940 [INFO][4007] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.6bf06d829d4855bd35458bf8242795fde439548018bad583eea209ea6e89fe2b" host="localhost" Dec 12 18:48:08.980096 containerd[1579]: 2025-12-12 18:48:08.941 [INFO][4007] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.6bf06d829d4855bd35458bf8242795fde439548018bad583eea209ea6e89fe2b" host="localhost" Dec 12 18:48:08.980096 containerd[1579]: 2025-12-12 18:48:08.941 [INFO][4007] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 18:48:08.980096 containerd[1579]: 2025-12-12 18:48:08.941 [INFO][4007] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="6bf06d829d4855bd35458bf8242795fde439548018bad583eea209ea6e89fe2b" HandleID="k8s-pod-network.6bf06d829d4855bd35458bf8242795fde439548018bad583eea209ea6e89fe2b" Workload="localhost-k8s-coredns--668d6bf9bc--mln9t-eth0" Dec 12 18:48:08.980767 containerd[1579]: 2025-12-12 18:48:08.947 [INFO][3985] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6bf06d829d4855bd35458bf8242795fde439548018bad583eea209ea6e89fe2b" Namespace="kube-system" Pod="coredns-668d6bf9bc-mln9t" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mln9t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--mln9t-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"58c8cdb2-a8ab-4c32-9739-b6eb5595d4d2", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 47, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-mln9t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4403503d132", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:48:08.980767 containerd[1579]: 2025-12-12 18:48:08.947 [INFO][3985] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="6bf06d829d4855bd35458bf8242795fde439548018bad583eea209ea6e89fe2b" Namespace="kube-system" Pod="coredns-668d6bf9bc-mln9t" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mln9t-eth0" Dec 12 18:48:08.980767 containerd[1579]: 2025-12-12 18:48:08.947 [INFO][3985] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4403503d132 ContainerID="6bf06d829d4855bd35458bf8242795fde439548018bad583eea209ea6e89fe2b" Namespace="kube-system" Pod="coredns-668d6bf9bc-mln9t" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mln9t-eth0" Dec 12 18:48:08.980767 containerd[1579]: 2025-12-12 18:48:08.952 [INFO][3985] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6bf06d829d4855bd35458bf8242795fde439548018bad583eea209ea6e89fe2b" Namespace="kube-system" Pod="coredns-668d6bf9bc-mln9t" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mln9t-eth0" Dec 12 18:48:08.980767 
containerd[1579]: 2025-12-12 18:48:08.958 [INFO][3985] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6bf06d829d4855bd35458bf8242795fde439548018bad583eea209ea6e89fe2b" Namespace="kube-system" Pod="coredns-668d6bf9bc-mln9t" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mln9t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--mln9t-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"58c8cdb2-a8ab-4c32-9739-b6eb5595d4d2", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 47, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6bf06d829d4855bd35458bf8242795fde439548018bad583eea209ea6e89fe2b", Pod:"coredns-668d6bf9bc-mln9t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4403503d132", MAC:"56:72:12:8a:8c:90", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:48:08.980767 containerd[1579]: 2025-12-12 18:48:08.970 [INFO][3985] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6bf06d829d4855bd35458bf8242795fde439548018bad583eea209ea6e89fe2b" Namespace="kube-system" Pod="coredns-668d6bf9bc-mln9t" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mln9t-eth0" Dec 12 18:48:09.040440 systemd-networkd[1493]: calic12490a0fa9: Link UP Dec 12 18:48:09.041614 systemd-networkd[1493]: calic12490a0fa9: Gained carrier Dec 12 18:48:09.043410 containerd[1579]: time="2025-12-12T18:48:09.042953739Z" level=info msg="connecting to shim b2982f9544778bf2ca9909716cb84fe398149cf367142cb6d935207882abeaf7" address="unix:///run/containerd/s/615ff741aa5fae13d76242145d0a4682e5a4cf960551f3891844292a799e8982" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:48:09.046382 containerd[1579]: time="2025-12-12T18:48:09.046354027Z" level=info msg="connecting to shim 6bf06d829d4855bd35458bf8242795fde439548018bad583eea209ea6e89fe2b" address="unix:///run/containerd/s/865759e999e20b82f9f8f1936e2a7f1180359b7375f6af75f30a4d06f7ccb807" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:48:09.057624 kubelet[2765]: I1212 18:48:09.057265 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: 
\"kubernetes.io/secret/1fb0cd74-e8a3-44e5-8349-27377736245d-whisker-backend-key-pair\") pod \"whisker-76c8b888c6-r72lf\" (UID: \"1fb0cd74-e8a3-44e5-8349-27377736245d\") " pod="calico-system/whisker-76c8b888c6-r72lf" Dec 12 18:48:09.057624 kubelet[2765]: I1212 18:48:09.057322 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnh79\" (UniqueName: \"kubernetes.io/projected/1fb0cd74-e8a3-44e5-8349-27377736245d-kube-api-access-rnh79\") pod \"whisker-76c8b888c6-r72lf\" (UID: \"1fb0cd74-e8a3-44e5-8349-27377736245d\") " pod="calico-system/whisker-76c8b888c6-r72lf" Dec 12 18:48:09.057624 kubelet[2765]: I1212 18:48:09.057345 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1fb0cd74-e8a3-44e5-8349-27377736245d-whisker-ca-bundle\") pod \"whisker-76c8b888c6-r72lf\" (UID: \"1fb0cd74-e8a3-44e5-8349-27377736245d\") " pod="calico-system/whisker-76c8b888c6-r72lf" Dec 12 18:48:09.074088 systemd[1]: Started cri-containerd-b2982f9544778bf2ca9909716cb84fe398149cf367142cb6d935207882abeaf7.scope - libcontainer container b2982f9544778bf2ca9909716cb84fe398149cf367142cb6d935207882abeaf7. Dec 12 18:48:09.079205 systemd[1]: Started cri-containerd-6bf06d829d4855bd35458bf8242795fde439548018bad583eea209ea6e89fe2b.scope - libcontainer container 6bf06d829d4855bd35458bf8242795fde439548018bad583eea209ea6e89fe2b. Dec 12 18:48:09.090309 systemd-resolved[1394]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 18:48:09.093621 systemd-resolved[1394]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 18:48:09.123043 containerd[1579]: 2025-12-12 18:48:08.825 [INFO][4061] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 18:48:09.123043 containerd[1579]: 2025-12-12 18:48:08.921 [INFO][4061] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7cdf74c998--54c4h-eth0 calico-kube-controllers-7cdf74c998- calico-system bbf4b148-f944-43dd-959c-c1fed4f278a2 837 0 2025-12-12 18:47:43 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7cdf74c998 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7cdf74c998-54c4h eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic12490a0fa9 [] [] }} ContainerID="c8f1a2c0e304b251a22748b936aa1b458e7755d590197d5add4b2a64d436506a" Namespace="calico-system" Pod="calico-kube-controllers-7cdf74c998-54c4h" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cdf74c998--54c4h-" Dec 12 18:48:09.123043 containerd[1579]: 2025-12-12 18:48:08.921 [INFO][4061] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c8f1a2c0e304b251a22748b936aa1b458e7755d590197d5add4b2a64d436506a" Namespace="calico-system" Pod="calico-kube-controllers-7cdf74c998-54c4h" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cdf74c998--54c4h-eth0" Dec 12 18:48:09.123043 containerd[1579]: 2025-12-12 18:48:08.979 [INFO][4105] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c8f1a2c0e304b251a22748b936aa1b458e7755d590197d5add4b2a64d436506a" 
HandleID="k8s-pod-network.c8f1a2c0e304b251a22748b936aa1b458e7755d590197d5add4b2a64d436506a" Workload="localhost-k8s-calico--kube--controllers--7cdf74c998--54c4h-eth0" Dec 12 18:48:09.123043 containerd[1579]: 2025-12-12 18:48:08.981 [INFO][4105] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c8f1a2c0e304b251a22748b936aa1b458e7755d590197d5add4b2a64d436506a" HandleID="k8s-pod-network.c8f1a2c0e304b251a22748b936aa1b458e7755d590197d5add4b2a64d436506a" Workload="localhost-k8s-calico--kube--controllers--7cdf74c998--54c4h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00035f6a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7cdf74c998-54c4h", "timestamp":"2025-12-12 18:48:08.979534656 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:48:09.123043 containerd[1579]: 2025-12-12 18:48:08.981 [INFO][4105] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:48:09.123043 containerd[1579]: 2025-12-12 18:48:08.981 [INFO][4105] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 18:48:09.123043 containerd[1579]: 2025-12-12 18:48:08.981 [INFO][4105] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 18:48:09.123043 containerd[1579]: 2025-12-12 18:48:08.998 [INFO][4105] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c8f1a2c0e304b251a22748b936aa1b458e7755d590197d5add4b2a64d436506a" host="localhost" Dec 12 18:48:09.123043 containerd[1579]: 2025-12-12 18:48:09.003 [INFO][4105] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 18:48:09.123043 containerd[1579]: 2025-12-12 18:48:09.008 [INFO][4105] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 18:48:09.123043 containerd[1579]: 2025-12-12 18:48:09.010 [INFO][4105] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 18:48:09.123043 containerd[1579]: 2025-12-12 18:48:09.012 [INFO][4105] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 18:48:09.123043 containerd[1579]: 2025-12-12 18:48:09.012 [INFO][4105] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c8f1a2c0e304b251a22748b936aa1b458e7755d590197d5add4b2a64d436506a" host="localhost" Dec 12 18:48:09.123043 containerd[1579]: 2025-12-12 18:48:09.014 [INFO][4105] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c8f1a2c0e304b251a22748b936aa1b458e7755d590197d5add4b2a64d436506a Dec 12 18:48:09.123043 containerd[1579]: 2025-12-12 18:48:09.024 [INFO][4105] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c8f1a2c0e304b251a22748b936aa1b458e7755d590197d5add4b2a64d436506a" host="localhost" Dec 12 18:48:09.123043 containerd[1579]: 2025-12-12 18:48:09.031 [INFO][4105] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.c8f1a2c0e304b251a22748b936aa1b458e7755d590197d5add4b2a64d436506a" host="localhost" Dec 12 18:48:09.123043 containerd[1579]: 2025-12-12 18:48:09.031 [INFO][4105] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] 
handle="k8s-pod-network.c8f1a2c0e304b251a22748b936aa1b458e7755d590197d5add4b2a64d436506a" host="localhost" Dec 12 18:48:09.123043 containerd[1579]: 2025-12-12 18:48:09.031 [INFO][4105] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 12 18:48:09.123043 containerd[1579]: 2025-12-12 18:48:09.031 [INFO][4105] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="c8f1a2c0e304b251a22748b936aa1b458e7755d590197d5add4b2a64d436506a" HandleID="k8s-pod-network.c8f1a2c0e304b251a22748b936aa1b458e7755d590197d5add4b2a64d436506a" Workload="localhost-k8s-calico--kube--controllers--7cdf74c998--54c4h-eth0" Dec 12 18:48:09.123956 containerd[1579]: 2025-12-12 18:48:09.036 [INFO][4061] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c8f1a2c0e304b251a22748b936aa1b458e7755d590197d5add4b2a64d436506a" Namespace="calico-system" Pod="calico-kube-controllers-7cdf74c998-54c4h" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cdf74c998--54c4h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7cdf74c998--54c4h-eth0", GenerateName:"calico-kube-controllers-7cdf74c998-", Namespace:"calico-system", SelfLink:"", UID:"bbf4b148-f944-43dd-959c-c1fed4f278a2", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 47, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7cdf74c998", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7cdf74c998-54c4h", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic12490a0fa9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:48:09.123956 containerd[1579]: 2025-12-12 18:48:09.037 [INFO][4061] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="c8f1a2c0e304b251a22748b936aa1b458e7755d590197d5add4b2a64d436506a" Namespace="calico-system" Pod="calico-kube-controllers-7cdf74c998-54c4h" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cdf74c998--54c4h-eth0" Dec 12 18:48:09.123956 containerd[1579]: 2025-12-12 18:48:09.037 [INFO][4061] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic12490a0fa9 ContainerID="c8f1a2c0e304b251a22748b936aa1b458e7755d590197d5add4b2a64d436506a" Namespace="calico-system" Pod="calico-kube-controllers-7cdf74c998-54c4h" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cdf74c998--54c4h-eth0" Dec 12 18:48:09.123956 containerd[1579]: 2025-12-12 18:48:09.040 [INFO][4061] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c8f1a2c0e304b251a22748b936aa1b458e7755d590197d5add4b2a64d436506a" Namespace="calico-system" 
Pod="calico-kube-controllers-7cdf74c998-54c4h" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cdf74c998--54c4h-eth0" Dec 12 18:48:09.123956 containerd[1579]: 2025-12-12 18:48:09.041 [INFO][4061] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c8f1a2c0e304b251a22748b936aa1b458e7755d590197d5add4b2a64d436506a" Namespace="calico-system" Pod="calico-kube-controllers-7cdf74c998-54c4h" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cdf74c998--54c4h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7cdf74c998--54c4h-eth0", GenerateName:"calico-kube-controllers-7cdf74c998-", Namespace:"calico-system", SelfLink:"", UID:"bbf4b148-f944-43dd-959c-c1fed4f278a2", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 47, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7cdf74c998", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c8f1a2c0e304b251a22748b936aa1b458e7755d590197d5add4b2a64d436506a", Pod:"calico-kube-controllers-7cdf74c998-54c4h", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic12490a0fa9", MAC:"0e:93:f9:71:21:8f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:48:09.123956 containerd[1579]: 2025-12-12 18:48:09.120 [INFO][4061] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c8f1a2c0e304b251a22748b936aa1b458e7755d590197d5add4b2a64d436506a" Namespace="calico-system" Pod="calico-kube-controllers-7cdf74c998-54c4h" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cdf74c998--54c4h-eth0" Dec 12 18:48:09.150238 containerd[1579]: time="2025-12-12T18:48:09.150192809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mln9t,Uid:58c8cdb2-a8ab-4c32-9739-b6eb5595d4d2,Namespace:kube-system,Attempt:0,} returns sandbox id \"6bf06d829d4855bd35458bf8242795fde439548018bad583eea209ea6e89fe2b\"" Dec 12 18:48:09.151226 kubelet[2765]: E1212 18:48:09.151144 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:48:09.155426 containerd[1579]: time="2025-12-12T18:48:09.154176120Z" level=info msg="CreateContainer within sandbox \"6bf06d829d4855bd35458bf8242795fde439548018bad583eea209ea6e89fe2b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 18:48:09.157442 containerd[1579]: time="2025-12-12T18:48:09.157363288Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-846d75cf95-ph74m,Uid:25b4857d-c252-4226-a62a-0019d4b3cac2,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"b2982f9544778bf2ca9909716cb84fe398149cf367142cb6d935207882abeaf7\"" Dec 12 18:48:09.169966 containerd[1579]: time="2025-12-12T18:48:09.169783586Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:48:09.183557 systemd-networkd[1493]: cali87c8c373384: Link UP Dec 12 18:48:09.184997 systemd[1]: Started sshd@9-10.0.0.117:22-10.0.0.1:57864.service - OpenSSH per-connection server daemon (10.0.0.1:57864). Dec 12 18:48:09.186615 systemd-networkd[1493]: cali87c8c373384: Gained carrier Dec 12 18:48:09.193306 containerd[1579]: time="2025-12-12T18:48:09.193232031Z" level=info msg="connecting to shim c8f1a2c0e304b251a22748b936aa1b458e7755d590197d5add4b2a64d436506a" address="unix:///run/containerd/s/9c3642eb0d5c6fc30eadd93b250809e6e66ac65acbad68f815e7bdefdf979d07" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:48:09.198380 containerd[1579]: time="2025-12-12T18:48:09.198268629Z" level=info msg="Container d728debca3adcf620420b88354f0996c3b9445ee66ca099412d8763fb4bd76e1: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:48:09.207087 containerd[1579]: 2025-12-12 18:48:08.831 [INFO][4072] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 18:48:09.207087 containerd[1579]: 2025-12-12 18:48:08.863 [INFO][4072] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--vxcm2-eth0 csi-node-driver- calico-system 5fa2bd70-6779-4823-84fb-43f19b5a18cb 708 0 2025-12-12 18:47:43 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-vxcm2 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali87c8c373384 [] [] }} ContainerID="06cebfe31fb26ce0bf95cdf0d3641a66f505eda88b2691ec4694f9bd58338136" Namespace="calico-system" Pod="csi-node-driver-vxcm2" WorkloadEndpoint="localhost-k8s-csi--node--driver--vxcm2-" Dec 12 18:48:09.207087 containerd[1579]: 2025-12-12 18:48:08.863 [INFO][4072] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="06cebfe31fb26ce0bf95cdf0d3641a66f505eda88b2691ec4694f9bd58338136" Namespace="calico-system" Pod="csi-node-driver-vxcm2" WorkloadEndpoint="localhost-k8s-csi--node--driver--vxcm2-eth0" Dec 12 18:48:09.207087 containerd[1579]: 2025-12-12 18:48:08.988 [INFO][4099] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="06cebfe31fb26ce0bf95cdf0d3641a66f505eda88b2691ec4694f9bd58338136" HandleID="k8s-pod-network.06cebfe31fb26ce0bf95cdf0d3641a66f505eda88b2691ec4694f9bd58338136" Workload="localhost-k8s-csi--node--driver--vxcm2-eth0" Dec 12 18:48:09.207087 containerd[1579]: 2025-12-12 18:48:08.988 [INFO][4099] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="06cebfe31fb26ce0bf95cdf0d3641a66f505eda88b2691ec4694f9bd58338136" HandleID="k8s-pod-network.06cebfe31fb26ce0bf95cdf0d3641a66f505eda88b2691ec4694f9bd58338136" Workload="localhost-k8s-csi--node--driver--vxcm2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e0f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-vxcm2", 
"timestamp":"2025-12-12 18:48:08.988613055 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:48:09.207087 containerd[1579]: 2025-12-12 18:48:08.988 [INFO][4099] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:48:09.207087 containerd[1579]: 2025-12-12 18:48:09.031 [INFO][4099] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 18:48:09.207087 containerd[1579]: 2025-12-12 18:48:09.031 [INFO][4099] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 18:48:09.207087 containerd[1579]: 2025-12-12 18:48:09.110 [INFO][4099] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.06cebfe31fb26ce0bf95cdf0d3641a66f505eda88b2691ec4694f9bd58338136" host="localhost" Dec 12 18:48:09.207087 containerd[1579]: 2025-12-12 18:48:09.119 [INFO][4099] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 18:48:09.207087 containerd[1579]: 2025-12-12 18:48:09.129 [INFO][4099] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 18:48:09.207087 containerd[1579]: 2025-12-12 18:48:09.132 [INFO][4099] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 18:48:09.207087 containerd[1579]: 2025-12-12 18:48:09.136 [INFO][4099] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 18:48:09.207087 containerd[1579]: 2025-12-12 18:48:09.136 [INFO][4099] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.06cebfe31fb26ce0bf95cdf0d3641a66f505eda88b2691ec4694f9bd58338136" host="localhost" Dec 12 18:48:09.207087 containerd[1579]: 2025-12-12 18:48:09.140 [INFO][4099] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.06cebfe31fb26ce0bf95cdf0d3641a66f505eda88b2691ec4694f9bd58338136 Dec 12 18:48:09.207087 containerd[1579]: 2025-12-12 18:48:09.151 [INFO][4099] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.06cebfe31fb26ce0bf95cdf0d3641a66f505eda88b2691ec4694f9bd58338136" host="localhost" Dec 12 18:48:09.207087 containerd[1579]: 2025-12-12 18:48:09.160 [INFO][4099] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.06cebfe31fb26ce0bf95cdf0d3641a66f505eda88b2691ec4694f9bd58338136" host="localhost" Dec 12 18:48:09.207087 containerd[1579]: 2025-12-12 18:48:09.160 [INFO][4099] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.06cebfe31fb26ce0bf95cdf0d3641a66f505eda88b2691ec4694f9bd58338136" host="localhost" Dec 12 18:48:09.207087 containerd[1579]: 2025-12-12 18:48:09.161 [INFO][4099] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 18:48:09.207087 containerd[1579]: 2025-12-12 18:48:09.161 [INFO][4099] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="06cebfe31fb26ce0bf95cdf0d3641a66f505eda88b2691ec4694f9bd58338136" HandleID="k8s-pod-network.06cebfe31fb26ce0bf95cdf0d3641a66f505eda88b2691ec4694f9bd58338136" Workload="localhost-k8s-csi--node--driver--vxcm2-eth0" Dec 12 18:48:09.208401 containerd[1579]: 2025-12-12 18:48:09.168 [INFO][4072] cni-plugin/k8s.go 418: Populated endpoint ContainerID="06cebfe31fb26ce0bf95cdf0d3641a66f505eda88b2691ec4694f9bd58338136" Namespace="calico-system" Pod="csi-node-driver-vxcm2" WorkloadEndpoint="localhost-k8s-csi--node--driver--vxcm2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--vxcm2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5fa2bd70-6779-4823-84fb-43f19b5a18cb", ResourceVersion:"708", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 47, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-vxcm2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali87c8c373384", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:48:09.208401 containerd[1579]: 2025-12-12 18:48:09.170 [INFO][4072] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="06cebfe31fb26ce0bf95cdf0d3641a66f505eda88b2691ec4694f9bd58338136" Namespace="calico-system" Pod="csi-node-driver-vxcm2" WorkloadEndpoint="localhost-k8s-csi--node--driver--vxcm2-eth0" Dec 12 18:48:09.208401 containerd[1579]: 2025-12-12 18:48:09.170 [INFO][4072] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali87c8c373384 ContainerID="06cebfe31fb26ce0bf95cdf0d3641a66f505eda88b2691ec4694f9bd58338136" Namespace="calico-system" Pod="csi-node-driver-vxcm2" WorkloadEndpoint="localhost-k8s-csi--node--driver--vxcm2-eth0" Dec 12 18:48:09.208401 containerd[1579]: 2025-12-12 18:48:09.186 [INFO][4072] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="06cebfe31fb26ce0bf95cdf0d3641a66f505eda88b2691ec4694f9bd58338136" Namespace="calico-system" Pod="csi-node-driver-vxcm2" WorkloadEndpoint="localhost-k8s-csi--node--driver--vxcm2-eth0" Dec 12 18:48:09.208401 containerd[1579]: 2025-12-12 18:48:09.187 [INFO][4072] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="06cebfe31fb26ce0bf95cdf0d3641a66f505eda88b2691ec4694f9bd58338136" Namespace="calico-system" Pod="csi-node-driver-vxcm2" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--vxcm2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--vxcm2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5fa2bd70-6779-4823-84fb-43f19b5a18cb", ResourceVersion:"708", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 47, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"06cebfe31fb26ce0bf95cdf0d3641a66f505eda88b2691ec4694f9bd58338136", Pod:"csi-node-driver-vxcm2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali87c8c373384", MAC:"52:ee:5d:8a:ec:20", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:48:09.208401 containerd[1579]: 2025-12-12 18:48:09.199 [INFO][4072] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="06cebfe31fb26ce0bf95cdf0d3641a66f505eda88b2691ec4694f9bd58338136" Namespace="calico-system" Pod="csi-node-driver-vxcm2" WorkloadEndpoint="localhost-k8s-csi--node--driver--vxcm2-eth0" Dec 12 18:48:09.220045 containerd[1579]: time="2025-12-12T18:48:09.219991257Z" level=info msg="CreateContainer within sandbox \"6bf06d829d4855bd35458bf8242795fde439548018bad583eea209ea6e89fe2b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d728debca3adcf620420b88354f0996c3b9445ee66ca099412d8763fb4bd76e1\"" Dec 12 18:48:09.223667 containerd[1579]: time="2025-12-12T18:48:09.223555663Z" level=info msg="StartContainer for \"d728debca3adcf620420b88354f0996c3b9445ee66ca099412d8763fb4bd76e1\"" Dec 12 18:48:09.224816 containerd[1579]: time="2025-12-12T18:48:09.224781082Z" level=info msg="connecting to shim d728debca3adcf620420b88354f0996c3b9445ee66ca099412d8763fb4bd76e1" address="unix:///run/containerd/s/865759e999e20b82f9f8f1936e2a7f1180359b7375f6af75f30a4d06f7ccb807" protocol=ttrpc version=3 Dec 12 18:48:09.239431 containerd[1579]: time="2025-12-12T18:48:09.239320825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-76c8b888c6-r72lf,Uid:1fb0cd74-e8a3-44e5-8349-27377736245d,Namespace:calico-system,Attempt:0,}" Dec 12 18:48:09.244208 systemd[1]: Started cri-containerd-c8f1a2c0e304b251a22748b936aa1b458e7755d590197d5add4b2a64d436506a.scope - libcontainer container c8f1a2c0e304b251a22748b936aa1b458e7755d590197d5add4b2a64d436506a. 
Dec 12 18:48:09.257076 containerd[1579]: time="2025-12-12T18:48:09.256998114Z" level=info msg="connecting to shim 06cebfe31fb26ce0bf95cdf0d3641a66f505eda88b2691ec4694f9bd58338136" address="unix:///run/containerd/s/5958bc5ca09b772c3ca6784efa3ceb491e6942337202576722b582d310f79fe0" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:48:09.270069 systemd[1]: Started cri-containerd-d728debca3adcf620420b88354f0996c3b9445ee66ca099412d8763fb4bd76e1.scope - libcontainer container d728debca3adcf620420b88354f0996c3b9445ee66ca099412d8763fb4bd76e1. Dec 12 18:48:09.277283 sshd[4230]: Accepted publickey for core from 10.0.0.1 port 57864 ssh2: RSA SHA256:Jfli01egnEuwhQmCMAJ3v0pvzQgHZ6Pm5V0wlrgQH5U Dec 12 18:48:09.279030 sshd-session[4230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:48:09.284285 systemd-resolved[1394]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 18:48:09.292879 systemd[1]: Started cri-containerd-06cebfe31fb26ce0bf95cdf0d3641a66f505eda88b2691ec4694f9bd58338136.scope - libcontainer container 06cebfe31fb26ce0bf95cdf0d3641a66f505eda88b2691ec4694f9bd58338136. Dec 12 18:48:09.298701 systemd-logind[1562]: New session 10 of user core. Dec 12 18:48:09.303818 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 12 18:48:09.329035 systemd-resolved[1394]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 18:48:09.377021 containerd[1579]: time="2025-12-12T18:48:09.376863509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vxcm2,Uid:5fa2bd70-6779-4823-84fb-43f19b5a18cb,Namespace:calico-system,Attempt:0,} returns sandbox id \"06cebfe31fb26ce0bf95cdf0d3641a66f505eda88b2691ec4694f9bd58338136\"" Dec 12 18:48:09.378244 containerd[1579]: time="2025-12-12T18:48:09.378055174Z" level=info msg="StartContainer for \"d728debca3adcf620420b88354f0996c3b9445ee66ca099412d8763fb4bd76e1\" returns successfully" Dec 12 18:48:09.379933 containerd[1579]: time="2025-12-12T18:48:09.379903511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cdf74c998-54c4h,Uid:bbf4b148-f944-43dd-959c-c1fed4f278a2,Namespace:calico-system,Attempt:0,} returns sandbox id \"c8f1a2c0e304b251a22748b936aa1b458e7755d590197d5add4b2a64d436506a\"" Dec 12 18:48:09.475802 systemd-networkd[1493]: calia4f17315649: Link UP Dec 12 18:48:09.478130 systemd-networkd[1493]: calia4f17315649: Gained carrier Dec 12 18:48:09.548578 containerd[1579]: time="2025-12-12T18:48:09.548019113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-b9hfz,Uid:13ccfabf-6529-4c53-843d-bc0433af6501,Namespace:calico-system,Attempt:0,}" Dec 12 18:48:09.550544 containerd[1579]: time="2025-12-12T18:48:09.550502782Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:48:09.551662 kubelet[2765]: E1212 18:48:09.551600 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:48:09.554143 containerd[1579]: time="2025-12-12T18:48:09.554088839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4f5zg,Uid:654900fe-d2aa-4e5a-ada8-5cb594e19e6a,Namespace:kube-system,Attempt:0,}" Dec 12 18:48:09.555991 kubelet[2765]: I1212 18:48:09.555753 2765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65ec234c-cc41-4edc-bf98-a4385884a7fa" 
path="/var/lib/kubelet/pods/65ec234c-cc41-4edc-bf98-a4385884a7fa/volumes" Dec 12 18:48:09.581009 containerd[1579]: time="2025-12-12T18:48:09.580847984Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 18:48:09.597223 containerd[1579]: time="2025-12-12T18:48:09.597124155Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:48:09.598039 kubelet[2765]: E1212 18:48:09.597990 2765 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:48:09.598506 kubelet[2765]: E1212 18:48:09.598471 2765 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:48:09.603617 containerd[1579]: 2025-12-12 18:48:09.301 [INFO][4296] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 18:48:09.603617 containerd[1579]: 2025-12-12 18:48:09.329 [INFO][4296] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--76c8b888c6--r72lf-eth0 whisker-76c8b888c6- calico-system 1fb0cd74-e8a3-44e5-8349-27377736245d 957 0 2025-12-12 18:48:08 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:76c8b888c6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-76c8b888c6-r72lf eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calia4f17315649 [] [] }} ContainerID="da3e0a9de8bb000c10c1533e5348ec4d1e1a54c7e8d7712e6388225c1fe6049d" Namespace="calico-system" Pod="whisker-76c8b888c6-r72lf" WorkloadEndpoint="localhost-k8s-whisker--76c8b888c6--r72lf-" Dec 12 18:48:09.603617 containerd[1579]: 2025-12-12 18:48:09.329 [INFO][4296] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="da3e0a9de8bb000c10c1533e5348ec4d1e1a54c7e8d7712e6388225c1fe6049d" Namespace="calico-system" Pod="whisker-76c8b888c6-r72lf" WorkloadEndpoint="localhost-k8s-whisker--76c8b888c6--r72lf-eth0" Dec 12 18:48:09.603617 containerd[1579]: 2025-12-12 18:48:09.379 [INFO][4341] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="da3e0a9de8bb000c10c1533e5348ec4d1e1a54c7e8d7712e6388225c1fe6049d" HandleID="k8s-pod-network.da3e0a9de8bb000c10c1533e5348ec4d1e1a54c7e8d7712e6388225c1fe6049d" Workload="localhost-k8s-whisker--76c8b888c6--r72lf-eth0" Dec 12 18:48:09.603617 containerd[1579]: 2025-12-12 18:48:09.383 [INFO][4341] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="da3e0a9de8bb000c10c1533e5348ec4d1e1a54c7e8d7712e6388225c1fe6049d" HandleID="k8s-pod-network.da3e0a9de8bb000c10c1533e5348ec4d1e1a54c7e8d7712e6388225c1fe6049d" 
Workload="localhost-k8s-whisker--76c8b888c6--r72lf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000123720), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-76c8b888c6-r72lf", "timestamp":"2025-12-12 18:48:09.379911746 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:48:09.603617 containerd[1579]: 2025-12-12 18:48:09.383 [INFO][4341] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:48:09.603617 containerd[1579]: 2025-12-12 18:48:09.383 [INFO][4341] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 18:48:09.603617 containerd[1579]: 2025-12-12 18:48:09.383 [INFO][4341] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 18:48:09.603617 containerd[1579]: 2025-12-12 18:48:09.394 [INFO][4341] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.da3e0a9de8bb000c10c1533e5348ec4d1e1a54c7e8d7712e6388225c1fe6049d" host="localhost" Dec 12 18:48:09.603617 containerd[1579]: 2025-12-12 18:48:09.408 [INFO][4341] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 18:48:09.603617 containerd[1579]: 2025-12-12 18:48:09.418 [INFO][4341] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 18:48:09.603617 containerd[1579]: 2025-12-12 18:48:09.422 [INFO][4341] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 18:48:09.603617 containerd[1579]: 2025-12-12 18:48:09.426 [INFO][4341] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 18:48:09.603617 containerd[1579]: 2025-12-12 18:48:09.426 [INFO][4341] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.da3e0a9de8bb000c10c1533e5348ec4d1e1a54c7e8d7712e6388225c1fe6049d" host="localhost" Dec 12 18:48:09.603617 containerd[1579]: 2025-12-12 18:48:09.430 [INFO][4341] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.da3e0a9de8bb000c10c1533e5348ec4d1e1a54c7e8d7712e6388225c1fe6049d Dec 12 18:48:09.603617 containerd[1579]: 2025-12-12 18:48:09.442 [INFO][4341] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.da3e0a9de8bb000c10c1533e5348ec4d1e1a54c7e8d7712e6388225c1fe6049d" host="localhost" Dec 12 18:48:09.603617 containerd[1579]: 2025-12-12 18:48:09.455 [INFO][4341] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.da3e0a9de8bb000c10c1533e5348ec4d1e1a54c7e8d7712e6388225c1fe6049d" host="localhost" Dec 12 18:48:09.603617 containerd[1579]: 2025-12-12 18:48:09.455 [INFO][4341] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.da3e0a9de8bb000c10c1533e5348ec4d1e1a54c7e8d7712e6388225c1fe6049d" host="localhost" Dec 12 18:48:09.603617 containerd[1579]: 2025-12-12 18:48:09.456 [INFO][4341] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 18:48:09.603617 containerd[1579]: 2025-12-12 18:48:09.456 [INFO][4341] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="da3e0a9de8bb000c10c1533e5348ec4d1e1a54c7e8d7712e6388225c1fe6049d" HandleID="k8s-pod-network.da3e0a9de8bb000c10c1533e5348ec4d1e1a54c7e8d7712e6388225c1fe6049d" Workload="localhost-k8s-whisker--76c8b888c6--r72lf-eth0" Dec 12 18:48:09.604471 containerd[1579]: 2025-12-12 18:48:09.467 [INFO][4296] cni-plugin/k8s.go 418: Populated endpoint ContainerID="da3e0a9de8bb000c10c1533e5348ec4d1e1a54c7e8d7712e6388225c1fe6049d" Namespace="calico-system" Pod="whisker-76c8b888c6-r72lf" WorkloadEndpoint="localhost-k8s-whisker--76c8b888c6--r72lf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--76c8b888c6--r72lf-eth0", GenerateName:"whisker-76c8b888c6-", Namespace:"calico-system", SelfLink:"", UID:"1fb0cd74-e8a3-44e5-8349-27377736245d", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 48, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"76c8b888c6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-76c8b888c6-r72lf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia4f17315649", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:48:09.604471 containerd[1579]: 2025-12-12 18:48:09.468 [INFO][4296] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="da3e0a9de8bb000c10c1533e5348ec4d1e1a54c7e8d7712e6388225c1fe6049d" Namespace="calico-system" Pod="whisker-76c8b888c6-r72lf" WorkloadEndpoint="localhost-k8s-whisker--76c8b888c6--r72lf-eth0" Dec 12 18:48:09.604471 containerd[1579]: 2025-12-12 18:48:09.468 [INFO][4296] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia4f17315649 ContainerID="da3e0a9de8bb000c10c1533e5348ec4d1e1a54c7e8d7712e6388225c1fe6049d" Namespace="calico-system" Pod="whisker-76c8b888c6-r72lf" WorkloadEndpoint="localhost-k8s-whisker--76c8b888c6--r72lf-eth0" Dec 12 18:48:09.604471 containerd[1579]: 2025-12-12 18:48:09.481 [INFO][4296] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="da3e0a9de8bb000c10c1533e5348ec4d1e1a54c7e8d7712e6388225c1fe6049d" Namespace="calico-system" Pod="whisker-76c8b888c6-r72lf" WorkloadEndpoint="localhost-k8s-whisker--76c8b888c6--r72lf-eth0" Dec 12 18:48:09.604471 containerd[1579]: 2025-12-12 18:48:09.482 [INFO][4296] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="da3e0a9de8bb000c10c1533e5348ec4d1e1a54c7e8d7712e6388225c1fe6049d" Namespace="calico-system" Pod="whisker-76c8b888c6-r72lf" WorkloadEndpoint="localhost-k8s-whisker--76c8b888c6--r72lf-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--76c8b888c6--r72lf-eth0", GenerateName:"whisker-76c8b888c6-", Namespace:"calico-system", SelfLink:"", UID:"1fb0cd74-e8a3-44e5-8349-27377736245d", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 48, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"76c8b888c6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"da3e0a9de8bb000c10c1533e5348ec4d1e1a54c7e8d7712e6388225c1fe6049d", Pod:"whisker-76c8b888c6-r72lf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia4f17315649", MAC:"a2:ab:f8:22:31:6e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:48:09.604471 containerd[1579]: 2025-12-12 18:48:09.580 [INFO][4296] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="da3e0a9de8bb000c10c1533e5348ec4d1e1a54c7e8d7712e6388225c1fe6049d" Namespace="calico-system" Pod="whisker-76c8b888c6-r72lf" WorkloadEndpoint="localhost-k8s-whisker--76c8b888c6--r72lf-eth0" Dec 12 18:48:09.604471 containerd[1579]: time="2025-12-12T18:48:09.601337518Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 18:48:09.624895 kubelet[2765]: E1212 18:48:09.624768 2765 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8fk5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-846d75cf95-ph74m_calico-apiserver(25b4857d-c252-4226-a62a-0019d4b3cac2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:48:09.626348 kubelet[2765]: E1212 18:48:09.626284 2765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-846d75cf95-ph74m" podUID="25b4857d-c252-4226-a62a-0019d4b3cac2" Dec 12 18:48:09.660833 sshd[4337]: Connection closed by 10.0.0.1 port 57864 Dec 12 18:48:09.662821 sshd-session[4230]: pam_unix(sshd:session): session closed for user core Dec 12 18:48:09.673476 systemd[1]: sshd@9-10.0.0.117:22-10.0.0.1:57864.service: Deactivated successfully. Dec 12 18:48:09.681478 systemd[1]: session-10.scope: Deactivated successfully. Dec 12 18:48:09.683968 systemd-logind[1562]: Session 10 logged out. Waiting for processes to exit. Dec 12 18:48:09.691133 systemd-logind[1562]: Removed session 10. 
Dec 12 18:48:09.735475 kubelet[2765]: E1212 18:48:09.734842 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:48:09.740556 kubelet[2765]: E1212 18:48:09.739546 2765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-846d75cf95-ph74m" podUID="25b4857d-c252-4226-a62a-0019d4b3cac2" Dec 12 18:48:09.780517 kubelet[2765]: I1212 18:48:09.780439 2765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-mln9t" podStartSLOduration=44.780417287 podStartE2EDuration="44.780417287s" podCreationTimestamp="2025-12-12 18:47:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:48:09.776552298 +0000 UTC m=+50.407026626" watchObservedRunningTime="2025-12-12 18:48:09.780417287 +0000 UTC m=+50.410891615" Dec 12 18:48:09.791108 containerd[1579]: time="2025-12-12T18:48:09.791038661Z" level=info msg="connecting to shim da3e0a9de8bb000c10c1533e5348ec4d1e1a54c7e8d7712e6388225c1fe6049d" address="unix:///run/containerd/s/5bdb42df63df2b721480624362ab487070c2e075f6c6bd4428a98b67ae216669" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:48:09.866162 systemd[1]: Started cri-containerd-da3e0a9de8bb000c10c1533e5348ec4d1e1a54c7e8d7712e6388225c1fe6049d.scope - libcontainer container da3e0a9de8bb000c10c1533e5348ec4d1e1a54c7e8d7712e6388225c1fe6049d. 
Dec 12 18:48:09.915694 systemd-resolved[1394]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 18:48:09.984075 containerd[1579]: time="2025-12-12T18:48:09.983980109Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:48:09.996817 containerd[1579]: time="2025-12-12T18:48:09.996431622Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 18:48:09.996992 containerd[1579]: time="2025-12-12T18:48:09.996883001Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 12 18:48:09.997286 kubelet[2765]: E1212 18:48:09.997207 2765 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:48:09.997415 kubelet[2765]: E1212 18:48:09.997299 2765 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:48:09.998242 kubelet[2765]: E1212 18:48:09.998121 2765 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7hbld,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vxcm2_calico-system(5fa2bd70-6779-4823-84fb-43f19b5a18cb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 18:48:10.001359 containerd[1579]: time="2025-12-12T18:48:09.998570798Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 18:48:10.039698 systemd-networkd[1493]: cali4403503d132: Gained IPv6LL Dec 12 18:48:10.049083 containerd[1579]: time="2025-12-12T18:48:10.048397978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-76c8b888c6-r72lf,Uid:1fb0cd74-e8a3-44e5-8349-27377736245d,Namespace:calico-system,Attempt:0,} returns sandbox id \"da3e0a9de8bb000c10c1533e5348ec4d1e1a54c7e8d7712e6388225c1fe6049d\"" Dec 12 18:48:10.071105 systemd-networkd[1493]: cali3cf2939b1f5: Link UP Dec 12 18:48:10.073557 systemd-networkd[1493]: cali3cf2939b1f5: Gained carrier Dec 12 18:48:10.105726 containerd[1579]: 2025-12-12 18:48:09.712 [INFO][4479] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 18:48:10.105726 containerd[1579]: 2025-12-12 18:48:09.784 [INFO][4479] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--b9hfz-eth0 goldmane-666569f655- calico-system 13ccfabf-6529-4c53-843d-bc0433af6501 836 0 2025-12-12 18:47:40 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-b9hfz eth0 goldmane [] [] [kns.calico-system 
ksa.calico-system.goldmane] cali3cf2939b1f5 [] [] }} ContainerID="bb075371298e5811c53cddc87fdc4dbd59c4576a4f2534ccae0f5a58cb7f1496" Namespace="calico-system" Pod="goldmane-666569f655-b9hfz" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--b9hfz-" Dec 12 18:48:10.105726 containerd[1579]: 2025-12-12 18:48:09.786 [INFO][4479] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bb075371298e5811c53cddc87fdc4dbd59c4576a4f2534ccae0f5a58cb7f1496" Namespace="calico-system" Pod="goldmane-666569f655-b9hfz" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--b9hfz-eth0" Dec 12 18:48:10.105726 containerd[1579]: 2025-12-12 18:48:09.965 [INFO][4545] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bb075371298e5811c53cddc87fdc4dbd59c4576a4f2534ccae0f5a58cb7f1496" HandleID="k8s-pod-network.bb075371298e5811c53cddc87fdc4dbd59c4576a4f2534ccae0f5a58cb7f1496" Workload="localhost-k8s-goldmane--666569f655--b9hfz-eth0" Dec 12 18:48:10.105726 containerd[1579]: 2025-12-12 18:48:09.966 [INFO][4545] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="bb075371298e5811c53cddc87fdc4dbd59c4576a4f2534ccae0f5a58cb7f1496" HandleID="k8s-pod-network.bb075371298e5811c53cddc87fdc4dbd59c4576a4f2534ccae0f5a58cb7f1496" Workload="localhost-k8s-goldmane--666569f655--b9hfz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000497b20), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-b9hfz", "timestamp":"2025-12-12 18:48:09.965793081 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:48:10.105726 containerd[1579]: 2025-12-12 18:48:09.967 [INFO][4545] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:48:10.105726 containerd[1579]: 2025-12-12 18:48:09.967 [INFO][4545] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 18:48:10.105726 containerd[1579]: 2025-12-12 18:48:09.967 [INFO][4545] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 18:48:10.105726 containerd[1579]: 2025-12-12 18:48:09.981 [INFO][4545] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bb075371298e5811c53cddc87fdc4dbd59c4576a4f2534ccae0f5a58cb7f1496" host="localhost" Dec 12 18:48:10.105726 containerd[1579]: 2025-12-12 18:48:09.988 [INFO][4545] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 18:48:10.105726 containerd[1579]: 2025-12-12 18:48:09.995 [INFO][4545] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 18:48:10.105726 containerd[1579]: 2025-12-12 18:48:10.001 [INFO][4545] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 18:48:10.105726 containerd[1579]: 2025-12-12 18:48:10.008 [INFO][4545] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 18:48:10.105726 containerd[1579]: 2025-12-12 18:48:10.010 [INFO][4545] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bb075371298e5811c53cddc87fdc4dbd59c4576a4f2534ccae0f5a58cb7f1496" host="localhost" Dec 12 18:48:10.105726 containerd[1579]: 2025-12-12 18:48:10.016 [INFO][4545] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.bb075371298e5811c53cddc87fdc4dbd59c4576a4f2534ccae0f5a58cb7f1496 Dec 12 18:48:10.105726 containerd[1579]: 2025-12-12 18:48:10.031 [INFO][4545] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bb075371298e5811c53cddc87fdc4dbd59c4576a4f2534ccae0f5a58cb7f1496" host="localhost" Dec 12 18:48:10.105726 containerd[1579]: 2025-12-12 18:48:10.054 [INFO][4545] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.bb075371298e5811c53cddc87fdc4dbd59c4576a4f2534ccae0f5a58cb7f1496" host="localhost" Dec 12 18:48:10.105726 containerd[1579]: 2025-12-12 18:48:10.056 [INFO][4545] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.bb075371298e5811c53cddc87fdc4dbd59c4576a4f2534ccae0f5a58cb7f1496" host="localhost" Dec 12 18:48:10.105726 containerd[1579]: 2025-12-12 18:48:10.056 [INFO][4545] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 18:48:10.105726 containerd[1579]: 2025-12-12 18:48:10.056 [INFO][4545] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="bb075371298e5811c53cddc87fdc4dbd59c4576a4f2534ccae0f5a58cb7f1496" HandleID="k8s-pod-network.bb075371298e5811c53cddc87fdc4dbd59c4576a4f2534ccae0f5a58cb7f1496" Workload="localhost-k8s-goldmane--666569f655--b9hfz-eth0"
Dec 12 18:48:10.106385 containerd[1579]: 2025-12-12 18:48:10.065 [INFO][4479] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bb075371298e5811c53cddc87fdc4dbd59c4576a4f2534ccae0f5a58cb7f1496" Namespace="calico-system" Pod="goldmane-666569f655-b9hfz" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--b9hfz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--b9hfz-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"13ccfabf-6529-4c53-843d-bc0433af6501", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 47, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-b9hfz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3cf2939b1f5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Dec 12 18:48:10.106385 containerd[1579]: 2025-12-12 18:48:10.065 [INFO][4479] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="bb075371298e5811c53cddc87fdc4dbd59c4576a4f2534ccae0f5a58cb7f1496" Namespace="calico-system" Pod="goldmane-666569f655-b9hfz" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--b9hfz-eth0"
Dec 12 18:48:10.106385 containerd[1579]: 2025-12-12 18:48:10.065 [INFO][4479] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3cf2939b1f5 ContainerID="bb075371298e5811c53cddc87fdc4dbd59c4576a4f2534ccae0f5a58cb7f1496" Namespace="calico-system" Pod="goldmane-666569f655-b9hfz" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--b9hfz-eth0"
Dec 12 18:48:10.106385 containerd[1579]: 2025-12-12 18:48:10.076 [INFO][4479] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bb075371298e5811c53cddc87fdc4dbd59c4576a4f2534ccae0f5a58cb7f1496" Namespace="calico-system" Pod="goldmane-666569f655-b9hfz" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--b9hfz-eth0"
Dec 12 18:48:10.106385 containerd[1579]: 2025-12-12 18:48:10.077 [INFO][4479] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bb075371298e5811c53cddc87fdc4dbd59c4576a4f2534ccae0f5a58cb7f1496" Namespace="calico-system" Pod="goldmane-666569f655-b9hfz" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--b9hfz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--b9hfz-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"13ccfabf-6529-4c53-843d-bc0433af6501", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 47, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bb075371298e5811c53cddc87fdc4dbd59c4576a4f2534ccae0f5a58cb7f1496", Pod:"goldmane-666569f655-b9hfz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3cf2939b1f5", MAC:"42:77:f9:11:20:ff", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Dec 12 18:48:10.106385 containerd[1579]: 2025-12-12 18:48:10.096 [INFO][4479] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bb075371298e5811c53cddc87fdc4dbd59c4576a4f2534ccae0f5a58cb7f1496" Namespace="calico-system" Pod="goldmane-666569f655-b9hfz" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--b9hfz-eth0"
Dec 12 18:48:10.160824 containerd[1579]: time="2025-12-12T18:48:10.160540042Z" level=info msg="connecting to shim bb075371298e5811c53cddc87fdc4dbd59c4576a4f2534ccae0f5a58cb7f1496" address="unix:///run/containerd/s/4f128884907ad95e27b917d97e7bad7dddd200855ee721c9b9b60f1bb820957b" namespace=k8s.io protocol=ttrpc version=3
Dec 12 18:48:10.173819 systemd-networkd[1493]: cali528051c6475: Gained IPv6LL
Dec 12 18:48:10.200273 systemd[1]: Started cri-containerd-bb075371298e5811c53cddc87fdc4dbd59c4576a4f2534ccae0f5a58cb7f1496.scope - libcontainer container bb075371298e5811c53cddc87fdc4dbd59c4576a4f2534ccae0f5a58cb7f1496.
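[Editor's note] Once the endpoint is in the datastore, containerd dials the sandbox shim over ttrpc and systemd starts a transient scope unit for the container. As the "Started cri-containerd-....scope" line shows, the unit name is simply a fixed prefix plus the 64-hex container ID. A trivial reconstruction of that naming (for, e.g., building a `systemctl status` target from a container ID):

```go
package main

import "fmt"

// scopeUnit rebuilds the transient unit name visible in the journal:
// "cri-containerd-<containerID>.scope".
func scopeUnit(containerID string) string {
	return "cri-containerd-" + containerID + ".scope"
}

func main() {
	id := "bb075371298e5811c53cddc87fdc4dbd59c4576a4f2534ccae0f5a58cb7f1496"
	fmt.Println(scopeUnit(id)) // the goldmane sandbox's scope unit
}
```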
Dec 12 18:48:10.202665 systemd-networkd[1493]: calie9515da7e35: Link UP
Dec 12 18:48:10.204096 systemd-networkd[1493]: calie9515da7e35: Gained carrier
Dec 12 18:48:10.226986 systemd-resolved[1394]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Dec 12 18:48:10.254567 containerd[1579]: 2025-12-12 18:48:09.727 [INFO][4491] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Dec 12 18:48:10.254567 containerd[1579]: 2025-12-12 18:48:09.804 [INFO][4491] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--4f5zg-eth0 coredns-668d6bf9bc- kube-system 654900fe-d2aa-4e5a-ada8-5cb594e19e6a 838 0 2025-12-12 18:47:25 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-4f5zg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie9515da7e35 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="23714a7d63d4827457bc5cd0d85ded2ecd1e6148b8045a5c593e7384e501131e" Namespace="kube-system" Pod="coredns-668d6bf9bc-4f5zg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4f5zg-"
Dec 12 18:48:10.254567 containerd[1579]: 2025-12-12 18:48:09.804 [INFO][4491] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="23714a7d63d4827457bc5cd0d85ded2ecd1e6148b8045a5c593e7384e501131e" Namespace="kube-system" Pod="coredns-668d6bf9bc-4f5zg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4f5zg-eth0"
Dec 12 18:48:10.254567 containerd[1579]: 2025-12-12 18:48:09.982 [INFO][4575] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="23714a7d63d4827457bc5cd0d85ded2ecd1e6148b8045a5c593e7384e501131e" HandleID="k8s-pod-network.23714a7d63d4827457bc5cd0d85ded2ecd1e6148b8045a5c593e7384e501131e" Workload="localhost-k8s-coredns--668d6bf9bc--4f5zg-eth0"
Dec 12 18:48:10.254567 containerd[1579]: 2025-12-12 18:48:09.982 [INFO][4575] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="23714a7d63d4827457bc5cd0d85ded2ecd1e6148b8045a5c593e7384e501131e" HandleID="k8s-pod-network.23714a7d63d4827457bc5cd0d85ded2ecd1e6148b8045a5c593e7384e501131e" Workload="localhost-k8s-coredns--668d6bf9bc--4f5zg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000446a60), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-4f5zg", "timestamp":"2025-12-12 18:48:09.982007135 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Dec 12 18:48:10.254567 containerd[1579]: 2025-12-12 18:48:09.982 [INFO][4575] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Dec 12 18:48:10.254567 containerd[1579]: 2025-12-12 18:48:10.056 [INFO][4575] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Dec 12 18:48:10.254567 containerd[1579]: 2025-12-12 18:48:10.057 [INFO][4575] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Dec 12 18:48:10.254567 containerd[1579]: 2025-12-12 18:48:10.083 [INFO][4575] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.23714a7d63d4827457bc5cd0d85ded2ecd1e6148b8045a5c593e7384e501131e" host="localhost"
Dec 12 18:48:10.254567 containerd[1579]: 2025-12-12 18:48:10.113 [INFO][4575] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Dec 12 18:48:10.254567 containerd[1579]: 2025-12-12 18:48:10.131 [INFO][4575] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Dec 12 18:48:10.254567 containerd[1579]: 2025-12-12 18:48:10.136 [INFO][4575] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Dec 12 18:48:10.254567 containerd[1579]: 2025-12-12 18:48:10.140 [INFO][4575] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Dec 12 18:48:10.254567 containerd[1579]: 2025-12-12 18:48:10.140 [INFO][4575] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.23714a7d63d4827457bc5cd0d85ded2ecd1e6148b8045a5c593e7384e501131e" host="localhost"
Dec 12 18:48:10.254567 containerd[1579]: 2025-12-12 18:48:10.142 [INFO][4575] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.23714a7d63d4827457bc5cd0d85ded2ecd1e6148b8045a5c593e7384e501131e
Dec 12 18:48:10.254567 containerd[1579]: 2025-12-12 18:48:10.152 [INFO][4575] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.23714a7d63d4827457bc5cd0d85ded2ecd1e6148b8045a5c593e7384e501131e" host="localhost"
Dec 12 18:48:10.254567 containerd[1579]: 2025-12-12 18:48:10.178 [INFO][4575] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.23714a7d63d4827457bc5cd0d85ded2ecd1e6148b8045a5c593e7384e501131e" host="localhost"
Dec 12 18:48:10.254567 containerd[1579]: 2025-12-12 18:48:10.178 [INFO][4575] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.23714a7d63d4827457bc5cd0d85ded2ecd1e6148b8045a5c593e7384e501131e" host="localhost"
Dec 12 18:48:10.254567 containerd[1579]: 2025-12-12 18:48:10.178 [INFO][4575] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
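[Editor's note] Note the timing: handle [4575] (coredns) only starts assigning at 18:48:10.057, immediately after handle [4545] (goldmane) released the host-wide IPAM lock at 10.056. The lock serializes concurrent CNI ADDs on the node, which is why the pods draw consecutive addresses (.134, .135, then .136 below) from the same /26 block. A minimal sketch of that pattern, not Calico's implementation:

```go
package main

import (
	"fmt"
	"net/netip"
	"sync"
)

type blockAllocator struct {
	mu   sync.Mutex // stands in for the "host-wide IPAM lock" in the log
	next netip.Addr // next free address in the block
}

func (a *blockAllocator) assign() netip.Addr {
	a.mu.Lock()         // "Acquired host-wide IPAM lock."
	defer a.mu.Unlock() // "Released host-wide IPAM lock."
	ip := a.next
	a.next = a.next.Next()
	return ip
}

func main() {
	alloc := &blockAllocator{next: netip.MustParseAddr("192.168.88.134")}
	var wg sync.WaitGroup
	for i := 0; i < 3; i++ { // three concurrent CNI ADDs
		wg.Add(1)
		go func() {
			defer wg.Done()
			fmt.Println(alloc.assign())
		}()
	}
	wg.Wait() // prints .134, .135, .136 — each claim fully serialized
}
```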
Dec 12 18:48:10.254567 containerd[1579]: 2025-12-12 18:48:10.178 [INFO][4575] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="23714a7d63d4827457bc5cd0d85ded2ecd1e6148b8045a5c593e7384e501131e" HandleID="k8s-pod-network.23714a7d63d4827457bc5cd0d85ded2ecd1e6148b8045a5c593e7384e501131e" Workload="localhost-k8s-coredns--668d6bf9bc--4f5zg-eth0"
Dec 12 18:48:10.255392 containerd[1579]: 2025-12-12 18:48:10.195 [INFO][4491] cni-plugin/k8s.go 418: Populated endpoint ContainerID="23714a7d63d4827457bc5cd0d85ded2ecd1e6148b8045a5c593e7384e501131e" Namespace="kube-system" Pod="coredns-668d6bf9bc-4f5zg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4f5zg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--4f5zg-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"654900fe-d2aa-4e5a-ada8-5cb594e19e6a", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 47, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-4f5zg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie9515da7e35", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Dec 12 18:48:10.255392 containerd[1579]: 2025-12-12 18:48:10.195 [INFO][4491] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="23714a7d63d4827457bc5cd0d85ded2ecd1e6148b8045a5c593e7384e501131e" Namespace="kube-system" Pod="coredns-668d6bf9bc-4f5zg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4f5zg-eth0"
Dec 12 18:48:10.255392 containerd[1579]: 2025-12-12 18:48:10.195 [INFO][4491] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie9515da7e35 ContainerID="23714a7d63d4827457bc5cd0d85ded2ecd1e6148b8045a5c593e7384e501131e" Namespace="kube-system" Pod="coredns-668d6bf9bc-4f5zg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4f5zg-eth0"
Dec 12 18:48:10.255392 containerd[1579]: 2025-12-12 18:48:10.205 [INFO][4491] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="23714a7d63d4827457bc5cd0d85ded2ecd1e6148b8045a5c593e7384e501131e" Namespace="kube-system" Pod="coredns-668d6bf9bc-4f5zg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4f5zg-eth0"
Dec 12 18:48:10.255392 containerd[1579]: 2025-12-12 18:48:10.206 [INFO][4491] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="23714a7d63d4827457bc5cd0d85ded2ecd1e6148b8045a5c593e7384e501131e" Namespace="kube-system" Pod="coredns-668d6bf9bc-4f5zg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4f5zg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--4f5zg-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"654900fe-d2aa-4e5a-ada8-5cb594e19e6a", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 47, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"23714a7d63d4827457bc5cd0d85ded2ecd1e6148b8045a5c593e7384e501131e", Pod:"coredns-668d6bf9bc-4f5zg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie9515da7e35", MAC:"6a:51:7b:61:c4:b4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Dec 12 18:48:10.255392 containerd[1579]: 2025-12-12 18:48:10.242 [INFO][4491] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="23714a7d63d4827457bc5cd0d85ded2ecd1e6148b8045a5c593e7384e501131e" Namespace="kube-system" Pod="coredns-668d6bf9bc-4f5zg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4f5zg-eth0"
Dec 12 18:48:10.278685 containerd[1579]: time="2025-12-12T18:48:10.278627456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-b9hfz,Uid:13ccfabf-6529-4c53-843d-bc0433af6501,Namespace:calico-system,Attempt:0,} returns sandbox id \"bb075371298e5811c53cddc87fdc4dbd59c4576a4f2534ccae0f5a58cb7f1496\""
Dec 12 18:48:10.311053 containerd[1579]: time="2025-12-12T18:48:10.310981771Z" level=info msg="connecting to shim 23714a7d63d4827457bc5cd0d85ded2ecd1e6148b8045a5c593e7384e501131e" address="unix:///run/containerd/s/22c2f7ea6a119c5eaed2c4f89a1de6d11d095f2f71fa58d66b754e002a5cfccb" namespace=k8s.io protocol=ttrpc version=3
Dec 12 18:48:10.342854 systemd[1]: Started cri-containerd-23714a7d63d4827457bc5cd0d85ded2ecd1e6148b8045a5c593e7384e501131e.scope - libcontainer container 23714a7d63d4827457bc5cd0d85ded2ecd1e6148b8045a5c593e7384e501131e.
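[Editor's note] The coredns WorkloadEndpoint dump above prints its named ports with hex numeric values. Decoded, they are the standard CoreDNS ports:

```go
package main

import "fmt"

func main() {
	// The struct dump renders the numeric port values in hex:
	// Port:0x35 and Port:0x23c1.
	ports := map[string]uint16{
		"dns":     0x35,   // 53/UDP
		"dns-tcp": 0x35,   // 53/TCP
		"metrics": 0x23c1, // 9153/TCP (CoreDNS Prometheus metrics)
	}
	for name, p := range ports {
		fmt.Printf("%s -> %d\n", name, p)
	}
}
```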
Dec 12 18:48:10.359851 systemd-resolved[1394]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Dec 12 18:48:10.404567 containerd[1579]: time="2025-12-12T18:48:10.404523926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4f5zg,Uid:654900fe-d2aa-4e5a-ada8-5cb594e19e6a,Namespace:kube-system,Attempt:0,} returns sandbox id \"23714a7d63d4827457bc5cd0d85ded2ecd1e6148b8045a5c593e7384e501131e\""
Dec 12 18:48:10.405473 kubelet[2765]: E1212 18:48:10.405366 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 12 18:48:10.407314 containerd[1579]: time="2025-12-12T18:48:10.407287040Z" level=info msg="CreateContainer within sandbox \"23714a7d63d4827457bc5cd0d85ded2ecd1e6148b8045a5c593e7384e501131e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 12 18:48:10.407758 containerd[1579]: time="2025-12-12T18:48:10.407631807Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 18:48:10.409718 containerd[1579]: time="2025-12-12T18:48:10.409674580Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Dec 12 18:48:10.409976 containerd[1579]: time="2025-12-12T18:48:10.409850309Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Dec 12 18:48:10.410956 kubelet[2765]: E1212 18:48:10.410856 2765 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Dec 12 18:48:10.410956 kubelet[2765]: E1212 18:48:10.410914 2765 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Dec 12 18:48:10.411337 kubelet[2765]: E1212 18:48:10.411170 2765 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r57lm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7cdf74c998-54c4h_calico-system(bbf4b148-f944-43dd-959c-c1fed4f278a2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Dec 12 18:48:10.411743 containerd[1579]: time="2025-12-12T18:48:10.411468703Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Dec 12 18:48:10.414637 kubelet[2765]: E1212 18:48:10.414550 2765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cdf74c998-54c4h" podUID="bbf4b148-f944-43dd-959c-c1fed4f278a2"
Dec 12 18:48:10.439614 containerd[1579]: time="2025-12-12T18:48:10.439425178Z" level=info msg="Container ad6baf0939f74e0e8eb53fc6d7c082e95852ba250b805e07b75339f4b97806e6: CDI devices from CRI Config.CDIDevices: []"
Dec 12 18:48:10.446903 systemd-networkd[1493]: vxlan.calico: Link UP
Dec 12 18:48:10.446912 systemd-networkd[1493]: vxlan.calico: Gained carrier
Dec 12 18:48:10.454040 containerd[1579]: time="2025-12-12T18:48:10.453966982Z" level=info msg="CreateContainer within sandbox \"23714a7d63d4827457bc5cd0d85ded2ecd1e6148b8045a5c593e7384e501131e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ad6baf0939f74e0e8eb53fc6d7c082e95852ba250b805e07b75339f4b97806e6\""
Dec 12 18:48:10.456077 containerd[1579]: time="2025-12-12T18:48:10.456047499Z" level=info msg="StartContainer for \"ad6baf0939f74e0e8eb53fc6d7c082e95852ba250b805e07b75339f4b97806e6\""
Dec 12 18:48:10.458049 containerd[1579]: time="2025-12-12T18:48:10.458022101Z" level=info msg="connecting to shim ad6baf0939f74e0e8eb53fc6d7c082e95852ba250b805e07b75339f4b97806e6" address="unix:///run/containerd/s/22c2f7ea6a119c5eaed2c4f89a1de6d11d095f2f71fa58d66b754e002a5cfccb" protocol=ttrpc version=3
Dec 12 18:48:10.480901 systemd[1]: Started cri-containerd-ad6baf0939f74e0e8eb53fc6d7c082e95852ba250b805e07b75339f4b97806e6.scope - libcontainer container ad6baf0939f74e0e8eb53fc6d7c082e95852ba250b805e07b75339f4b97806e6.
Dec 12 18:48:10.535977 containerd[1579]: time="2025-12-12T18:48:10.535891097Z" level=info msg="StartContainer for \"ad6baf0939f74e0e8eb53fc6d7c082e95852ba250b805e07b75339f4b97806e6\" returns successfully"
Dec 12 18:48:10.549676 containerd[1579]: time="2025-12-12T18:48:10.549605245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-846d75cf95-jcxmc,Uid:50a60128-897b-496b-b3a2-5d063bc81d6b,Namespace:calico-apiserver,Attempt:0,}"
Dec 12 18:48:10.678774 systemd-networkd[1493]: calic12490a0fa9: Gained IPv6LL
Dec 12 18:48:10.742925 kubelet[2765]: E1212 18:48:10.742789 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 12 18:48:10.747762 kubelet[2765]: E1212 18:48:10.747734 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 12 18:48:10.748039 kubelet[2765]: E1212 18:48:10.748016 2765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cdf74c998-54c4h" podUID="bbf4b148-f944-43dd-959c-c1fed4f278a2"
Dec 12 18:48:10.748148 kubelet[2765]: E1212 18:48:10.748087 2765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-846d75cf95-ph74m" podUID="25b4857d-c252-4226-a62a-0019d4b3cac2"
Dec 12 18:48:10.775616 containerd[1579]: time="2025-12-12T18:48:10.775424098Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 18:48:10.777305 systemd-networkd[1493]: cali9a45e7f70e0: Link UP
Dec 12 18:48:10.778384 systemd-networkd[1493]: cali9a45e7f70e0: Gained carrier
Dec 12 18:48:10.787869 containerd[1579]: time="2025-12-12T18:48:10.787583804Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Dec 12 18:48:10.787869 containerd[1579]: time="2025-12-12T18:48:10.787768961Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Dec 12 18:48:10.788955 kubelet[2765]: E1212 18:48:10.788457 2765 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Dec 12 18:48:10.788955 kubelet[2765]: E1212 18:48:10.788520 2765 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Dec 12 18:48:10.788955 kubelet[2765]: E1212 18:48:10.788759 2765 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7hbld,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vxcm2_calico-system(5fa2bd70-6779-4823-84fb-43f19b5a18cb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Dec 12 18:48:10.788955 kubelet[2765]: I1212 18:48:10.788802 2765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-4f5zg" podStartSLOduration=45.788573184 podStartE2EDuration="45.788573184s" podCreationTimestamp="2025-12-12 18:47:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:48:10.781724662 +0000 UTC m=+51.412199010" watchObservedRunningTime="2025-12-12 18:48:10.788573184 +0000 UTC m=+51.419047512"
Dec 12 18:48:10.790355 containerd[1579]: time="2025-12-12T18:48:10.789742802Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Dec 12 18:48:10.791788 kubelet[2765]: E1212 18:48:10.790524 2765 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vxcm2" podUID="5fa2bd70-6779-4823-84fb-43f19b5a18cb"
Dec 12 18:48:10.826182 containerd[1579]: 2025-12-12 18:48:10.646 [INFO][4807] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--846d75cf95--jcxmc-eth0 calico-apiserver-846d75cf95- calico-apiserver 50a60128-897b-496b-b3a2-5d063bc81d6b 835 0 2025-12-12 18:47:38 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:846d75cf95 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-846d75cf95-jcxmc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9a45e7f70e0 [] [] }} ContainerID="02a9465efffb090dd691d90efd021c791b8f35760f8dc273be6cec144019d949" Namespace="calico-apiserver" Pod="calico-apiserver-846d75cf95-jcxmc" WorkloadEndpoint="localhost-k8s-calico--apiserver--846d75cf95--jcxmc-"
Dec 12 18:48:10.826182 containerd[1579]: 2025-12-12 18:48:10.646 [INFO][4807] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="02a9465efffb090dd691d90efd021c791b8f35760f8dc273be6cec144019d949" Namespace="calico-apiserver" Pod="calico-apiserver-846d75cf95-jcxmc" WorkloadEndpoint="localhost-k8s-calico--apiserver--846d75cf95--jcxmc-eth0"
Dec 12 18:48:10.826182 containerd[1579]: 2025-12-12 18:48:10.686 [INFO][4819] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="02a9465efffb090dd691d90efd021c791b8f35760f8dc273be6cec144019d949" HandleID="k8s-pod-network.02a9465efffb090dd691d90efd021c791b8f35760f8dc273be6cec144019d949" Workload="localhost-k8s-calico--apiserver--846d75cf95--jcxmc-eth0"
Dec 12 18:48:10.826182 containerd[1579]: 2025-12-12 18:48:10.686 [INFO][4819] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="02a9465efffb090dd691d90efd021c791b8f35760f8dc273be6cec144019d949" HandleID="k8s-pod-network.02a9465efffb090dd691d90efd021c791b8f35760f8dc273be6cec144019d949" Workload="localhost-k8s-calico--apiserver--846d75cf95--jcxmc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e2fd0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-846d75cf95-jcxmc", "timestamp":"2025-12-12 18:48:10.68679132 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Dec 12 18:48:10.826182 containerd[1579]: 2025-12-12 18:48:10.687 [INFO][4819] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Dec 12 18:48:10.826182 containerd[1579]: 2025-12-12 18:48:10.687 [INFO][4819] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Dec 12 18:48:10.826182 containerd[1579]: 2025-12-12 18:48:10.687 [INFO][4819] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Dec 12 18:48:10.826182 containerd[1579]: 2025-12-12 18:48:10.694 [INFO][4819] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.02a9465efffb090dd691d90efd021c791b8f35760f8dc273be6cec144019d949" host="localhost"
Dec 12 18:48:10.826182 containerd[1579]: 2025-12-12 18:48:10.700 [INFO][4819] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Dec 12 18:48:10.826182 containerd[1579]: 2025-12-12 18:48:10.704 [INFO][4819] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Dec 12 18:48:10.826182 containerd[1579]: 2025-12-12 18:48:10.706 [INFO][4819] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Dec 12 18:48:10.826182 containerd[1579]: 2025-12-12 18:48:10.708 [INFO][4819] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Dec 12 18:48:10.826182 containerd[1579]: 2025-12-12 18:48:10.708 [INFO][4819] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.02a9465efffb090dd691d90efd021c791b8f35760f8dc273be6cec144019d949" host="localhost"
Dec 12 18:48:10.826182 containerd[1579]: 2025-12-12 18:48:10.709 [INFO][4819] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.02a9465efffb090dd691d90efd021c791b8f35760f8dc273be6cec144019d949
Dec 12 18:48:10.826182 containerd[1579]: 2025-12-12 18:48:10.756 [INFO][4819] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.02a9465efffb090dd691d90efd021c791b8f35760f8dc273be6cec144019d949" host="localhost"
Dec 12 18:48:10.826182 containerd[1579]: 2025-12-12 18:48:10.768 [INFO][4819] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.02a9465efffb090dd691d90efd021c791b8f35760f8dc273be6cec144019d949" host="localhost"
Dec 12 18:48:10.826182 containerd[1579]: 2025-12-12 18:48:10.768 [INFO][4819] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.02a9465efffb090dd691d90efd021c791b8f35760f8dc273be6cec144019d949" host="localhost"
Dec 12 18:48:10.826182 containerd[1579]: 2025-12-12 18:48:10.768 [INFO][4819] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
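[Editor's note] The kubelet dns.go:153 "Nameserver limits exceeded" warnings repeated above and below come from the classic glibc resolver limit: only the first three nameserver entries in resolv.conf are honored (MAXNS = 3), so kubelet truncates the pod's resolver list and warns. A small check for that condition, assuming the node's /etc/resolv.conf is the file being inspected:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	// Count nameserver lines; glibc only uses the first three (MAXNS = 3),
	// which is why the log says "some nameservers have been omitted".
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > 3 {
		fmt.Printf("%d nameservers configured; only the first 3 apply: %v\n",
			len(servers), servers[:3])
	}
}
```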
Dec 12 18:48:10.826182 containerd[1579]: 2025-12-12 18:48:10.768 [INFO][4819] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="02a9465efffb090dd691d90efd021c791b8f35760f8dc273be6cec144019d949" HandleID="k8s-pod-network.02a9465efffb090dd691d90efd021c791b8f35760f8dc273be6cec144019d949" Workload="localhost-k8s-calico--apiserver--846d75cf95--jcxmc-eth0"
Dec 12 18:48:10.828601 containerd[1579]: 2025-12-12 18:48:10.772 [INFO][4807] cni-plugin/k8s.go 418: Populated endpoint ContainerID="02a9465efffb090dd691d90efd021c791b8f35760f8dc273be6cec144019d949" Namespace="calico-apiserver" Pod="calico-apiserver-846d75cf95-jcxmc" WorkloadEndpoint="localhost-k8s-calico--apiserver--846d75cf95--jcxmc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--846d75cf95--jcxmc-eth0", GenerateName:"calico-apiserver-846d75cf95-", Namespace:"calico-apiserver", SelfLink:"", UID:"50a60128-897b-496b-b3a2-5d063bc81d6b", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 47, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"846d75cf95", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-846d75cf95-jcxmc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9a45e7f70e0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Dec 12 18:48:10.828601 containerd[1579]: 2025-12-12 18:48:10.773 [INFO][4807] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="02a9465efffb090dd691d90efd021c791b8f35760f8dc273be6cec144019d949" Namespace="calico-apiserver" Pod="calico-apiserver-846d75cf95-jcxmc" WorkloadEndpoint="localhost-k8s-calico--apiserver--846d75cf95--jcxmc-eth0"
Dec 12 18:48:10.828601 containerd[1579]: 2025-12-12 18:48:10.773 [INFO][4807] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9a45e7f70e0 ContainerID="02a9465efffb090dd691d90efd021c791b8f35760f8dc273be6cec144019d949" Namespace="calico-apiserver" Pod="calico-apiserver-846d75cf95-jcxmc" WorkloadEndpoint="localhost-k8s-calico--apiserver--846d75cf95--jcxmc-eth0"
Dec 12 18:48:10.828601 containerd[1579]: 2025-12-12 18:48:10.778 [INFO][4807] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="02a9465efffb090dd691d90efd021c791b8f35760f8dc273be6cec144019d949" Namespace="calico-apiserver" Pod="calico-apiserver-846d75cf95-jcxmc" WorkloadEndpoint="localhost-k8s-calico--apiserver--846d75cf95--jcxmc-eth0"
Dec 12 18:48:10.828601 containerd[1579]: 2025-12-12 18:48:10.784 [INFO][4807] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="02a9465efffb090dd691d90efd021c791b8f35760f8dc273be6cec144019d949" Namespace="calico-apiserver" Pod="calico-apiserver-846d75cf95-jcxmc" WorkloadEndpoint="localhost-k8s-calico--apiserver--846d75cf95--jcxmc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--846d75cf95--jcxmc-eth0", GenerateName:"calico-apiserver-846d75cf95-", Namespace:"calico-apiserver", SelfLink:"", UID:"50a60128-897b-496b-b3a2-5d063bc81d6b", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 47, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"846d75cf95", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"02a9465efffb090dd691d90efd021c791b8f35760f8dc273be6cec144019d949", Pod:"calico-apiserver-846d75cf95-jcxmc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9a45e7f70e0", MAC:"c2:23:98:17:47:bf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Dec 12 18:48:10.828601 containerd[1579]: 2025-12-12 18:48:10.816 [INFO][4807] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="02a9465efffb090dd691d90efd021c791b8f35760f8dc273be6cec144019d949" Namespace="calico-apiserver" Pod="calico-apiserver-846d75cf95-jcxmc" WorkloadEndpoint="localhost-k8s-calico--apiserver--846d75cf95--jcxmc-eth0"
Dec 12 18:48:10.871854 systemd-networkd[1493]: cali87c8c373384: Gained IPv6LL
Dec 12 18:48:10.949760 containerd[1579]: time="2025-12-12T18:48:10.948565059Z" level=info msg="connecting to shim 02a9465efffb090dd691d90efd021c791b8f35760f8dc273be6cec144019d949" address="unix:///run/containerd/s/ac360773c79eefe9a2992f001b8b96ced8d8364c0fc5beee3fe0788d56312305" namespace=k8s.io protocol=ttrpc version=3
Dec 12 18:48:10.985228 systemd[1]: Started cri-containerd-02a9465efffb090dd691d90efd021c791b8f35760f8dc273be6cec144019d949.scope - libcontainer container 02a9465efffb090dd691d90efd021c791b8f35760f8dc273be6cec144019d949.
Dec 12 18:48:11.011376 systemd-resolved[1394]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Dec 12 18:48:11.073122 containerd[1579]: time="2025-12-12T18:48:11.073068567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-846d75cf95-jcxmc,Uid:50a60128-897b-496b-b3a2-5d063bc81d6b,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"02a9465efffb090dd691d90efd021c791b8f35760f8dc273be6cec144019d949\""
Dec 12 18:48:11.141435 containerd[1579]: time="2025-12-12T18:48:11.141375267Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 18:48:11.143373 containerd[1579]: time="2025-12-12T18:48:11.143303627Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Dec 12 18:48:11.143543 containerd[1579]: time="2025-12-12T18:48:11.143396706Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Dec 12 18:48:11.143719 kubelet[2765]: E1212 18:48:11.143657 2765 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Dec 12 18:48:11.143810 kubelet[2765]: E1212 18:48:11.143737 2765 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Dec 12 18:48:11.144119 kubelet[2765]: E1212 18:48:11.144039 2765 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:0982576f70ac4c0ba90c28c35eaa8148,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rnh79,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-76c8b888c6-r72lf_calico-system(1fb0cd74-e8a3-44e5-8349-27377736245d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Dec 12 18:48:11.144331 containerd[1579]: time="2025-12-12T18:48:11.144151011Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Dec 12 18:48:11.318804 systemd-networkd[1493]: calia4f17315649: Gained IPv6LL
Dec 12 18:48:11.477846 containerd[1579]: time="2025-12-12T18:48:11.477773468Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 18:48:11.480493 containerd[1579]: time="2025-12-12T18:48:11.480439070Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Dec 12 18:48:11.480572 containerd[1579]: time="2025-12-12T18:48:11.480506962Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Dec 12 18:48:11.480773 kubelet[2765]: E1212 18:48:11.480721 2765 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Dec 12 18:48:11.480841 kubelet[2765]: E1212 18:48:11.480782 2765 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Dec 12 18:48:11.481125 kubelet[2765]: E1212 18:48:11.481054 2765 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vcxpf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-b9hfz_calico-system(13ccfabf-6529-4c53-843d-bc0433af6501): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Dec 12 18:48:11.481296 containerd[1579]: time="2025-12-12T18:48:11.481133400Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Dec 12 18:48:11.482374 kubelet[2765]: E1212 18:48:11.482330 2765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-b9hfz" podUID="13ccfabf-6529-4c53-843d-bc0433af6501"
Dec 12 18:48:11.640102 systemd-networkd[1493]: cali3cf2939b1f5: Gained IPv6LL
Dec 12 18:48:11.754656 kubelet[2765]: E1212 18:48:11.754608 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 12 18:48:11.755942 kubelet[2765]: E1212 18:48:11.755907 2765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-b9hfz" podUID="13ccfabf-6529-4c53-843d-bc0433af6501"
Dec 12 18:48:11.756420 kubelet[2765]: E1212 18:48:11.756360 2765 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vxcm2" podUID="5fa2bd70-6779-4823-84fb-43f19b5a18cb"
Dec 12 18:48:11.799159 containerd[1579]: time="2025-12-12T18:48:11.799022486Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 18:48:11.800572 containerd[1579]: time="2025-12-12T18:48:11.800328686Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Dec 12 18:48:11.800572 containerd[1579]: time="2025-12-12T18:48:11.800408730Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Dec 12 18:48:11.800705 kubelet[2765]: E1212 18:48:11.800562 2765 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 12 18:48:11.800705 kubelet[2765]: E1212 18:48:11.800629 2765 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 12 18:48:11.800991 kubelet[2765]: E1212 18:48:11.800932 2765 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k8rcn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-846d75cf95-jcxmc_calico-apiserver(50a60128-897b-496b-b3a2-5d063bc81d6b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Dec 12 18:48:11.801289 containerd[1579]: time="2025-12-12T18:48:11.801233401Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Dec 12 18:48:11.802542 kubelet[2765]: E1212 18:48:11.802468 2765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-846d75cf95-jcxmc" podUID="50a60128-897b-496b-b3a2-5d063bc81d6b"
Dec 12 18:48:11.894849 systemd-networkd[1493]: calie9515da7e35: Gained IPv6LL
Dec 12 18:48:11.958793 systemd-networkd[1493]: vxlan.calico: Gained IPv6LL
Dec 12 18:48:12.163407 containerd[1579]: time="2025-12-12T18:48:12.163233799Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 18:48:12.164528 containerd[1579]: time="2025-12-12T18:48:12.164469902Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Dec 12 18:48:12.164528 containerd[1579]: time="2025-12-12T18:48:12.164522964Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Dec 12 18:48:12.164808 kubelet[2765]: E1212 18:48:12.164747 2765 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Dec 12 18:48:12.164868 kubelet[2765]: E1212 18:48:12.164811 2765 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Dec 12 18:48:12.164963 kubelet[2765]: E1212 18:48:12.164930 2765 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rnh79,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-76c8b888c6-r72lf_calico-system(1fb0cd74-e8a3-44e5-8349-27377736245d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Dec 12 18:48:12.166343 kubelet[2765]: E1212 18:48:12.166293 2765 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76c8b888c6-r72lf" podUID="1fb0cd74-e8a3-44e5-8349-27377736245d"
Dec 12 18:48:12.598885 systemd-networkd[1493]: cali9a45e7f70e0: Gained IPv6LL
Dec 12 18:48:12.757708 kubelet[2765]: E1212 18:48:12.757579 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 12 18:48:12.758342 kubelet[2765]: E1212 18:48:12.758304 2765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-846d75cf95-jcxmc" podUID="50a60128-897b-496b-b3a2-5d063bc81d6b" Dec 12 18:48:12.758822 kubelet[2765]: E1212 18:48:12.758767 2765 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76c8b888c6-r72lf" podUID="1fb0cd74-e8a3-44e5-8349-27377736245d" Dec 12 18:48:14.673208 systemd[1]: Started sshd@10-10.0.0.117:22-10.0.0.1:57208.service - OpenSSH per-connection server daemon (10.0.0.1:57208). Dec 12 18:48:14.744614 sshd[4930]: Accepted publickey for core from 10.0.0.1 port 57208 ssh2: RSA SHA256:Jfli01egnEuwhQmCMAJ3v0pvzQgHZ6Pm5V0wlrgQH5U Dec 12 18:48:14.746821 sshd-session[4930]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:48:14.751329 systemd-logind[1562]: New session 11 of user core. Dec 12 18:48:14.758746 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 12 18:48:14.927232 sshd[4933]: Connection closed by 10.0.0.1 port 57208 Dec 12 18:48:14.927483 sshd-session[4930]: pam_unix(sshd:session): session closed for user core Dec 12 18:48:14.932149 systemd[1]: sshd@10-10.0.0.117:22-10.0.0.1:57208.service: Deactivated successfully. Dec 12 18:48:14.934157 systemd[1]: session-11.scope: Deactivated successfully. Dec 12 18:48:14.935084 systemd-logind[1562]: Session 11 logged out. Waiting for processes to exit. Dec 12 18:48:14.936358 systemd-logind[1562]: Removed session 11. Dec 12 18:48:19.952641 systemd[1]: Started sshd@11-10.0.0.117:22-10.0.0.1:57210.service - OpenSSH per-connection server daemon (10.0.0.1:57210). Dec 12 18:48:20.018449 sshd[4957]: Accepted publickey for core from 10.0.0.1 port 57210 ssh2: RSA SHA256:Jfli01egnEuwhQmCMAJ3v0pvzQgHZ6Pm5V0wlrgQH5U Dec 12 18:48:20.020119 sshd-session[4957]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:48:20.024668 systemd-logind[1562]: New session 12 of user core. Dec 12 18:48:20.035770 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 12 18:48:20.160332 sshd[4960]: Connection closed by 10.0.0.1 port 57210 Dec 12 18:48:20.160698 sshd-session[4957]: pam_unix(sshd:session): session closed for user core Dec 12 18:48:20.163941 systemd[1]: sshd@11-10.0.0.117:22-10.0.0.1:57210.service: Deactivated successfully. Dec 12 18:48:20.166125 systemd[1]: session-12.scope: Deactivated successfully. 
Dec 12 18:48:20.167753 systemd-logind[1562]: Session 12 logged out. Waiting for processes to exit. Dec 12 18:48:20.169052 systemd-logind[1562]: Removed session 12. Dec 12 18:48:23.545746 containerd[1579]: time="2025-12-12T18:48:23.545664698Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:48:23.889909 containerd[1579]: time="2025-12-12T18:48:23.889752751Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:48:23.902748 containerd[1579]: time="2025-12-12T18:48:23.902661728Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:48:23.902889 containerd[1579]: time="2025-12-12T18:48:23.902757090Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 18:48:23.903006 kubelet[2765]: E1212 18:48:23.902936 2765 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:48:23.903401 kubelet[2765]: E1212 18:48:23.903015 2765 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:48:23.903401 kubelet[2765]: E1212 18:48:23.903161 2765 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8fk5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-846d75cf95-ph74m_calico-apiserver(25b4857d-c252-4226-a62a-0019d4b3cac2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:48:23.904413 kubelet[2765]: E1212 18:48:23.904355 2765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-846d75cf95-ph74m" podUID="25b4857d-c252-4226-a62a-0019d4b3cac2" Dec 12 18:48:24.545091 containerd[1579]: time="2025-12-12T18:48:24.545018185Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 18:48:24.938734 containerd[1579]: time="2025-12-12T18:48:24.938682761Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:48:24.940262 containerd[1579]: time="2025-12-12T18:48:24.940129238Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 18:48:24.940262 containerd[1579]: time="2025-12-12T18:48:24.940238286Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 12 18:48:24.940494 kubelet[2765]: E1212 18:48:24.940449 2765 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:48:24.940905 kubelet[2765]: E1212 18:48:24.940516 2765 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:48:24.941461 containerd[1579]: time="2025-12-12T18:48:24.940977831Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 18:48:24.941553 kubelet[2765]: E1212 18:48:24.941112 2765 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r57lm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7cdf74c998-54c4h_calico-system(bbf4b148-f944-43dd-959c-c1fed4f278a2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 18:48:24.942453 kubelet[2765]: E1212 18:48:24.942407 2765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cdf74c998-54c4h" podUID="bbf4b148-f944-43dd-959c-c1fed4f278a2" Dec 12 18:48:25.175356 systemd[1]: Started sshd@12-10.0.0.117:22-10.0.0.1:40798.service - OpenSSH per-connection server daemon (10.0.0.1:40798). Dec 12 18:48:25.239092 sshd[4982]: Accepted publickey for core from 10.0.0.1 port 40798 ssh2: RSA SHA256:Jfli01egnEuwhQmCMAJ3v0pvzQgHZ6Pm5V0wlrgQH5U Dec 12 18:48:25.241349 sshd-session[4982]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:48:25.246747 systemd-logind[1562]: New session 13 of user core. Dec 12 18:48:25.256782 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 12 18:48:25.284341 containerd[1579]: time="2025-12-12T18:48:25.284283390Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:48:25.355568 containerd[1579]: time="2025-12-12T18:48:25.355342166Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 12 18:48:25.355568 containerd[1579]: time="2025-12-12T18:48:25.355454572Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 18:48:25.356648 kubelet[2765]: E1212 18:48:25.355934 2765 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:48:25.356648 kubelet[2765]: E1212 18:48:25.356071 2765 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:48:25.356648 kubelet[2765]: E1212 18:48:25.356225 2765 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7hbld,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vxcm2_calico-system(5fa2bd70-6779-4823-84fb-43f19b5a18cb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 18:48:25.359416 containerd[1579]: time="2025-12-12T18:48:25.359374987Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 18:48:25.461090 sshd[4985]: Connection closed by 10.0.0.1 port 40798 Dec 12 18:48:25.459639 sshd-session[4982]: pam_unix(sshd:session): session closed for user core Dec 12 18:48:25.474186 systemd[1]: sshd@12-10.0.0.117:22-10.0.0.1:40798.service: Deactivated successfully. Dec 12 18:48:25.476818 systemd[1]: session-13.scope: Deactivated successfully. Dec 12 18:48:25.478181 systemd-logind[1562]: Session 13 logged out. Waiting for processes to exit. Dec 12 18:48:25.483188 systemd[1]: Started sshd@13-10.0.0.117:22-10.0.0.1:40802.service - OpenSSH per-connection server daemon (10.0.0.1:40802). Dec 12 18:48:25.484175 systemd-logind[1562]: Removed session 13. Dec 12 18:48:25.557945 sshd[5000]: Accepted publickey for core from 10.0.0.1 port 40802 ssh2: RSA SHA256:Jfli01egnEuwhQmCMAJ3v0pvzQgHZ6Pm5V0wlrgQH5U Dec 12 18:48:25.560084 sshd-session[5000]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:48:25.565743 systemd-logind[1562]: New session 14 of user core. Dec 12 18:48:25.578800 systemd[1]: Started session-14.scope - Session 14 of User core. 
Dec 12 18:48:25.819974 sshd[5003]: Connection closed by 10.0.0.1 port 40802 Dec 12 18:48:25.820414 sshd-session[5000]: pam_unix(sshd:session): session closed for user core Dec 12 18:48:25.832866 systemd[1]: Started sshd@14-10.0.0.117:22-10.0.0.1:40818.service - OpenSSH per-connection server daemon (10.0.0.1:40818). Dec 12 18:48:25.841013 systemd[1]: sshd@13-10.0.0.117:22-10.0.0.1:40802.service: Deactivated successfully. Dec 12 18:48:25.852331 systemd[1]: session-14.scope: Deactivated successfully. Dec 12 18:48:25.855662 systemd-logind[1562]: Session 14 logged out. Waiting for processes to exit. Dec 12 18:48:25.856879 systemd-logind[1562]: Removed session 14. Dec 12 18:48:25.876563 containerd[1579]: time="2025-12-12T18:48:25.876497482Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:48:25.877940 containerd[1579]: time="2025-12-12T18:48:25.877884003Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 18:48:25.878063 containerd[1579]: time="2025-12-12T18:48:25.877947153Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 12 18:48:25.878230 kubelet[2765]: E1212 18:48:25.878165 2765 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:48:25.878230 kubelet[2765]: E1212 18:48:25.878232 2765 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:48:25.878461 kubelet[2765]: E1212 18:48:25.878381 2765 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7hbld,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vxcm2_calico-system(5fa2bd70-6779-4823-84fb-43f19b5a18cb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 18:48:25.879708 kubelet[2765]: E1212 18:48:25.879655 2765 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vxcm2" podUID="5fa2bd70-6779-4823-84fb-43f19b5a18cb" Dec 12 18:48:25.911711 sshd[5011]: Accepted publickey for core from 10.0.0.1 port 40818 ssh2: RSA SHA256:Jfli01egnEuwhQmCMAJ3v0pvzQgHZ6Pm5V0wlrgQH5U Dec 12 18:48:25.913928 sshd-session[5011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:48:25.919599 systemd-logind[1562]: New session 15 of user core. Dec 12 18:48:25.930805 systemd[1]: Started session-15.scope - Session 15 of User core. 
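[Annotation] At this point the kubelet is cycling the same pods through ErrImagePull and then ImagePullBackOff. A short sketch using the official kubernetes Python client to surface every container stuck in that state, assuming a reachable kubeconfig for this cluster (nothing here is specific to this node):

    # Sketch: list containers stuck on image pulls, matching the
    # ErrImagePull / ImagePullBackOff entries the kubelet logs above.
    from kubernetes import client, config

    config.load_kube_config()  # or config.load_incluster_config() in-cluster
    v1 = client.CoreV1Api()

    PULL_REASONS = {"ErrImagePull", "ImagePullBackOff"}

    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        for cs in (pod.status.container_statuses or []):
            waiting = cs.state.waiting
            if waiting and waiting.reason in PULL_REASONS:
                print(f"{pod.metadata.namespace}/{pod.metadata.name} "
                      f"container={cs.name} reason={waiting.reason} "
                      f"image={cs.image}")

Run against this cluster it would list goldmane-666569f655-b9hfz, csi-node-driver-vxcm2, whisker-76c8b888c6-r72lf and both calico-apiserver replicas, mirroring the pod_workers.go entries above.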
Dec 12 18:48:26.087256 sshd[5017]: Connection closed by 10.0.0.1 port 40818 Dec 12 18:48:26.087580 sshd-session[5011]: pam_unix(sshd:session): session closed for user core Dec 12 18:48:26.091964 systemd[1]: sshd@14-10.0.0.117:22-10.0.0.1:40818.service: Deactivated successfully. Dec 12 18:48:26.094208 systemd[1]: session-15.scope: Deactivated successfully. Dec 12 18:48:26.095862 systemd-logind[1562]: Session 15 logged out. Waiting for processes to exit. Dec 12 18:48:26.097368 systemd-logind[1562]: Removed session 15. Dec 12 18:48:26.545491 containerd[1579]: time="2025-12-12T18:48:26.545449316Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 18:48:26.874559 containerd[1579]: time="2025-12-12T18:48:26.874412505Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:48:26.914534 containerd[1579]: time="2025-12-12T18:48:26.914435496Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 18:48:26.914703 containerd[1579]: time="2025-12-12T18:48:26.914644555Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 12 18:48:26.914932 kubelet[2765]: E1212 18:48:26.914870 2765 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:48:26.915356 kubelet[2765]: E1212 18:48:26.914936 2765 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:48:26.915356 kubelet[2765]: E1212 18:48:26.915268 2765 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vcxpf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-b9hfz_calico-system(13ccfabf-6529-4c53-843d-bc0433af6501): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 18:48:26.915663 containerd[1579]: time="2025-12-12T18:48:26.915576497Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:48:26.916635 kubelet[2765]: E1212 18:48:26.916531 2765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not 
found\"" pod="calico-system/goldmane-666569f655-b9hfz" podUID="13ccfabf-6529-4c53-843d-bc0433af6501" Dec 12 18:48:27.262715 containerd[1579]: time="2025-12-12T18:48:27.262626444Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:48:27.264577 containerd[1579]: time="2025-12-12T18:48:27.264511915Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:48:27.264577 containerd[1579]: time="2025-12-12T18:48:27.264556872Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 18:48:27.264907 kubelet[2765]: E1212 18:48:27.264838 2765 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:48:27.264907 kubelet[2765]: E1212 18:48:27.264899 2765 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:48:27.265375 kubelet[2765]: E1212 18:48:27.265224 2765 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k8rcn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-846d75cf95-jcxmc_calico-apiserver(50a60128-897b-496b-b3a2-5d063bc81d6b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:48:27.265564 containerd[1579]: time="2025-12-12T18:48:27.265279702Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 18:48:27.266789 kubelet[2765]: E1212 18:48:27.266691 2765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-846d75cf95-jcxmc" podUID="50a60128-897b-496b-b3a2-5d063bc81d6b" Dec 12 18:48:27.576093 containerd[1579]: time="2025-12-12T18:48:27.575885201Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:48:27.750212 containerd[1579]: time="2025-12-12T18:48:27.750111524Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 18:48:27.750395 containerd[1579]: time="2025-12-12T18:48:27.750147653Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 12 18:48:27.750502 kubelet[2765]: E1212 18:48:27.750435 2765 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:48:27.750551 kubelet[2765]: E1212 18:48:27.750509 2765 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:48:27.750740 kubelet[2765]: E1212 18:48:27.750659 2765 
kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:0982576f70ac4c0ba90c28c35eaa8148,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rnh79,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-76c8b888c6-r72lf_calico-system(1fb0cd74-e8a3-44e5-8349-27377736245d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 18:48:27.752780 containerd[1579]: time="2025-12-12T18:48:27.752749433Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 18:48:28.154044 containerd[1579]: time="2025-12-12T18:48:28.153956113Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:48:28.187812 containerd[1579]: time="2025-12-12T18:48:28.187736103Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 12 18:48:28.187995 containerd[1579]: time="2025-12-12T18:48:28.187790706Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 18:48:28.188178 kubelet[2765]: E1212 18:48:28.188112 2765 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:48:28.188580 kubelet[2765]: E1212 18:48:28.188188 2765 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:48:28.188580 kubelet[2765]: E1212 18:48:28.188334 2765 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rnh79,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-76c8b888c6-r72lf_calico-system(1fb0cd74-e8a3-44e5-8349-27377736245d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 18:48:28.189621 kubelet[2765]: E1212 18:48:28.189521 2765 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76c8b888c6-r72lf" podUID="1fb0cd74-e8a3-44e5-8349-27377736245d" Dec 12 18:48:31.106998 systemd[1]: Started sshd@15-10.0.0.117:22-10.0.0.1:33792.service - OpenSSH per-connection server daemon (10.0.0.1:33792). 
Dec 12 18:48:31.173643 sshd[5042]: Accepted publickey for core from 10.0.0.1 port 33792 ssh2: RSA SHA256:Jfli01egnEuwhQmCMAJ3v0pvzQgHZ6Pm5V0wlrgQH5U Dec 12 18:48:31.175813 sshd-session[5042]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:48:31.180665 systemd-logind[1562]: New session 16 of user core. Dec 12 18:48:31.189768 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 12 18:48:31.316910 sshd[5047]: Connection closed by 10.0.0.1 port 33792 Dec 12 18:48:31.317264 sshd-session[5042]: pam_unix(sshd:session): session closed for user core Dec 12 18:48:31.322232 systemd[1]: sshd@15-10.0.0.117:22-10.0.0.1:33792.service: Deactivated successfully. Dec 12 18:48:31.325125 systemd[1]: session-16.scope: Deactivated successfully. Dec 12 18:48:31.326114 systemd-logind[1562]: Session 16 logged out. Waiting for processes to exit. Dec 12 18:48:31.327480 systemd-logind[1562]: Removed session 16. Dec 12 18:48:36.353725 systemd[1]: Started sshd@16-10.0.0.117:22-10.0.0.1:33804.service - OpenSSH per-connection server daemon (10.0.0.1:33804). Dec 12 18:48:36.439389 sshd[5060]: Accepted publickey for core from 10.0.0.1 port 33804 ssh2: RSA SHA256:Jfli01egnEuwhQmCMAJ3v0pvzQgHZ6Pm5V0wlrgQH5U Dec 12 18:48:36.442228 sshd-session[5060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:48:36.461055 systemd-logind[1562]: New session 17 of user core. Dec 12 18:48:36.471961 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 12 18:48:36.865849 sshd[5063]: Connection closed by 10.0.0.1 port 33804 Dec 12 18:48:36.866562 sshd-session[5060]: pam_unix(sshd:session): session closed for user core Dec 12 18:48:36.879418 systemd[1]: sshd@16-10.0.0.117:22-10.0.0.1:33804.service: Deactivated successfully. Dec 12 18:48:36.885347 systemd[1]: session-17.scope: Deactivated successfully. Dec 12 18:48:36.888386 systemd-logind[1562]: Session 17 logged out. Waiting for processes to exit. Dec 12 18:48:36.893372 systemd-logind[1562]: Removed session 17. 
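[Annotation] The recurring "Nameserver limits exceeded" warnings (kubelet dns.go:153, above and below) reflect the glibc resolver limit: only the first three nameserver lines in resolv.conf are honored (MAXNS = 3), so the kubelet truncates the list, here keeping exactly 1.1.1.1, 1.0.0.1 and 8.8.8.8. A small Python sketch of the same check (the path and limit are the conventional glibc values, assumed rather than read from this host):

    MAXNS = 3  # glibc resolv.h honors at most this many nameserver lines

    def check_resolv_conf(path="/etc/resolv.conf"):
        nameservers = []
        with open(path) as f:
            for line in f:
                parts = line.split()
                if len(parts) >= 2 and parts[0] == "nameserver":
                    nameservers.append(parts[1])
        if len(nameservers) > MAXNS:
            # This is the condition that produces the kubelet warning.
            print(f"limit exceeded: keeping {nameservers[:MAXNS]}, "
                  f"omitting {nameservers[MAXNS:]}")
        else:
            print("within limits:", nameservers)

    check_resolv_conf()

The warning is cosmetic as long as the three retained resolvers work; it is unrelated to the image-pull failures above.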
Dec 12 18:48:38.546705 kubelet[2765]: E1212 18:48:38.545949 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:48:38.558980 kubelet[2765]: E1212 18:48:38.558773 2765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-846d75cf95-ph74m" podUID="25b4857d-c252-4226-a62a-0019d4b3cac2" Dec 12 18:48:38.558980 kubelet[2765]: E1212 18:48:38.558904 2765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cdf74c998-54c4h" podUID="bbf4b148-f944-43dd-959c-c1fed4f278a2" Dec 12 18:48:39.022637 kubelet[2765]: E1212 18:48:39.022603 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:48:40.550220 kubelet[2765]: E1212 18:48:40.549955 2765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-b9hfz" podUID="13ccfabf-6529-4c53-843d-bc0433af6501" Dec 12 18:48:40.554847 kubelet[2765]: E1212 18:48:40.550159 2765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-846d75cf95-jcxmc" podUID="50a60128-897b-496b-b3a2-5d063bc81d6b" Dec 12 18:48:40.555448 kubelet[2765]: E1212 18:48:40.553490 2765 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", 
failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vxcm2" podUID="5fa2bd70-6779-4823-84fb-43f19b5a18cb" Dec 12 18:48:40.555448 kubelet[2765]: E1212 18:48:40.555415 2765 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76c8b888c6-r72lf" podUID="1fb0cd74-e8a3-44e5-8349-27377736245d" Dec 12 18:48:41.882467 systemd[1]: Started sshd@17-10.0.0.117:22-10.0.0.1:55548.service - OpenSSH per-connection server daemon (10.0.0.1:55548). Dec 12 18:48:41.969628 sshd[5104]: Accepted publickey for core from 10.0.0.1 port 55548 ssh2: RSA SHA256:Jfli01egnEuwhQmCMAJ3v0pvzQgHZ6Pm5V0wlrgQH5U Dec 12 18:48:41.971444 sshd-session[5104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:48:41.980920 systemd-logind[1562]: New session 18 of user core. Dec 12 18:48:42.001309 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 12 18:48:42.154235 sshd[5107]: Connection closed by 10.0.0.1 port 55548 Dec 12 18:48:42.155660 sshd-session[5104]: pam_unix(sshd:session): session closed for user core Dec 12 18:48:42.165053 systemd[1]: sshd@17-10.0.0.117:22-10.0.0.1:55548.service: Deactivated successfully. Dec 12 18:48:42.171561 systemd[1]: session-18.scope: Deactivated successfully. Dec 12 18:48:42.172509 systemd-logind[1562]: Session 18 logged out. Waiting for processes to exit. Dec 12 18:48:42.173933 systemd-logind[1562]: Removed session 18. Dec 12 18:48:44.017826 update_engine[1566]: I20251212 18:48:44.017736 1566 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Dec 12 18:48:44.017826 update_engine[1566]: I20251212 18:48:44.017807 1566 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Dec 12 18:48:44.019552 update_engine[1566]: I20251212 18:48:44.019498 1566 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Dec 12 18:48:44.020257 update_engine[1566]: I20251212 18:48:44.020206 1566 omaha_request_params.cc:62] Current group set to stable Dec 12 18:48:44.020389 update_engine[1566]: I20251212 18:48:44.020354 1566 update_attempter.cc:499] Already updated boot flags. Skipping. 
Dec 12 18:48:44.020389 update_engine[1566]: I20251212 18:48:44.020367 1566 update_attempter.cc:643] Scheduling an action processor start. Dec 12 18:48:44.020453 update_engine[1566]: I20251212 18:48:44.020390 1566 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Dec 12 18:48:44.020487 update_engine[1566]: I20251212 18:48:44.020452 1566 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Dec 12 18:48:44.021640 update_engine[1566]: I20251212 18:48:44.020543 1566 omaha_request_action.cc:271] Posting an Omaha request to disabled Dec 12 18:48:44.021640 update_engine[1566]: I20251212 18:48:44.020562 1566 omaha_request_action.cc:272] Request: Dec 12 18:48:44.021640 update_engine[1566]: [Omaha request XML body omitted; the tag content was lost in this capture] Dec 12 18:48:44.021640 update_engine[1566]: I20251212 18:48:44.020571 1566 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 12 18:48:44.027638 update_engine[1566]: I20251212 18:48:44.027489 1566 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 12 18:48:44.028425 update_engine[1566]: I20251212 18:48:44.028385 1566 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 12 18:48:44.031578 locksmithd[1616]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Dec 12 18:48:44.035463 update_engine[1566]: E20251212 18:48:44.035400 1566 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 12 18:48:44.035542 update_engine[1566]: I20251212 18:48:44.035523 1566 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Dec 12 18:48:47.172119 systemd[1]: Started sshd@18-10.0.0.117:22-10.0.0.1:55556.service - OpenSSH per-connection server daemon (10.0.0.1:55556). Dec 12 18:48:47.255466 sshd[5122]: Accepted publickey for core from 10.0.0.1 port 55556 ssh2: RSA SHA256:Jfli01egnEuwhQmCMAJ3v0pvzQgHZ6Pm5V0wlrgQH5U Dec 12 18:48:47.258544 sshd-session[5122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:48:47.264688 systemd-logind[1562]: New session 19 of user core. Dec 12 18:48:47.270845 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 12 18:48:47.402394 sshd[5125]: Connection closed by 10.0.0.1 port 55556 Dec 12 18:48:47.402767 sshd-session[5122]: pam_unix(sshd:session): session closed for user core Dec 12 18:48:47.414831 systemd[1]: sshd@18-10.0.0.117:22-10.0.0.1:55556.service: Deactivated successfully. Dec 12 18:48:47.417736 systemd[1]: session-19.scope: Deactivated successfully. Dec 12 18:48:47.418858 systemd-logind[1562]: Session 19 logged out. Waiting for processes to exit. Dec 12 18:48:47.423987 systemd[1]: Started sshd@19-10.0.0.117:22-10.0.0.1:55566.service - OpenSSH per-connection server daemon (10.0.0.1:55566). Dec 12 18:48:47.425115 systemd-logind[1562]: Removed session 19. Dec 12 18:48:47.485857 sshd[5138]: Accepted publickey for core from 10.0.0.1 port 55566 ssh2: RSA SHA256:Jfli01egnEuwhQmCMAJ3v0pvzQgHZ6Pm5V0wlrgQH5U Dec 12 18:48:47.488018 sshd-session[5138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:48:47.493700 systemd-logind[1562]: New session 20 of user core.
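In the update_engine block above, the Omaha client posts its request to the literal URL "disabled" (Flatcar's convention for an update server that has been switched off), so curl's DNS lookup can never succeed and libcurl_http_fetcher reschedules itself on a one-second timeout source, logging "No HTTP response, retry N". A minimal Go sketch of that fetch-and-retry shape, assuming a fixed retry budget (the helper name and URL path are illustrative, not update_engine's own):

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    // fetchWithRetries is a stand-in for libcurl_http_fetcher: POST once,
    // and on failure sleep and try again until the retry budget is spent.
    func fetchWithRetries(url string, maxRetries int, delay time.Duration) error {
    	var err error
    	for attempt := 1; attempt <= maxRetries; attempt++ {
    		var resp *http.Response
    		resp, err = http.Post(url, "text/xml", nil)
    		if err == nil {
    			resp.Body.Close()
    			return nil
    		}
    		// Mirrors "No HTTP response, retry N" in the journal above.
    		fmt.Printf("No HTTP response, retry %d: %v\n", attempt, err)
    		time.Sleep(delay)
    	}
    	return err
    }

    func main() {
    	// "disabled" is not a resolvable host, so every attempt fails at
    	// DNS, like "Could not resolve host: disabled" above. The request
    	// path is illustrative.
    	_ = fetchWithRetries("http://disabled/v1/update/", 3, time.Second)
    }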
Dec 12 18:48:47.502840 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 12 18:48:47.863443 sshd[5141]: Connection closed by 10.0.0.1 port 55566 Dec 12 18:48:47.863771 sshd-session[5138]: pam_unix(sshd:session): session closed for user core Dec 12 18:48:47.876582 systemd[1]: sshd@19-10.0.0.117:22-10.0.0.1:55566.service: Deactivated successfully. Dec 12 18:48:47.878758 systemd[1]: session-20.scope: Deactivated successfully. Dec 12 18:48:47.879629 systemd-logind[1562]: Session 20 logged out. Waiting for processes to exit. Dec 12 18:48:47.882982 systemd[1]: Started sshd@20-10.0.0.117:22-10.0.0.1:55578.service - OpenSSH per-connection server daemon (10.0.0.1:55578). Dec 12 18:48:47.883989 systemd-logind[1562]: Removed session 20. Dec 12 18:48:47.948493 sshd[5152]: Accepted publickey for core from 10.0.0.1 port 55578 ssh2: RSA SHA256:Jfli01egnEuwhQmCMAJ3v0pvzQgHZ6Pm5V0wlrgQH5U Dec 12 18:48:47.950459 sshd-session[5152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:48:47.955699 systemd-logind[1562]: New session 21 of user core. Dec 12 18:48:47.964740 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 12 18:48:48.556844 kubelet[2765]: E1212 18:48:48.556754 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:48:49.102841 sshd[5155]: Connection closed by 10.0.0.1 port 55578 Dec 12 18:48:49.106097 sshd-session[5152]: pam_unix(sshd:session): session closed for user core Dec 12 18:48:49.118710 systemd[1]: sshd@20-10.0.0.117:22-10.0.0.1:55578.service: Deactivated successfully. Dec 12 18:48:49.121427 systemd[1]: session-21.scope: Deactivated successfully. Dec 12 18:48:49.123545 systemd-logind[1562]: Session 21 logged out. Waiting for processes to exit. Dec 12 18:48:49.129156 systemd[1]: Started sshd@21-10.0.0.117:22-10.0.0.1:55582.service - OpenSSH per-connection server daemon (10.0.0.1:55582). Dec 12 18:48:49.130929 systemd-logind[1562]: Removed session 21. Dec 12 18:48:49.196525 sshd[5173]: Accepted publickey for core from 10.0.0.1 port 55582 ssh2: RSA SHA256:Jfli01egnEuwhQmCMAJ3v0pvzQgHZ6Pm5V0wlrgQH5U Dec 12 18:48:49.198492 sshd-session[5173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:48:49.203971 systemd-logind[1562]: New session 22 of user core. Dec 12 18:48:49.212871 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 12 18:48:49.462467 sshd[5176]: Connection closed by 10.0.0.1 port 55582 Dec 12 18:48:49.461842 sshd-session[5173]: pam_unix(sshd:session): session closed for user core Dec 12 18:48:49.476016 systemd[1]: sshd@21-10.0.0.117:22-10.0.0.1:55582.service: Deactivated successfully. Dec 12 18:48:49.478463 systemd[1]: session-22.scope: Deactivated successfully. Dec 12 18:48:49.479319 systemd-logind[1562]: Session 22 logged out. Waiting for processes to exit. Dec 12 18:48:49.483080 systemd[1]: Started sshd@22-10.0.0.117:22-10.0.0.1:55596.service - OpenSSH per-connection server daemon (10.0.0.1:55596). Dec 12 18:48:49.484198 systemd-logind[1562]: Removed session 22. 
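Each "Accepted publickey ... RSA SHA256:Jfli01e..." line in these sshd entries identifies the client key by its OpenSSH SHA256 fingerprint: the unpadded base64 of the SHA-256 digest of the key's wire encoding. A short Go sketch that reproduces such a fingerprint from an authorized_keys line, using golang.org/x/crypto/ssh (the sample key below is a placeholder, not the key from this host):

    package main

    import (
    	"fmt"
    	"log"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	// Placeholder authorized_keys entry; substitute the real public
    	// key to reproduce the SHA256:... value sshd logs.
    	line := []byte("ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKX0rG5b3bNui+x8RW2nPzeNLJ9/0WcOQweRtKuq1B5G core@example")

    	pub, _, _, _, err := ssh.ParseAuthorizedKey(line)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// FingerprintSHA256 returns "SHA256:" plus the unpadded base64
    	// SHA-256 of the key's wire format, matching sshd's log line.
    	fmt.Println(ssh.FingerprintSHA256(pub))
    }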
Dec 12 18:48:49.538828 sshd[5188]: Accepted publickey for core from 10.0.0.1 port 55596 ssh2: RSA SHA256:Jfli01egnEuwhQmCMAJ3v0pvzQgHZ6Pm5V0wlrgQH5U Dec 12 18:48:49.540875 sshd-session[5188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:48:49.547625 containerd[1579]: time="2025-12-12T18:48:49.546528310Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:48:49.546742 systemd-logind[1562]: New session 23 of user core. Dec 12 18:48:49.556820 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 12 18:48:49.671956 sshd[5191]: Connection closed by 10.0.0.1 port 55596 Dec 12 18:48:49.674721 sshd-session[5188]: pam_unix(sshd:session): session closed for user core Dec 12 18:48:49.679053 systemd[1]: sshd@22-10.0.0.117:22-10.0.0.1:55596.service: Deactivated successfully. Dec 12 18:48:49.681349 systemd[1]: session-23.scope: Deactivated successfully. Dec 12 18:48:49.682939 systemd-logind[1562]: Session 23 logged out. Waiting for processes to exit. Dec 12 18:48:49.684415 systemd-logind[1562]: Removed session 23. Dec 12 18:48:49.912328 containerd[1579]: time="2025-12-12T18:48:49.912151395Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:48:49.913610 containerd[1579]: time="2025-12-12T18:48:49.913531452Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:48:49.913751 containerd[1579]: time="2025-12-12T18:48:49.913660556Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 18:48:49.913885 kubelet[2765]: E1212 18:48:49.913811 2765 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:48:49.913885 kubelet[2765]: E1212 18:48:49.913879 2765 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:48:49.914269 kubelet[2765]: E1212 18:48:49.914075 2765 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8fk5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-846d75cf95-ph74m_calico-apiserver(25b4857d-c252-4226-a62a-0019d4b3cac2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:48:49.915323 kubelet[2765]: E1212 18:48:49.915238 2765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-846d75cf95-ph74m" podUID="25b4857d-c252-4226-a62a-0019d4b3cac2" Dec 12 18:48:51.544475 kubelet[2765]: E1212 18:48:51.544423 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:48:52.546368 containerd[1579]: time="2025-12-12T18:48:52.545950882Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 18:48:52.915391 containerd[1579]: time="2025-12-12T18:48:52.915345635Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:48:52.919436 containerd[1579]: time="2025-12-12T18:48:52.919372243Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 18:48:52.919517 containerd[1579]: time="2025-12-12T18:48:52.919487571Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 12 18:48:52.919747 kubelet[2765]: E1212 18:48:52.919681 2765 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:48:52.920155 kubelet[2765]: E1212 18:48:52.919753 2765 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:48:52.923908 kubelet[2765]: E1212 18:48:52.920038 2765 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r57lm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7cdf74c998-54c4h_calico-system(bbf4b148-f944-43dd-959c-c1fed4f278a2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 18:48:52.928618 containerd[1579]: time="2025-12-12T18:48:52.924403052Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:48:52.928744 kubelet[2765]: E1212 18:48:52.928462 2765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cdf74c998-54c4h" podUID="bbf4b148-f944-43dd-959c-c1fed4f278a2" Dec 12 18:48:53.004059 kernel: hrtimer: interrupt took 9684704 ns Dec 12 18:48:53.314707 containerd[1579]: time="2025-12-12T18:48:53.314292941Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:48:53.321987 containerd[1579]: time="2025-12-12T18:48:53.321756266Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:48:53.321987 containerd[1579]: time="2025-12-12T18:48:53.321909105Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 18:48:53.324221 kubelet[2765]: E1212 18:48:53.322415 2765 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:48:53.324221 kubelet[2765]: E1212 18:48:53.322485 2765 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:48:53.324221 kubelet[2765]: E1212 18:48:53.323217 2765 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k8rcn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-846d75cf95-jcxmc_calico-apiserver(50a60128-897b-496b-b3a2-5d063bc81d6b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:48:53.330443 kubelet[2765]: E1212 18:48:53.330187 2765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-846d75cf95-jcxmc" podUID="50a60128-897b-496b-b3a2-5d063bc81d6b" Dec 12 18:48:53.549638 kubelet[2765]: E1212 18:48:53.546791 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:48:54.001282 update_engine[1566]: I20251212 18:48:54.001111 1566 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 12 18:48:54.001282 update_engine[1566]: I20251212 
18:48:54.001262 1566 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 12 18:48:54.002401 update_engine[1566]: I20251212 18:48:54.001771 1566 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 12 18:48:54.008812 update_engine[1566]: E20251212 18:48:54.008186 1566 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 12 18:48:54.008976 update_engine[1566]: I20251212 18:48:54.008893 1566 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Dec 12 18:48:54.563302 containerd[1579]: time="2025-12-12T18:48:54.562654958Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 18:48:54.720367 systemd[1]: Started sshd@23-10.0.0.117:22-10.0.0.1:38696.service - OpenSSH per-connection server daemon (10.0.0.1:38696). Dec 12 18:48:54.820846 sshd[5212]: Accepted publickey for core from 10.0.0.1 port 38696 ssh2: RSA SHA256:Jfli01egnEuwhQmCMAJ3v0pvzQgHZ6Pm5V0wlrgQH5U Dec 12 18:48:54.823642 sshd-session[5212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:48:54.844528 systemd-logind[1562]: New session 24 of user core. Dec 12 18:48:54.864633 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 12 18:48:55.035904 containerd[1579]: time="2025-12-12T18:48:55.035672972Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:48:55.080117 containerd[1579]: time="2025-12-12T18:48:55.076848340Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 12 18:48:55.080117 containerd[1579]: time="2025-12-12T18:48:55.076819374Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 18:48:55.080117 containerd[1579]: time="2025-12-12T18:48:55.078997267Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 18:48:55.080349 sshd[5215]: Connection closed by 10.0.0.1 port 38696 Dec 12 18:48:55.080732 kubelet[2765]: E1212 18:48:55.077331 2765 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:48:55.080732 kubelet[2765]: E1212 18:48:55.077398 2765 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:48:55.080732 kubelet[2765]: E1212 18:48:55.077690 2765 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7hbld,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vxcm2_calico-system(5fa2bd70-6779-4823-84fb-43f19b5a18cb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 18:48:55.080522 sshd-session[5212]: pam_unix(sshd:session): session closed for user core Dec 12 18:48:55.091292 systemd[1]: sshd@23-10.0.0.117:22-10.0.0.1:38696.service: Deactivated successfully. Dec 12 18:48:55.099226 systemd[1]: session-24.scope: Deactivated successfully. Dec 12 18:48:55.104324 systemd-logind[1562]: Session 24 logged out. Waiting for processes to exit. Dec 12 18:48:55.106038 systemd-logind[1562]: Removed session 24. 
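The containerd "fetch failed after status: 404 Not Found" lines around this point are the registry answering the manifest request for a tag that does not exist; containerd surfaces that as the gRPC NotFound "failed to resolve reference" error kubelet then records. A minimal Go sketch of the same probe against the OCI distribution API, assuming ghcr.io's anonymous token endpoint follows the Docker registry auth flow (an assumption; adjust for other registries):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"net/http"
    )

    // manifestExists asks the registry whether a tag resolves, the same
    // check containerd performs before pulling. The token endpoint below
    // is an assumption about ghcr.io's anonymous auth flow.
    func manifestExists(repo, tag string) (bool, error) {
    	tokURL := fmt.Sprintf("https://ghcr.io/token?service=ghcr.io&scope=repository:%s:pull", repo)
    	resp, err := http.Get(tokURL)
    	if err != nil {
    		return false, err
    	}
    	defer resp.Body.Close()
    	var tok struct {
    		Token string `json:"token"`
    	}
    	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
    		return false, err
    	}

    	req, err := http.NewRequest(http.MethodHead,
    		fmt.Sprintf("https://ghcr.io/v2/%s/manifests/%s", repo, tag), nil)
    	if err != nil {
    		return false, err
    	}
    	req.Header.Set("Authorization", "Bearer "+tok.Token)
    	req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json")
    	mresp, err := http.DefaultClient.Do(req)
    	if err != nil {
    		return false, err
    	}
    	defer mresp.Body.Close()
    	// A 404 here is what containerd reports as "not found" above.
    	return mresp.StatusCode == http.StatusOK, nil
    }

    func main() {
    	ok, err := manifestExists("flatcar/calico/csi", "v3.30.4")
    	fmt.Println(ok, err) // per the journal above, expect: false <nil>
    }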
Dec 12 18:48:55.451410 containerd[1579]: time="2025-12-12T18:48:55.451266595Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:48:55.454395 containerd[1579]: time="2025-12-12T18:48:55.454330946Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 12 18:48:55.454644 containerd[1579]: time="2025-12-12T18:48:55.454451514Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 18:48:55.460297 kubelet[2765]: E1212 18:48:55.460212 2765 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:48:55.460297 kubelet[2765]: E1212 18:48:55.460290 2765 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:48:55.460713 kubelet[2765]: E1212 18:48:55.460621 2765 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vcxpf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-b9hfz_calico-system(13ccfabf-6529-4c53-843d-bc0433af6501): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 18:48:55.461018 containerd[1579]: time="2025-12-12T18:48:55.460925700Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 18:48:55.462287 kubelet[2765]: E1212 18:48:55.462204 2765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-b9hfz" podUID="13ccfabf-6529-4c53-843d-bc0433af6501" Dec 12 18:48:55.816819 containerd[1579]: time="2025-12-12T18:48:55.816415683Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:48:55.827365 containerd[1579]: time="2025-12-12T18:48:55.827238592Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 18:48:55.828255 containerd[1579]: time="2025-12-12T18:48:55.827646163Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 12 18:48:55.828331 kubelet[2765]: E1212 18:48:55.827938 2765 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:48:55.830983 kubelet[2765]: E1212 18:48:55.828007 2765 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:48:55.830983 kubelet[2765]: E1212 18:48:55.828807 2765 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7hbld,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vxcm2_calico-system(5fa2bd70-6779-4823-84fb-43f19b5a18cb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 18:48:55.830983 kubelet[2765]: E1212 18:48:55.830875 2765 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vxcm2" podUID="5fa2bd70-6779-4823-84fb-43f19b5a18cb" Dec 12 18:48:55.834246 containerd[1579]: time="2025-12-12T18:48:55.833799772Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 18:48:56.206777 containerd[1579]: time="2025-12-12T18:48:56.205962201Z" 
level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:48:56.214065 containerd[1579]: time="2025-12-12T18:48:56.213867483Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 18:48:56.214065 containerd[1579]: time="2025-12-12T18:48:56.214007337Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 12 18:48:56.215160 kubelet[2765]: E1212 18:48:56.214459 2765 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:48:56.215160 kubelet[2765]: E1212 18:48:56.214530 2765 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:48:56.215160 kubelet[2765]: E1212 18:48:56.214698 2765 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:0982576f70ac4c0ba90c28c35eaa8148,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rnh79,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-76c8b888c6-r72lf_calico-system(1fb0cd74-e8a3-44e5-8349-27377736245d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 18:48:56.219493 containerd[1579]: time="2025-12-12T18:48:56.219409312Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 18:48:56.599186 containerd[1579]: time="2025-12-12T18:48:56.597920374Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:48:56.604095 containerd[1579]: time="2025-12-12T18:48:56.601477747Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 18:48:56.604095 containerd[1579]: time="2025-12-12T18:48:56.601552309Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 12 18:48:56.604330 kubelet[2765]: E1212 18:48:56.601828 2765 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:48:56.604330 kubelet[2765]: E1212 18:48:56.601900 2765 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:48:56.604330 kubelet[2765]: E1212 18:48:56.602061 2765 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rnh79,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-76c8b888c6-r72lf_calico-system(1fb0cd74-e8a3-44e5-8349-27377736245d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 18:48:56.604646 kubelet[2765]: E1212 18:48:56.604519 2765 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76c8b888c6-r72lf" podUID="1fb0cd74-e8a3-44e5-8349-27377736245d" Dec 12 18:49:00.104694 systemd[1]: Started sshd@24-10.0.0.117:22-10.0.0.1:38708.service - OpenSSH per-connection server daemon (10.0.0.1:38708). Dec 12 18:49:00.183938 sshd[5234]: Accepted publickey for core from 10.0.0.1 port 38708 ssh2: RSA SHA256:Jfli01egnEuwhQmCMAJ3v0pvzQgHZ6Pm5V0wlrgQH5U Dec 12 18:49:00.186511 sshd-session[5234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:49:00.193960 systemd-logind[1562]: New session 25 of user core. 
Dec 12 18:49:00.198675 systemd[1]: Started session-25.scope - Session 25 of User core. Dec 12 18:49:00.317722 sshd[5237]: Connection closed by 10.0.0.1 port 38708 Dec 12 18:49:00.318064 sshd-session[5234]: pam_unix(sshd:session): session closed for user core Dec 12 18:49:00.322039 systemd[1]: sshd@24-10.0.0.117:22-10.0.0.1:38708.service: Deactivated successfully. Dec 12 18:49:00.324127 systemd[1]: session-25.scope: Deactivated successfully. Dec 12 18:49:00.324873 systemd-logind[1562]: Session 25 logged out. Waiting for processes to exit. Dec 12 18:49:00.326044 systemd-logind[1562]: Removed session 25. Dec 12 18:49:01.545130 kubelet[2765]: E1212 18:49:01.545039 2765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-846d75cf95-ph74m" podUID="25b4857d-c252-4226-a62a-0019d4b3cac2" Dec 12 18:49:03.545571 kubelet[2765]: E1212 18:49:03.545510 2765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cdf74c998-54c4h" podUID="bbf4b148-f944-43dd-959c-c1fed4f278a2" Dec 12 18:49:03.992823 update_engine[1566]: I20251212 18:49:03.992728 1566 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 12 18:49:03.993309 update_engine[1566]: I20251212 18:49:03.992837 1566 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 12 18:49:03.993992 update_engine[1566]: I20251212 18:49:03.993938 1566 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 12 18:49:04.001275 update_engine[1566]: E20251212 18:49:04.001242 1566 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 12 18:49:04.001345 update_engine[1566]: I20251212 18:49:04.001324 1566 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Dec 12 18:49:05.336442 systemd[1]: Started sshd@25-10.0.0.117:22-10.0.0.1:54488.service - OpenSSH per-connection server daemon (10.0.0.1:54488). Dec 12 18:49:05.410903 sshd[5250]: Accepted publickey for core from 10.0.0.1 port 54488 ssh2: RSA SHA256:Jfli01egnEuwhQmCMAJ3v0pvzQgHZ6Pm5V0wlrgQH5U Dec 12 18:49:05.412988 sshd-session[5250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:49:05.417994 systemd-logind[1562]: New session 26 of user core. Dec 12 18:49:05.426830 systemd[1]: Started session-26.scope - Session 26 of User core. Dec 12 18:49:05.574125 sshd[5253]: Connection closed by 10.0.0.1 port 54488 Dec 12 18:49:05.574331 sshd-session[5250]: pam_unix(sshd:session): session closed for user core Dec 12 18:49:05.581566 systemd-logind[1562]: Session 26 logged out. Waiting for processes to exit. 
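By this point the pod_workers errors have shifted from ErrImagePull to ImagePullBackOff: kubelet stops hammering the registry and retries each image on an exponential schedule. The commonly documented kubelet defaults double the delay from 10s up to a 5-minute cap; treat those numbers as an assumption, since they are not visible in this journal. A sketch of that doubling:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Assumed kubelet image-pull back-off defaults: start at 10s,
    	// double on each failure, cap at 5m. Not read from this journal.
    	delay, maxDelay := 10*time.Second, 5*time.Minute
    	for attempt := 1; attempt <= 7; attempt++ {
    		fmt.Printf("failed pull %d: next retry in %v\n", attempt, delay)
    		delay *= 2
    		if delay > maxDelay {
    			delay = maxDelay
    		}
    	}
    }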
Dec 12 18:49:05.584152 systemd[1]: sshd@25-10.0.0.117:22-10.0.0.1:54488.service: Deactivated successfully. Dec 12 18:49:05.589884 systemd[1]: session-26.scope: Deactivated successfully. Dec 12 18:49:05.594794 systemd-logind[1562]: Removed session 26. Dec 12 18:49:06.545380 kubelet[2765]: E1212 18:49:06.545296 2765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-b9hfz" podUID="13ccfabf-6529-4c53-843d-bc0433af6501" Dec 12 18:49:06.547130 kubelet[2765]: E1212 18:49:06.547089 2765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-846d75cf95-jcxmc" podUID="50a60128-897b-496b-b3a2-5d063bc81d6b" Dec 12 18:49:09.547613 kubelet[2765]: E1212 18:49:09.547472 2765 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76c8b888c6-r72lf" podUID="1fb0cd74-e8a3-44e5-8349-27377736245d" Dec 12 18:49:10.546416 kubelet[2765]: E1212 18:49:10.546349 2765 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" 
pod="calico-system/csi-node-driver-vxcm2" podUID="5fa2bd70-6779-4823-84fb-43f19b5a18cb" Dec 12 18:49:10.585473 systemd[1]: Started sshd@26-10.0.0.117:22-10.0.0.1:58224.service - OpenSSH per-connection server daemon (10.0.0.1:58224). Dec 12 18:49:10.653341 sshd[5292]: Accepted publickey for core from 10.0.0.1 port 58224 ssh2: RSA SHA256:Jfli01egnEuwhQmCMAJ3v0pvzQgHZ6Pm5V0wlrgQH5U Dec 12 18:49:10.655780 sshd-session[5292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:49:10.660939 systemd-logind[1562]: New session 27 of user core. Dec 12 18:49:10.666822 systemd[1]: Started session-27.scope - Session 27 of User core. Dec 12 18:49:10.845028 sshd[5296]: Connection closed by 10.0.0.1 port 58224 Dec 12 18:49:10.845332 sshd-session[5292]: pam_unix(sshd:session): session closed for user core Dec 12 18:49:10.850045 systemd[1]: sshd@26-10.0.0.117:22-10.0.0.1:58224.service: Deactivated successfully. Dec 12 18:49:10.852309 systemd[1]: session-27.scope: Deactivated successfully. Dec 12 18:49:10.853280 systemd-logind[1562]: Session 27 logged out. Waiting for processes to exit. Dec 12 18:49:10.855464 systemd-logind[1562]: Removed session 27. Dec 12 18:49:11.544749 kubelet[2765]: E1212 18:49:11.544711 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"