Dec 16 13:13:19.125699 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 12 15:21:28 -00 2025 Dec 16 13:13:19.125743 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022 Dec 16 13:13:19.125767 kernel: BIOS-provided physical RAM map: Dec 16 13:13:19.125781 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Dec 16 13:13:19.125800 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Dec 16 13:13:19.125814 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Dec 16 13:13:19.125831 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Dec 16 13:13:19.125845 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Dec 16 13:13:19.125859 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd318fff] usable Dec 16 13:13:19.125883 kernel: BIOS-e820: [mem 0x00000000bd319000-0x00000000bd322fff] ACPI data Dec 16 13:13:19.125898 kernel: BIOS-e820: [mem 0x00000000bd323000-0x00000000bf8ecfff] usable Dec 16 13:13:19.125912 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved Dec 16 13:13:19.125926 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Dec 16 13:13:19.125956 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Dec 16 13:13:19.125973 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Dec 16 13:13:19.125994 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Dec 16 13:13:19.126009 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable 
Dec 16 13:13:19.126023 kernel: NX (Execute Disable) protection: active Dec 16 13:13:19.126038 kernel: APIC: Static calls initialized Dec 16 13:13:19.126054 kernel: efi: EFI v2.7 by EDK II Dec 16 13:13:19.126071 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9ca000 MEMATTR=0xbd323018 RNG=0xbfb73018 TPMEventLog=0xbd319018 Dec 16 13:13:19.126087 kernel: random: crng init done Dec 16 13:13:19.126103 kernel: secureboot: Secure boot disabled Dec 16 13:13:19.126119 kernel: SMBIOS 2.4 present. Dec 16 13:13:19.126135 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/02/2025 Dec 16 13:13:19.126156 kernel: DMI: Memory slots populated: 1/1 Dec 16 13:13:19.126171 kernel: Hypervisor detected: KVM Dec 16 13:13:19.126187 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Dec 16 13:13:19.126202 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 16 13:13:19.126218 kernel: kvm-clock: using sched offset of 16539794765 cycles Dec 16 13:13:19.126235 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 16 13:13:19.126251 kernel: tsc: Detected 2299.998 MHz processor Dec 16 13:13:19.126267 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 16 13:13:19.126283 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 16 13:13:19.126298 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Dec 16 13:13:19.126326 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs Dec 16 13:13:19.126341 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 16 13:13:19.126357 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Dec 16 13:13:19.126373 kernel: Using GB pages for direct mapping Dec 16 13:13:19.126389 kernel: ACPI: Early table checksum verification disabled Dec 16 13:13:19.126412 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Dec 16 13:13:19.126430 kernel: ACPI: XSDT 
0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Dec 16 13:13:19.126451 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Dec 16 13:13:19.126469 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Dec 16 13:13:19.126486 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Dec 16 13:13:19.126503 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20250404) Dec 16 13:13:19.126520 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Dec 16 13:13:19.126537 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Dec 16 13:13:19.126555 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Dec 16 13:13:19.126575 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Dec 16 13:13:19.126593 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Dec 16 13:13:19.126611 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Dec 16 13:13:19.126628 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Dec 16 13:13:19.126645 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Dec 16 13:13:19.126661 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Dec 16 13:13:19.126678 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Dec 16 13:13:19.126695 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Dec 16 13:13:19.126710 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Dec 16 13:13:19.126732 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Dec 16 13:13:19.126748 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Dec 16 13:13:19.126765 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Dec 16 13:13:19.126781 
kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff] Dec 16 13:13:19.126799 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Dec 16 13:13:19.126817 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00001000-0xbfffffff] Dec 16 13:13:19.126835 kernel: NUMA: Node 0 [mem 0x00001000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00001000-0x21fffffff] Dec 16 13:13:19.126852 kernel: NODE_DATA(0) allocated [mem 0x21fff6dc0-0x21fffdfff] Dec 16 13:13:19.126870 kernel: Zone ranges: Dec 16 13:13:19.126892 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 16 13:13:19.126910 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Dec 16 13:13:19.126927 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Dec 16 13:13:19.128989 kernel: Device empty Dec 16 13:13:19.129010 kernel: Movable zone start for each node Dec 16 13:13:19.129027 kernel: Early memory node ranges Dec 16 13:13:19.129045 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Dec 16 13:13:19.129061 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Dec 16 13:13:19.129078 kernel: node 0: [mem 0x0000000000100000-0x00000000bd318fff] Dec 16 13:13:19.129100 kernel: node 0: [mem 0x00000000bd323000-0x00000000bf8ecfff] Dec 16 13:13:19.129122 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Dec 16 13:13:19.129144 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Dec 16 13:13:19.129166 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Dec 16 13:13:19.129184 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 16 13:13:19.129203 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Dec 16 13:13:19.129221 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Dec 16 13:13:19.129239 kernel: On node 0, zone DMA32: 10 pages in unavailable ranges Dec 16 13:13:19.129257 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Dec 16 13:13:19.129279 kernel: On 
node 0, zone Normal: 32 pages in unavailable ranges Dec 16 13:13:19.129297 kernel: ACPI: PM-Timer IO Port: 0xb008 Dec 16 13:13:19.129323 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 16 13:13:19.129341 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Dec 16 13:13:19.129359 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 16 13:13:19.129378 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 16 13:13:19.129396 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 16 13:13:19.129414 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 16 13:13:19.129432 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 16 13:13:19.129454 kernel: CPU topo: Max. logical packages: 1 Dec 16 13:13:19.129472 kernel: CPU topo: Max. logical dies: 1 Dec 16 13:13:19.129490 kernel: CPU topo: Max. dies per package: 1 Dec 16 13:13:19.129507 kernel: CPU topo: Max. threads per core: 2 Dec 16 13:13:19.129525 kernel: CPU topo: Num. cores per package: 1 Dec 16 13:13:19.129543 kernel: CPU topo: Num. 
threads per package: 2 Dec 16 13:13:19.129561 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Dec 16 13:13:19.129579 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Dec 16 13:13:19.129597 kernel: Booting paravirtualized kernel on KVM Dec 16 13:13:19.129616 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 16 13:13:19.129638 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Dec 16 13:13:19.129671 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Dec 16 13:13:19.129689 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Dec 16 13:13:19.129706 kernel: pcpu-alloc: [0] 0 1 Dec 16 13:13:19.129724 kernel: kvm-guest: PV spinlocks enabled Dec 16 13:13:19.129742 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 16 13:13:19.129763 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022 Dec 16 13:13:19.129781 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Dec 16 13:13:19.129803 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 16 13:13:19.129821 kernel: Fallback order for Node 0: 0 Dec 16 13:13:19.129839 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1965136 Dec 16 13:13:19.129862 kernel: Policy zone: Normal Dec 16 13:13:19.129880 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 16 13:13:19.129899 kernel: software IO TLB: area num 2. 
Dec 16 13:13:19.129950 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 16 13:13:19.129973 kernel: Kernel/User page tables isolation: enabled Dec 16 13:13:19.129990 kernel: ftrace: allocating 40103 entries in 157 pages Dec 16 13:13:19.130008 kernel: ftrace: allocated 157 pages with 5 groups Dec 16 13:13:19.130025 kernel: Dynamic Preempt: voluntary Dec 16 13:13:19.130042 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 16 13:13:19.130065 kernel: rcu: RCU event tracing is enabled. Dec 16 13:13:19.130085 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 16 13:13:19.130103 kernel: Trampoline variant of Tasks RCU enabled. Dec 16 13:13:19.130120 kernel: Rude variant of Tasks RCU enabled. Dec 16 13:13:19.130138 kernel: Tracing variant of Tasks RCU enabled. Dec 16 13:13:19.130161 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 16 13:13:19.130178 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 16 13:13:19.130195 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 16 13:13:19.130212 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 16 13:13:19.130230 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 16 13:13:19.130249 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Dec 16 13:13:19.130267 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Dec 16 13:13:19.130282 kernel: Console: colour dummy device 80x25 Dec 16 13:13:19.130311 kernel: printk: legacy console [ttyS0] enabled Dec 16 13:13:19.130329 kernel: ACPI: Core revision 20240827 Dec 16 13:13:19.130346 kernel: APIC: Switch to symmetric I/O mode setup Dec 16 13:13:19.130364 kernel: x2apic enabled Dec 16 13:13:19.130380 kernel: APIC: Switched APIC routing to: physical x2apic Dec 16 13:13:19.130396 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Dec 16 13:13:19.130414 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Dec 16 13:13:19.130432 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998) Dec 16 13:13:19.130451 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Dec 16 13:13:19.130470 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Dec 16 13:13:19.130494 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 16 13:13:19.130514 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit Dec 16 13:13:19.130533 kernel: Spectre V2 : Mitigation: IBRS Dec 16 13:13:19.130552 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Dec 16 13:13:19.130572 kernel: RETBleed: Mitigation: IBRS Dec 16 13:13:19.130591 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Dec 16 13:13:19.130610 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Dec 16 13:13:19.130630 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Dec 16 13:13:19.130654 kernel: MDS: Mitigation: Clear CPU buffers Dec 16 13:13:19.130673 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Dec 16 13:13:19.130692 kernel: active return thunk: its_return_thunk Dec 16 13:13:19.130712 kernel: ITS: Mitigation: Aligned branch/return thunks Dec 16 13:13:19.130731 
kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 16 13:13:19.130751 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 16 13:13:19.130770 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 16 13:13:19.130789 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 16 13:13:19.130809 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Dec 16 13:13:19.130830 kernel: Freeing SMP alternatives memory: 32K Dec 16 13:13:19.130850 kernel: pid_max: default: 32768 minimum: 301 Dec 16 13:13:19.130869 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Dec 16 13:13:19.130888 kernel: landlock: Up and running. Dec 16 13:13:19.130907 kernel: SELinux: Initializing. Dec 16 13:13:19.130925 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 16 13:13:19.133014 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 16 13:13:19.133037 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Dec 16 13:13:19.133057 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Dec 16 13:13:19.133083 kernel: signal: max sigframe size: 1776 Dec 16 13:13:19.133102 kernel: rcu: Hierarchical SRCU implementation. Dec 16 13:13:19.133123 kernel: rcu: Max phase no-delay instances is 400. Dec 16 13:13:19.133143 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Dec 16 13:13:19.133162 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Dec 16 13:13:19.133182 kernel: smp: Bringing up secondary CPUs ... Dec 16 13:13:19.133201 kernel: smpboot: x86: Booting SMP configuration: Dec 16 13:13:19.133219 kernel: .... node #0, CPUs: #1 Dec 16 13:13:19.133250 kernel: MDS CPU bug present and SMT on, data leak possible. 
See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Dec 16 13:13:19.133277 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Dec 16 13:13:19.133296 kernel: smp: Brought up 1 node, 2 CPUs Dec 16 13:13:19.133321 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Dec 16 13:13:19.133340 kernel: Memory: 7556056K/7860544K available (14336K kernel code, 2444K rwdata, 26064K rodata, 46188K init, 2572K bss, 298912K reserved, 0K cma-reserved) Dec 16 13:13:19.133357 kernel: devtmpfs: initialized Dec 16 13:13:19.133372 kernel: x86/mm: Memory block size: 128MB Dec 16 13:13:19.133390 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Dec 16 13:13:19.133408 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 16 13:13:19.133430 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 16 13:13:19.133448 kernel: pinctrl core: initialized pinctrl subsystem Dec 16 13:13:19.133464 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 16 13:13:19.133482 kernel: audit: initializing netlink subsys (disabled) Dec 16 13:13:19.133499 kernel: audit: type=2000 audit(1765890793.487:1): state=initialized audit_enabled=0 res=1 Dec 16 13:13:19.133515 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 16 13:13:19.133543 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 16 13:13:19.133560 kernel: cpuidle: using governor menu Dec 16 13:13:19.133580 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 16 13:13:19.133603 kernel: dca service started, version 1.12.1 Dec 16 13:13:19.133621 kernel: PCI: Using configuration type 1 for base access Dec 16 13:13:19.133639 kernel: kprobes: kprobe jump-optimization is enabled. 
All kprobes are optimized if possible. Dec 16 13:13:19.133659 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 16 13:13:19.133679 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Dec 16 13:13:19.133699 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 16 13:13:19.133719 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Dec 16 13:13:19.133736 kernel: ACPI: Added _OSI(Module Device) Dec 16 13:13:19.133754 kernel: ACPI: Added _OSI(Processor Device) Dec 16 13:13:19.133777 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 16 13:13:19.133796 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Dec 16 13:13:19.133815 kernel: ACPI: Interpreter enabled Dec 16 13:13:19.133834 kernel: ACPI: PM: (supports S0 S3 S5) Dec 16 13:13:19.133854 kernel: ACPI: Using IOAPIC for interrupt routing Dec 16 13:13:19.133873 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 16 13:13:19.133892 kernel: PCI: Ignoring E820 reservations for host bridge windows Dec 16 13:13:19.133911 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Dec 16 13:13:19.133947 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 16 13:13:19.134236 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Dec 16 13:13:19.134434 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Dec 16 13:13:19.134617 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Dec 16 13:13:19.134640 kernel: PCI host bridge to bus 0000:00 Dec 16 13:13:19.134819 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 16 13:13:19.137130 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Dec 16 13:13:19.137354 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 16 13:13:19.137534 
kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Dec 16 13:13:19.137702 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 16 13:13:19.137903 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint Dec 16 13:13:19.139137 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 conventional PCI endpoint Dec 16 13:13:19.139352 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint Dec 16 13:13:19.139538 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Dec 16 13:13:19.139735 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 conventional PCI endpoint Dec 16 13:13:19.139919 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f] Dec 16 13:13:19.140134 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc0001000-0xc000107f] Dec 16 13:13:19.140331 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Dec 16 13:13:19.140519 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc03f] Dec 16 13:13:19.140718 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc0000000-0xc000007f] Dec 16 13:13:19.140948 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Dec 16 13:13:19.141132 kernel: pci 0000:00:05.0: BAR 0 [io 0xc080-0xc09f] Dec 16 13:13:19.141313 kernel: pci 0000:00:05.0: BAR 1 [mem 0xc0002000-0xc000203f] Dec 16 13:13:19.141335 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 16 13:13:19.141353 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 16 13:13:19.141372 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 16 13:13:19.141389 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 16 13:13:19.141406 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Dec 16 13:13:19.141429 kernel: iommu: Default domain type: Translated Dec 16 13:13:19.141447 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 16 13:13:19.141464 kernel: efivars: 
Registered efivars operations Dec 16 13:13:19.141480 kernel: PCI: Using ACPI for IRQ routing Dec 16 13:13:19.141508 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 16 13:13:19.141525 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Dec 16 13:13:19.141542 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Dec 16 13:13:19.141560 kernel: e820: reserve RAM buffer [mem 0xbd319000-0xbfffffff] Dec 16 13:13:19.141577 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Dec 16 13:13:19.143117 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Dec 16 13:13:19.143142 kernel: vgaarb: loaded Dec 16 13:13:19.143163 kernel: clocksource: Switched to clocksource kvm-clock Dec 16 13:13:19.143182 kernel: VFS: Disk quotas dquot_6.6.0 Dec 16 13:13:19.143201 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 16 13:13:19.143220 kernel: pnp: PnP ACPI init Dec 16 13:13:19.143237 kernel: pnp: PnP ACPI: found 7 devices Dec 16 13:13:19.143256 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 16 13:13:19.143275 kernel: NET: Registered PF_INET protocol family Dec 16 13:13:19.143308 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Dec 16 13:13:19.143327 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Dec 16 13:13:19.143347 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 16 13:13:19.143366 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 16 13:13:19.143385 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Dec 16 13:13:19.143404 kernel: TCP: Hash tables configured (established 65536 bind 65536) Dec 16 13:13:19.143423 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Dec 16 13:13:19.143443 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Dec 16 13:13:19.143461 kernel: 
NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 16 13:13:19.143484 kernel: NET: Registered PF_XDP protocol family Dec 16 13:13:19.143687 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 16 13:13:19.143860 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 16 13:13:19.144145 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 16 13:13:19.144341 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Dec 16 13:13:19.144547 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Dec 16 13:13:19.144573 kernel: PCI: CLS 0 bytes, default 64 Dec 16 13:13:19.144598 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Dec 16 13:13:19.144617 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Dec 16 13:13:19.144636 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 16 13:13:19.144661 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Dec 16 13:13:19.144681 kernel: clocksource: Switched to clocksource tsc Dec 16 13:13:19.144699 kernel: Initialise system trusted keyrings Dec 16 13:13:19.144718 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Dec 16 13:13:19.144736 kernel: Key type asymmetric registered Dec 16 13:13:19.144753 kernel: Asymmetric key parser 'x509' registered Dec 16 13:13:19.144775 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Dec 16 13:13:19.144793 kernel: io scheduler mq-deadline registered Dec 16 13:13:19.144810 kernel: io scheduler kyber registered Dec 16 13:13:19.144828 kernel: io scheduler bfq registered Dec 16 13:13:19.144846 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 16 13:13:19.144865 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Dec 16 13:13:19.145187 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Dec 16 13:13:19.145216 kernel: ACPI: \_SB_.LNKD: 
Enabled at IRQ 10 Dec 16 13:13:19.145417 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Dec 16 13:13:19.145448 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Dec 16 13:13:19.145636 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Dec 16 13:13:19.145660 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 16 13:13:19.145680 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 16 13:13:19.145699 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Dec 16 13:13:19.145718 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Dec 16 13:13:19.145737 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Dec 16 13:13:19.145945 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Dec 16 13:13:19.146058 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 16 13:13:19.146077 kernel: i8042: Warning: Keylock active Dec 16 13:13:19.146095 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 16 13:13:19.146113 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 16 13:13:19.146336 kernel: rtc_cmos 00:00: RTC can wake from S4 Dec 16 13:13:19.146533 kernel: rtc_cmos 00:00: registered as rtc0 Dec 16 13:13:19.146714 kernel: rtc_cmos 00:00: setting system clock to 2025-12-16T13:13:18 UTC (1765890798) Dec 16 13:13:19.146894 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Dec 16 13:13:19.146924 kernel: intel_pstate: CPU model not supported Dec 16 13:13:19.146968 kernel: pstore: Using crash dump compression: deflate Dec 16 13:13:19.146989 kernel: pstore: Registered efi_pstore as persistent store backend Dec 16 13:13:19.147008 kernel: NET: Registered PF_INET6 protocol family Dec 16 13:13:19.147027 kernel: Segment Routing with IPv6 Dec 16 13:13:19.147045 kernel: In-situ OAM (IOAM) with IPv6 Dec 16 13:13:19.147063 kernel: NET: Registered PF_PACKET protocol family Dec 16 
13:13:19.147081 kernel: Key type dns_resolver registered Dec 16 13:13:19.147100 kernel: IPI shorthand broadcast: enabled Dec 16 13:13:19.147125 kernel: sched_clock: Marking stable (3889005149, 958818735)->(5186101105, -338277221) Dec 16 13:13:19.147144 kernel: registered taskstats version 1 Dec 16 13:13:19.147163 kernel: Loading compiled-in X.509 certificates Dec 16 13:13:19.147182 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 0d0c78e6590cb40d27f1cef749ef9f2f3425f38d' Dec 16 13:13:19.147200 kernel: Demotion targets for Node 0: null Dec 16 13:13:19.147219 kernel: Key type .fscrypt registered Dec 16 13:13:19.147237 kernel: Key type fscrypt-provisioning registered Dec 16 13:13:19.147256 kernel: ima: Allocated hash algorithm: sha1 Dec 16 13:13:19.147274 kernel: ima: No architecture policies found Dec 16 13:13:19.147297 kernel: clk: Disabling unused clocks Dec 16 13:13:19.147323 kernel: Warning: unable to open an initial console. Dec 16 13:13:19.147342 kernel: Freeing unused kernel image (initmem) memory: 46188K Dec 16 13:13:19.147361 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 Dec 16 13:13:19.147380 kernel: Write protecting the kernel read-only data: 40960k Dec 16 13:13:19.147399 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Dec 16 13:13:19.147418 kernel: Run /init as init process Dec 16 13:13:19.147437 kernel: with arguments: Dec 16 13:13:19.147455 kernel: /init Dec 16 13:13:19.147477 kernel: with environment: Dec 16 13:13:19.147496 kernel: HOME=/ Dec 16 13:13:19.147516 kernel: TERM=linux Dec 16 13:13:19.147537 systemd[1]: Successfully made /usr/ read-only. 
Dec 16 13:13:19.147562 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 13:13:19.147583 systemd[1]: Detected virtualization google. Dec 16 13:13:19.147603 systemd[1]: Detected architecture x86-64. Dec 16 13:13:19.147625 systemd[1]: Running in initrd. Dec 16 13:13:19.147644 systemd[1]: No hostname configured, using default hostname. Dec 16 13:13:19.147665 systemd[1]: Hostname set to . Dec 16 13:13:19.147685 systemd[1]: Initializing machine ID from random generator. Dec 16 13:13:19.147705 systemd[1]: Queued start job for default target initrd.target. Dec 16 13:13:19.147725 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 13:13:19.147764 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 13:13:19.147789 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 16 13:13:19.147810 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 13:13:19.147832 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 16 13:13:19.147854 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 16 13:13:19.147877 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 16 13:13:19.147902 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... 
Dec 16 13:13:19.147923 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 13:13:19.147970 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 13:13:19.147991 systemd[1]: Reached target paths.target - Path Units. Dec 16 13:13:19.148012 systemd[1]: Reached target slices.target - Slice Units. Dec 16 13:13:19.148033 systemd[1]: Reached target swap.target - Swaps. Dec 16 13:13:19.148053 systemd[1]: Reached target timers.target - Timer Units. Dec 16 13:13:19.148074 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 13:13:19.148094 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 13:13:19.148120 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 16 13:13:19.148141 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Dec 16 13:13:19.148162 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 13:13:19.148182 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 13:13:19.148203 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 13:13:19.148224 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 13:13:19.148245 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 16 13:13:19.148266 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 13:13:19.148286 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 16 13:13:19.148320 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Dec 16 13:13:19.148342 systemd[1]: Starting systemd-fsck-usr.service... Dec 16 13:13:19.148362 systemd[1]: Starting systemd-journald.service - Journal Service... 
Dec 16 13:13:19.148383 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 16 13:13:19.148403 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 13:13:19.148424 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 16 13:13:19.148450 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 13:13:19.148471 systemd[1]: Finished systemd-fsck-usr.service.
Dec 16 13:13:19.148531 systemd-journald[191]: Collecting audit messages is disabled.
Dec 16 13:13:19.148581 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 16 13:13:19.148607 systemd-journald[191]: Journal started
Dec 16 13:13:19.148651 systemd-journald[191]: Runtime Journal (/run/log/journal/8c6450258ff24af8922d9debc2b8b9df) is 8M, max 148.6M, 140.6M free.
Dec 16 13:13:19.134989 systemd-modules-load[193]: Inserted module 'overlay'
Dec 16 13:13:19.153946 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 16 13:13:19.161184 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 16 13:13:19.168263 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:13:19.185964 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 16 13:13:19.187515 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 16 13:13:19.195170 kernel: Bridge firewalling registered
Dec 16 13:13:19.189819 systemd-modules-load[193]: Inserted module 'br_netfilter'
Dec 16 13:13:19.193465 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 16 13:13:19.195872 systemd-tmpfiles[206]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Dec 16 13:13:19.203113 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 13:13:19.210224 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 16 13:13:19.216822 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 16 13:13:19.228373 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 16 13:13:19.250128 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 16 13:13:19.255125 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 16 13:13:19.260477 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 16 13:13:19.268720 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 13:13:19.278138 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 16 13:13:19.313915 dracut-cmdline[232]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 16 13:13:19.323805 systemd-resolved[228]: Positive Trust Anchors:
Dec 16 13:13:19.324378 systemd-resolved[228]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 16 13:13:19.324585 systemd-resolved[228]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 16 13:13:19.331154 systemd-resolved[228]: Defaulting to hostname 'linux'.
Dec 16 13:13:19.332994 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 16 13:13:19.344199 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 16 13:13:19.433986 kernel: SCSI subsystem initialized
Dec 16 13:13:19.445971 kernel: Loading iSCSI transport class v2.0-870.
Dec 16 13:13:19.458975 kernel: iscsi: registered transport (tcp)
Dec 16 13:13:19.483974 kernel: iscsi: registered transport (qla4xxx)
Dec 16 13:13:19.484068 kernel: QLogic iSCSI HBA Driver
Dec 16 13:13:19.507775 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 16 13:13:19.528565 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 13:13:19.532898 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 16 13:13:19.595236 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 16 13:13:19.603744 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 16 13:13:19.662979 kernel: raid6: avx2x4 gen() 17894 MB/s
Dec 16 13:13:19.679967 kernel: raid6: avx2x2 gen() 17847 MB/s
Dec 16 13:13:19.697446 kernel: raid6: avx2x1 gen() 13961 MB/s
Dec 16 13:13:19.697508 kernel: raid6: using algorithm avx2x4 gen() 17894 MB/s
Dec 16 13:13:19.715361 kernel: raid6: .... xor() 7540 MB/s, rmw enabled
Dec 16 13:13:19.715419 kernel: raid6: using avx2x2 recovery algorithm
Dec 16 13:13:19.738979 kernel: xor: automatically using best checksumming function avx
Dec 16 13:13:19.923977 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 16 13:13:19.932318 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 16 13:13:19.935753 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 13:13:19.969463 systemd-udevd[440]: Using default interface naming scheme 'v255'.
Dec 16 13:13:19.978513 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 13:13:19.983106 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 16 13:13:20.018454 dracut-pre-trigger[443]: rd.md=0: removing MD RAID activation
Dec 16 13:13:20.052218 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 16 13:13:20.054520 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 16 13:13:20.147385 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 13:13:20.152681 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 16 13:13:20.254674 kernel: virtio_scsi virtio0: 1/0/0 default/read/poll queues
Dec 16 13:13:20.265211 kernel: scsi host0: Virtio SCSI HBA
Dec 16 13:13:20.271970 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6
Dec 16 13:13:20.279993 kernel: cryptd: max_cpu_qlen set to 1000
Dec 16 13:13:20.312433 kernel: AES CTR mode by8 optimization enabled
Dec 16 13:13:20.312507 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Dec 16 13:13:20.378590 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 13:13:20.378825 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:13:20.405407 kernel: sd 0:0:1:0: [sda] 33554432 512-byte logical blocks: (17.2 GB/16.0 GiB)
Dec 16 13:13:20.405720 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Dec 16 13:13:20.403193 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 13:13:20.412514 kernel: sd 0:0:1:0: [sda] Write Protect is off
Dec 16 13:13:20.413586 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Dec 16 13:13:20.413838 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Dec 16 13:13:20.416516 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 13:13:20.422775 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 16 13:13:20.422837 kernel: GPT:17805311 != 33554431
Dec 16 13:13:20.422872 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 16 13:13:20.422895 kernel: GPT:17805311 != 33554431
Dec 16 13:13:20.422917 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 16 13:13:20.422957 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 16 13:13:20.424953 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Dec 16 13:13:20.425573 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Dec 16 13:13:20.464910 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:13:20.526073 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM.
Dec 16 13:13:20.527270 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 16 13:13:20.553562 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT.
Dec 16 13:13:20.567094 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Dec 16 13:13:20.578475 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A.
Dec 16 13:13:20.578750 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A.
Dec 16 13:13:20.584436 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 16 13:13:20.589246 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 13:13:20.594358 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 16 13:13:20.601556 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 16 13:13:20.616543 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 16 13:13:20.632386 disk-uuid[597]: Primary Header is updated.
Dec 16 13:13:20.632386 disk-uuid[597]: Secondary Entries is updated.
Dec 16 13:13:20.632386 disk-uuid[597]: Secondary Header is updated.
Dec 16 13:13:20.649609 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 16 13:13:20.655114 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 16 13:13:21.682908 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 16 13:13:21.683016 disk-uuid[598]: The operation has completed successfully.
Dec 16 13:13:21.756821 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 16 13:13:21.757000 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 16 13:13:21.810482 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 16 13:13:21.830463 sh[619]: Success
Dec 16 13:13:21.854277 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 16 13:13:21.854382 kernel: device-mapper: uevent: version 1.0.3
Dec 16 13:13:21.854412 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Dec 16 13:13:21.867975 kernel: device-mapper: verity: sha256 using shash "sha256-avx2"
Dec 16 13:13:21.951291 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 16 13:13:21.956062 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 16 13:13:21.974909 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 16 13:13:21.993997 kernel: BTRFS: device fsid a6ae7f96-a076-4d3c-81ed-46dd341492f8 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (631)
Dec 16 13:13:21.996553 kernel: BTRFS info (device dm-0): first mount of filesystem a6ae7f96-a076-4d3c-81ed-46dd341492f8
Dec 16 13:13:21.996615 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 16 13:13:22.024776 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Dec 16 13:13:22.024867 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 16 13:13:22.024884 kernel: BTRFS info (device dm-0): enabling free space tree
Dec 16 13:13:22.027845 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 16 13:13:22.031745 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Dec 16 13:13:22.034473 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 16 13:13:22.036645 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 16 13:13:22.045661 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 16 13:13:22.087974 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (664)
Dec 16 13:13:22.091404 kernel: BTRFS info (device sda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 16 13:13:22.091472 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 16 13:13:22.099748 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 16 13:13:22.099823 kernel: BTRFS info (device sda6): turning on async discard
Dec 16 13:13:22.099850 kernel: BTRFS info (device sda6): enabling free space tree
Dec 16 13:13:22.108030 kernel: BTRFS info (device sda6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 16 13:13:22.109243 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 16 13:13:22.115566 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 16 13:13:22.222731 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 16 13:13:22.233090 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 16 13:13:22.369569 systemd-networkd[800]: lo: Link UP
Dec 16 13:13:22.369588 systemd-networkd[800]: lo: Gained carrier
Dec 16 13:13:22.372015 systemd-networkd[800]: Enumeration completed
Dec 16 13:13:22.372166 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 16 13:13:22.372704 systemd-networkd[800]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 13:13:22.372711 systemd-networkd[800]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 16 13:13:22.386815 ignition[721]: Ignition 2.22.0
Dec 16 13:13:22.374949 systemd-networkd[800]: eth0: Link UP
Dec 16 13:13:22.386827 ignition[721]: Stage: fetch-offline
Dec 16 13:13:22.375187 systemd-networkd[800]: eth0: Gained carrier
Dec 16 13:13:22.386879 ignition[721]: no configs at "/usr/lib/ignition/base.d"
Dec 16 13:13:22.375205 systemd-networkd[800]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 13:13:22.386894 ignition[721]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 16 13:13:22.378370 systemd[1]: Reached target network.target - Network.
Dec 16 13:13:22.387100 ignition[721]: parsed url from cmdline: ""
Dec 16 13:13:22.386052 systemd-networkd[800]: eth0: DHCPv4 address 10.128.0.75/32, gateway 10.128.0.1 acquired from 169.254.169.254
Dec 16 13:13:22.387107 ignition[721]: no config URL provided
Dec 16 13:13:22.391486 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 16 13:13:22.387117 ignition[721]: reading system config file "/usr/lib/ignition/user.ign"
Dec 16 13:13:22.397528 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Dec 16 13:13:22.387134 ignition[721]: no config at "/usr/lib/ignition/user.ign"
Dec 16 13:13:22.387147 ignition[721]: failed to fetch config: resource requires networking
Dec 16 13:13:22.387407 ignition[721]: Ignition finished successfully
Dec 16 13:13:22.438811 ignition[810]: Ignition 2.22.0
Dec 16 13:13:22.438820 ignition[810]: Stage: fetch
Dec 16 13:13:22.439073 ignition[810]: no configs at "/usr/lib/ignition/base.d"
Dec 16 13:13:22.439090 ignition[810]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 16 13:13:22.449972 unknown[810]: fetched base config from "system"
Dec 16 13:13:22.439241 ignition[810]: parsed url from cmdline: ""
Dec 16 13:13:22.449987 unknown[810]: fetched base config from "system"
Dec 16 13:13:22.439270 ignition[810]: no config URL provided
Dec 16 13:13:22.450002 unknown[810]: fetched user config from "gcp"
Dec 16 13:13:22.439281 ignition[810]: reading system config file "/usr/lib/ignition/user.ign"
Dec 16 13:13:22.454924 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 16 13:13:22.439296 ignition[810]: no config at "/usr/lib/ignition/user.ign"
Dec 16 13:13:22.459885 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 16 13:13:22.439345 ignition[810]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Dec 16 13:13:22.443349 ignition[810]: GET result: OK
Dec 16 13:13:22.443449 ignition[810]: parsing config with SHA512: 8baaf50796f75a56d6959155e5d18128402c167bd9e115e1ec1d10399912b7caadca789b80dc89864aa23b73ea829e7b5bd42863279b59c1c655805e25e51ae7
Dec 16 13:13:22.451594 ignition[810]: fetch: fetch complete
Dec 16 13:13:22.451605 ignition[810]: fetch: fetch passed
Dec 16 13:13:22.451701 ignition[810]: Ignition finished successfully
Dec 16 13:13:22.512564 ignition[817]: Ignition 2.22.0
Dec 16 13:13:22.512583 ignition[817]: Stage: kargs
Dec 16 13:13:22.512836 ignition[817]: no configs at "/usr/lib/ignition/base.d"
Dec 16 13:13:22.517103 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 16 13:13:22.512855 ignition[817]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 16 13:13:22.522652 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 16 13:13:22.514588 ignition[817]: kargs: kargs passed
Dec 16 13:13:22.514646 ignition[817]: Ignition finished successfully
Dec 16 13:13:22.566017 ignition[824]: Ignition 2.22.0
Dec 16 13:13:22.566035 ignition[824]: Stage: disks
Dec 16 13:13:22.569184 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 16 13:13:22.566255 ignition[824]: no configs at "/usr/lib/ignition/base.d"
Dec 16 13:13:22.573820 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 16 13:13:22.566272 ignition[824]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 16 13:13:22.577095 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 16 13:13:22.567718 ignition[824]: disks: disks passed
Dec 16 13:13:22.581080 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 16 13:13:22.567774 ignition[824]: Ignition finished successfully
Dec 16 13:13:22.585077 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 16 13:13:22.589057 systemd[1]: Reached target basic.target - Basic System.
Dec 16 13:13:22.594558 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 16 13:13:22.635607 systemd-fsck[833]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks
Dec 16 13:13:22.645197 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 16 13:13:22.650152 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 16 13:13:22.819990 kernel: EXT4-fs (sda9): mounted filesystem e48ca59c-1206-4abd-b121-5e9b35e49852 r/w with ordered data mode. Quota mode: none.
Dec 16 13:13:22.820818 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 16 13:13:22.824167 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 16 13:13:22.828191 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 16 13:13:22.846989 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 16 13:13:22.852887 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 16 13:13:22.853601 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 16 13:13:22.866465 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (841)
Dec 16 13:13:22.853870 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 16 13:13:22.873220 kernel: BTRFS info (device sda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 16 13:13:22.873253 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 16 13:13:22.871430 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 16 13:13:22.880960 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 16 13:13:22.881000 kernel: BTRFS info (device sda6): turning on async discard
Dec 16 13:13:22.881022 kernel: BTRFS info (device sda6): enabling free space tree
Dec 16 13:13:22.877397 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 16 13:13:22.886354 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 16 13:13:23.008959 initrd-setup-root[865]: cut: /sysroot/etc/passwd: No such file or directory
Dec 16 13:13:23.017997 initrd-setup-root[872]: cut: /sysroot/etc/group: No such file or directory
Dec 16 13:13:23.024838 initrd-setup-root[879]: cut: /sysroot/etc/shadow: No such file or directory
Dec 16 13:13:23.030960 initrd-setup-root[886]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 16 13:13:23.179523 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 16 13:13:23.182823 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 16 13:13:23.193910 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 16 13:13:23.206171 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 16 13:13:23.209081 kernel: BTRFS info (device sda6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 16 13:13:23.248998 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 16 13:13:23.258120 ignition[953]: INFO : Ignition 2.22.0
Dec 16 13:13:23.258120 ignition[953]: INFO : Stage: mount
Dec 16 13:13:23.261379 ignition[953]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 13:13:23.261379 ignition[953]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 16 13:13:23.261379 ignition[953]: INFO : mount: mount passed
Dec 16 13:13:23.261379 ignition[953]: INFO : Ignition finished successfully
Dec 16 13:13:23.261292 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 16 13:13:23.267172 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 16 13:13:23.292155 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 16 13:13:23.323974 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (966)
Dec 16 13:13:23.326802 kernel: BTRFS info (device sda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 16 13:13:23.326869 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 16 13:13:23.332325 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 16 13:13:23.332393 kernel: BTRFS info (device sda6): turning on async discard
Dec 16 13:13:23.332418 kernel: BTRFS info (device sda6): enabling free space tree
Dec 16 13:13:23.335927 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 16 13:13:23.375533 ignition[983]: INFO : Ignition 2.22.0
Dec 16 13:13:23.375533 ignition[983]: INFO : Stage: files
Dec 16 13:13:23.382080 ignition[983]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 13:13:23.382080 ignition[983]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 16 13:13:23.382080 ignition[983]: DEBUG : files: compiled without relabeling support, skipping
Dec 16 13:13:23.382080 ignition[983]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 16 13:13:23.382080 ignition[983]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 16 13:13:23.395082 ignition[983]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 16 13:13:23.395082 ignition[983]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 16 13:13:23.395082 ignition[983]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 16 13:13:23.395082 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Dec 16 13:13:23.395082 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Dec 16 13:13:23.386524 unknown[983]: wrote ssh authorized keys file for user: core
Dec 16 13:13:23.451127 systemd-networkd[800]: eth0: Gained IPv6LL
Dec 16 13:13:23.802333 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 16 13:13:23.959201 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Dec 16 13:13:23.964094 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Dec 16 13:13:23.964094 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 16 13:13:23.964094 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 16 13:13:23.964094 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 16 13:13:23.964094 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 16 13:13:23.964094 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 16 13:13:23.964094 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 16 13:13:23.964094 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 16 13:13:23.996064 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 16 13:13:23.996064 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 16 13:13:23.996064 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Dec 16 13:13:23.996064 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Dec 16 13:13:23.996064 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Dec 16 13:13:23.996064 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1
Dec 16 13:13:24.383050 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Dec 16 13:13:24.769185 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Dec 16 13:13:24.769185 ignition[983]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Dec 16 13:13:24.777082 ignition[983]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 16 13:13:24.777082 ignition[983]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 16 13:13:24.777082 ignition[983]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Dec 16 13:13:24.777082 ignition[983]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Dec 16 13:13:24.777082 ignition[983]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Dec 16 13:13:24.777082 ignition[983]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 16 13:13:24.777082 ignition[983]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 16 13:13:24.777082 ignition[983]: INFO : files: files passed
Dec 16 13:13:24.777082 ignition[983]: INFO : Ignition finished successfully
Dec 16 13:13:24.780590 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 16 13:13:24.784249 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 16 13:13:24.795812 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 16 13:13:24.818265 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 16 13:13:24.819173 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 16 13:13:24.827088 initrd-setup-root-after-ignition[1012]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 13:13:24.827088 initrd-setup-root-after-ignition[1012]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 13:13:24.838048 initrd-setup-root-after-ignition[1016]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 13:13:24.830840 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 16 13:13:24.833761 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 16 13:13:24.839974 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 16 13:13:24.915682 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 16 13:13:24.916121 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 16 13:13:24.921672 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 16 13:13:24.925271 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 16 13:13:24.929481 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 16 13:13:24.931968 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 16 13:13:24.966189 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 13:13:24.969045 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 16 13:13:24.996840 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 16 13:13:25.001244 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 13:13:25.004272 systemd[1]: Stopped target timers.target - Timer Units.
Dec 16 13:13:25.010414 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 16 13:13:25.010653 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 13:13:25.019378 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 16 13:13:25.022411 systemd[1]: Stopped target basic.target - Basic System.
Dec 16 13:13:25.026670 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 16 13:13:25.030361 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 16 13:13:25.034482 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 16 13:13:25.038570 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Dec 16 13:13:25.042487 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 16 13:13:25.046720 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 16 13:13:25.050719 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 16 13:13:25.055903 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 16 13:13:25.059620 systemd[1]: Stopped target swap.target - Swaps.
Dec 16 13:13:25.063483 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 16 13:13:25.064082 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 16 13:13:25.071336 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 16 13:13:25.074626 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 13:13:25.078408 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 16 13:13:25.078592 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 13:13:25.083463 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 16 13:13:25.083905 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 16 13:13:25.090390 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 16 13:13:25.091171 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 16 13:13:25.093699 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 16 13:13:25.094265 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 16 13:13:25.099479 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 16 13:13:25.105288 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 16 13:13:25.105500 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 13:13:25.117721 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 16 13:13:25.127079 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 16 13:13:25.127357 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 13:13:25.130859 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 16 13:13:25.131392 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 16 13:13:25.147727 ignition[1036]: INFO : Ignition 2.22.0
Dec 16 13:13:25.147727 ignition[1036]: INFO : Stage: umount
Dec 16 13:13:25.152067 ignition[1036]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 13:13:25.152067 ignition[1036]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 16 13:13:25.152067 ignition[1036]: INFO : umount: umount passed
Dec 16 13:13:25.152067 ignition[1036]: INFO : Ignition finished successfully
Dec 16 13:13:25.154401 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 16 13:13:25.154580 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 16 13:13:25.158548 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 16 13:13:25.158713 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 16 13:13:25.163490 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 16 13:13:25.163632 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 16 13:13:25.167186 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 16 13:13:25.167279 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 16 13:13:25.167563 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 16 13:13:25.167750 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 16 13:13:25.172275 systemd[1]: Stopped target network.target - Network.
Dec 16 13:13:25.183230 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 16 13:13:25.183315 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 16 13:13:25.186533 systemd[1]: Stopped target paths.target - Path Units.
Dec 16 13:13:25.190222 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 16 13:13:25.190412 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 13:13:25.194270 systemd[1]: Stopped target slices.target - Slice Units.
Dec 16 13:13:25.198229 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 16 13:13:25.202273 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 16 13:13:25.202482 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 16 13:13:25.206349 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 16 13:13:25.206563 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 16 13:13:25.210271 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 16 13:13:25.210470 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 16 13:13:25.214374 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 16 13:13:25.214465 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 16 13:13:25.218917 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 16 13:13:25.223629 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 16 13:13:25.235681 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 16 13:13:25.236561 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 16 13:13:25.236687 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 16 13:13:25.245057 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Dec 16 13:13:25.245312 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 16 13:13:25.245438 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 16 13:13:25.251011 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Dec 16 13:13:25.251320 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 16 13:13:25.251430 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 16 13:13:25.255969 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Dec 16 13:13:25.265173 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 16 13:13:25.265248 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 13:13:25.271113 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 16 13:13:25.271219 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 16 13:13:25.278414 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 16 13:13:25.286050 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 16 13:13:25.286159 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 16 13:13:25.289125 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 16 13:13:25.289208 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 16 13:13:25.292646 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 16 13:13:25.292731 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 16 13:13:25.296303 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 16 13:13:25.296492 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 13:13:25.300685 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 13:13:25.311183 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 16 13:13:25.311271 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec 16 13:13:25.322560 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 16 13:13:25.322844 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 13:13:25.326219 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 16 13:13:25.326369 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 16 13:13:25.333313 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 16 13:13:25.333436 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 16 13:13:25.336258 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 16 13:13:25.336434 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 13:13:25.339291 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 16 13:13:25.339398 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 16 13:13:25.351144 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 16 13:13:25.351245 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 16 13:13:25.358342 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 16 13:13:25.358544 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 16 13:13:25.365920 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 16 13:13:25.377309 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Dec 16 13:13:25.377406 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 13:13:25.381521 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 16 13:13:25.381700 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 13:13:25.395404 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 13:13:25.395492 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:13:25.471134 systemd-journald[191]: Received SIGTERM from PID 1 (systemd).
Dec 16 13:13:25.402905 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Dec 16 13:13:25.403005 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec 16 13:13:25.403060 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Dec 16 13:13:25.403582 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 16 13:13:25.403692 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 16 13:13:25.408724 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 16 13:13:25.412623 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 16 13:13:25.434383 systemd[1]: Switching root.
Dec 16 13:13:25.495049 systemd-journald[191]: Journal stopped
Dec 16 13:13:27.402073 kernel: SELinux: policy capability network_peer_controls=1
Dec 16 13:13:27.402131 kernel: SELinux: policy capability open_perms=1
Dec 16 13:13:27.402159 kernel: SELinux: policy capability extended_socket_class=1
Dec 16 13:13:27.402178 kernel: SELinux: policy capability always_check_network=0
Dec 16 13:13:27.402196 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 16 13:13:27.402214 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 16 13:13:27.402236 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 16 13:13:27.402255 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 16 13:13:27.402278 kernel: SELinux: policy capability userspace_initial_context=0
Dec 16 13:13:27.402298 kernel: audit: type=1403 audit(1765890806.065:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 16 13:13:27.402321 systemd[1]: Successfully loaded SELinux policy in 69.266ms.
Dec 16 13:13:27.402345 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.352ms.
Dec 16 13:13:27.402368 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 16 13:13:27.402390 systemd[1]: Detected virtualization google.
Dec 16 13:13:27.402417 systemd[1]: Detected architecture x86-64.
Dec 16 13:13:27.402439 systemd[1]: Detected first boot.
Dec 16 13:13:27.402468 systemd[1]: Initializing machine ID from random generator.
Dec 16 13:13:27.402498 zram_generator::config[1078]: No configuration found.
Dec 16 13:13:27.402521 kernel: Guest personality initialized and is inactive
Dec 16 13:13:27.402551 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Dec 16 13:13:27.402581 kernel: Initialized host personality
Dec 16 13:13:27.402600 kernel: NET: Registered PF_VSOCK protocol family
Dec 16 13:13:27.402621 systemd[1]: Populated /etc with preset unit settings.
Dec 16 13:13:27.402643 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Dec 16 13:13:27.402665 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 16 13:13:27.402692 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 16 13:13:27.402713 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 16 13:13:27.402738 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 16 13:13:27.402760 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 16 13:13:27.402789 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 16 13:13:27.402817 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 16 13:13:27.402840 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 16 13:13:27.402863 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 16 13:13:27.402885 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 16 13:13:27.402911 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 16 13:13:27.402951 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 13:13:27.402973 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 13:13:27.402992 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 16 13:13:27.403010 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 16 13:13:27.403031 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 16 13:13:27.403063 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 16 13:13:27.403083 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Dec 16 13:13:27.403103 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 13:13:27.403129 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 16 13:13:27.403152 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 16 13:13:27.403173 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 16 13:13:27.403194 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 16 13:13:27.403216 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 16 13:13:27.403237 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 13:13:27.403259 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 16 13:13:27.403287 systemd[1]: Reached target slices.target - Slice Units.
Dec 16 13:13:27.403310 systemd[1]: Reached target swap.target - Swaps.
Dec 16 13:13:27.403332 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 16 13:13:27.403354 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 16 13:13:27.403398 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Dec 16 13:13:27.403420 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 13:13:27.403445 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 16 13:13:27.403504 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 13:13:27.403527 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 16 13:13:27.403549 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 16 13:13:27.403570 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 16 13:13:27.403595 systemd[1]: Mounting media.mount - External Media Directory...
Dec 16 13:13:27.403617 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:13:27.403644 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 16 13:13:27.403666 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 16 13:13:27.403688 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 16 13:13:27.403711 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 16 13:13:27.403733 systemd[1]: Reached target machines.target - Containers.
Dec 16 13:13:27.403755 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 16 13:13:27.403777 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 13:13:27.403798 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 16 13:13:27.403824 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 16 13:13:27.403854 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 13:13:27.403877 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 16 13:13:27.403899 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 13:13:27.403921 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 16 13:13:27.403962 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 13:13:27.403984 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 16 13:13:27.404005 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 16 13:13:27.404027 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 16 13:13:27.404055 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 16 13:13:27.404077 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 16 13:13:27.404101 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 13:13:27.404124 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 16 13:13:27.404165 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 16 13:13:27.404188 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 16 13:13:27.404209 kernel: loop: module loaded
Dec 16 13:13:27.404231 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 16 13:13:27.404258 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Dec 16 13:13:27.404281 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 16 13:13:27.404303 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 16 13:13:27.404325 systemd[1]: Stopped verity-setup.service.
Dec 16 13:13:27.404348 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:13:27.404371 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 16 13:13:27.404393 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 16 13:13:27.404415 systemd[1]: Mounted media.mount - External Media Directory.
Dec 16 13:13:27.404442 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 16 13:13:27.404534 systemd-journald[1152]: Collecting audit messages is disabled.
Dec 16 13:13:27.404596 kernel: fuse: init (API version 7.41)
Dec 16 13:13:27.404618 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 16 13:13:27.404644 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 16 13:13:27.404674 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 13:13:27.404698 systemd-journald[1152]: Journal started
Dec 16 13:13:27.404740 systemd-journald[1152]: Runtime Journal (/run/log/journal/d65dd18265b148dda48f433a07c11438) is 8M, max 148.6M, 140.6M free.
Dec 16 13:13:26.951366 systemd[1]: Queued start job for default target multi-user.target.
Dec 16 13:13:26.976924 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Dec 16 13:13:26.977644 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 16 13:13:27.418440 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 16 13:13:27.415676 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 16 13:13:27.416975 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 16 13:13:27.422060 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 13:13:27.424174 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 13:13:27.428242 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 13:13:27.431342 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 13:13:27.434878 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 16 13:13:27.435222 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 16 13:13:27.438627 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 13:13:27.440023 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 13:13:27.444036 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 16 13:13:27.448132 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 13:13:27.452525 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 16 13:13:27.486064 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 16 13:13:27.491899 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Dec 16 13:13:27.512892 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 16 13:13:27.520449 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 16 13:13:27.526075 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 16 13:13:27.528086 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 16 13:13:27.528143 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 16 13:13:27.532900 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Dec 16 13:13:27.545239 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 16 13:13:27.548512 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 13:13:27.557291 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 16 13:13:27.563241 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 16 13:13:27.566110 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 16 13:13:27.570115 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 16 13:13:27.574137 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 16 13:13:27.579295 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 16 13:13:27.585811 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 16 13:13:27.594562 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 16 13:13:27.601855 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 16 13:13:27.606293 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 16 13:13:27.646093 kernel: loop0: detected capacity change from 0 to 50736
Dec 16 13:13:27.656867 systemd-journald[1152]: Time spent on flushing to /var/log/journal/d65dd18265b148dda48f433a07c11438 is 159.776ms for 956 entries.
Dec 16 13:13:27.656867 systemd-journald[1152]: System Journal (/var/log/journal/d65dd18265b148dda48f433a07c11438) is 8M, max 584.8M, 576.8M free.
Dec 16 13:13:27.876256 systemd-journald[1152]: Received client request to flush runtime journal.
Dec 16 13:13:27.876370 kernel: ACPI: bus type drm_connector registered
Dec 16 13:13:27.876418 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 16 13:13:27.876455 kernel: loop1: detected capacity change from 0 to 219144
Dec 16 13:13:27.686748 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 16 13:13:27.690796 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 16 13:13:27.698443 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 16 13:13:27.708287 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Dec 16 13:13:27.714927 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 16 13:13:27.715259 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 16 13:13:27.804056 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Dec 16 13:13:27.823655 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 13:13:27.828750 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 16 13:13:27.837892 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 16 13:13:27.882527 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 16 13:13:27.897141 kernel: loop2: detected capacity change from 0 to 110984
Dec 16 13:13:27.923598 systemd-tmpfiles[1214]: ACLs are not supported, ignoring.
Dec 16 13:13:27.924410 systemd-tmpfiles[1214]: ACLs are not supported, ignoring.
Dec 16 13:13:27.936252 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 13:13:27.970012 kernel: loop3: detected capacity change from 0 to 128560
Dec 16 13:13:27.980222 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 16 13:13:28.049980 kernel: loop4: detected capacity change from 0 to 50736
Dec 16 13:13:28.074311 kernel: loop5: detected capacity change from 0 to 219144
Dec 16 13:13:28.113980 kernel: loop6: detected capacity change from 0 to 110984
Dec 16 13:13:28.151012 kernel: loop7: detected capacity change from 0 to 128560
Dec 16 13:13:28.184765 (sd-merge)[1224]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'.
Dec 16 13:13:28.186705 (sd-merge)[1224]: Merged extensions into '/usr'.
Dec 16 13:13:28.199774 systemd[1]: Reload requested from client PID 1199 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 16 13:13:28.199993 systemd[1]: Reloading...
Dec 16 13:13:28.417029 zram_generator::config[1257]: No configuration found.
Dec 16 13:13:28.790811 ldconfig[1194]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 16 13:13:28.915829 systemd[1]: Reloading finished in 714 ms.
Dec 16 13:13:28.937204 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 16 13:13:28.940762 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 16 13:13:28.956807 systemd[1]: Starting ensure-sysext.service...
Dec 16 13:13:28.964183 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 16 13:13:29.004561 systemd[1]: Reload requested from client PID 1290 ('systemctl') (unit ensure-sysext.service)...
Dec 16 13:13:29.004582 systemd[1]: Reloading...
Dec 16 13:13:29.046608 systemd-tmpfiles[1291]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Dec 16 13:13:29.048525 systemd-tmpfiles[1291]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Dec 16 13:13:29.049580 systemd-tmpfiles[1291]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 16 13:13:29.050342 systemd-tmpfiles[1291]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 16 13:13:29.057861 systemd-tmpfiles[1291]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 16 13:13:29.058668 systemd-tmpfiles[1291]: ACLs are not supported, ignoring.
Dec 16 13:13:29.059029 systemd-tmpfiles[1291]: ACLs are not supported, ignoring.
Dec 16 13:13:29.074548 systemd-tmpfiles[1291]: Detected autofs mount point /boot during canonicalization of boot.
Dec 16 13:13:29.075104 systemd-tmpfiles[1291]: Skipping /boot
Dec 16 13:13:29.097561 systemd-tmpfiles[1291]: Detected autofs mount point /boot during canonicalization of boot.
Dec 16 13:13:29.097592 systemd-tmpfiles[1291]: Skipping /boot
Dec 16 13:13:29.178966 zram_generator::config[1325]: No configuration found.
Dec 16 13:13:29.408927 systemd[1]: Reloading finished in 403 ms.
Dec 16 13:13:29.434497 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 16 13:13:29.452259 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 13:13:29.465959 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 16 13:13:29.472642 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 16 13:13:29.478260 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 16 13:13:29.490072 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 16 13:13:29.496189 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 13:13:29.503066 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 16 13:13:29.513357 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:13:29.514112 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 13:13:29.517627 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 13:13:29.529397 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 13:13:29.540991 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 13:13:29.544213 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 13:13:29.545227 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 13:13:29.545425 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:13:29.556390 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 16 13:13:29.561114 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:13:29.561435 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 13:13:29.561712 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 13:13:29.561858 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 13:13:29.562045 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:13:29.575522 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:13:29.576493 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 13:13:29.580471 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 16 13:13:29.586803 systemd-udevd[1365]: Using default interface naming scheme 'v255'.
Dec 16 13:13:29.589070 systemd[1]: Starting setup-oem.service - Setup OEM...
Dec 16 13:13:29.592236 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 13:13:29.593087 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 13:13:29.593428 systemd[1]: Reached target time-set.target - System Time Set.
Dec 16 13:13:29.596263 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:13:29.613845 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 16 13:13:29.621326 systemd[1]: Finished ensure-sysext.service.
Dec 16 13:13:29.625746 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 13:13:29.632271 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 13:13:29.690887 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 13:13:29.692363 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 13:13:29.696598 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 16 13:13:29.697732 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 13:13:29.698947 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 13:13:29.702737 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 16 13:13:29.703686 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 16 13:13:29.709865 systemd[1]: Finished setup-oem.service - Setup OEM.
Dec 16 13:13:29.724263 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login...
Dec 16 13:13:29.727263 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 16 13:13:29.727508 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 13:13:29.732169 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 16 13:13:29.745256 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 16 13:13:29.752033 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 16 13:13:29.755183 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 16 13:13:29.774396 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 16 13:13:29.780242 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 16 13:13:29.811304 augenrules[1434]: No rules
Dec 16 13:13:29.819034 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 16 13:13:29.820751 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 16 13:13:29.835044 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 16 13:13:29.903551 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login.
Dec 16 13:13:29.973227 systemd[1]: Condition check resulted in dev-tpmrm0.device - /dev/tpmrm0 being skipped.
Dec 16 13:13:29.974483 systemd[1]: Reached target tpm2.target - Trusted Platform Module.
Dec 16 13:13:30.025177 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Dec 16 13:13:30.162295 systemd-networkd[1412]: lo: Link UP
Dec 16 13:13:30.162309 systemd-networkd[1412]: lo: Gained carrier
Dec 16 13:13:30.164154 systemd-networkd[1412]: Enumeration completed
Dec 16 13:13:30.164307 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 16 13:13:30.171410 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Dec 16 13:13:30.178128 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 16 13:13:30.215693 systemd-resolved[1364]: Positive Trust Anchors:
Dec 16 13:13:30.215719 systemd-resolved[1364]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 16 13:13:30.215789 systemd-resolved[1364]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 16 13:13:30.223717 systemd-resolved[1364]: Defaulting to hostname 'linux'.
Dec 16 13:13:30.226075 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 16 13:13:30.230196 systemd[1]: Reached target network.target - Network.
Dec 16 13:13:30.233065 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 16 13:13:30.237102 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 16 13:13:30.242193 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 16 13:13:30.246125 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 16 13:13:30.248966 kernel: mousedev: PS/2 mouse device common for all mice
Dec 16 13:13:30.250096 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Dec 16 13:13:30.254382 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 16 13:13:30.258334 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 16 13:13:30.262090 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 16 13:13:30.266104 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 16 13:13:30.266173 systemd[1]: Reached target paths.target - Path Units.
Dec 16 13:13:30.270164 systemd[1]: Reached target timers.target - Timer Units.
Dec 16 13:13:30.276087 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 16 13:13:30.283803 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 16 13:13:30.293480 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Dec 16 13:13:30.298762 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Dec 16 13:13:30.303099 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Dec 16 13:13:30.317863 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 16 13:13:30.335978 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Dec 16 13:13:30.342378 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Dec 16 13:13:30.351959 kernel: ACPI: button: Power Button [PWRF]
Dec 16 13:13:30.360236 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Dec 16 13:13:30.387414 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Dec 16 13:13:30.387873 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5
Dec 16 13:13:30.387883 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 16 13:13:30.396062 kernel: ACPI: button: Sleep Button [SLPF]
Dec 16 13:13:30.409337 systemd-networkd[1412]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 13:13:30.409359 systemd-networkd[1412]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 16 13:13:30.412895 systemd-networkd[1412]: eth0: Link UP
Dec 16 13:13:30.413227 systemd-networkd[1412]: eth0: Gained carrier
Dec 16 13:13:30.413264 systemd-networkd[1412]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 13:13:30.429025 systemd-networkd[1412]: eth0: DHCPv4 address 10.128.0.75/32, gateway 10.128.0.1 acquired from 169.254.169.254
Dec 16 13:13:30.455016 systemd[1]: Reached target sockets.target - Socket Units.
Dec 16 13:13:30.464140 systemd[1]: Reached target basic.target - Basic System.
Dec 16 13:13:30.473198 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 16 13:13:30.473260 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 16 13:13:30.475169 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 16 13:13:30.488910 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Dec 16 13:13:30.500997 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 16 13:13:30.510744 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 16 13:13:30.524151 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 16 13:13:30.534881 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 16 13:13:30.545104 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 16 13:13:30.559911 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Dec 16 13:13:30.567188 kernel: EDAC MC: Ver: 3.0.0
Dec 16 13:13:30.579258 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 16 13:13:30.599322 systemd[1]: Started ntpd.service - Network Time Service.
Dec 16 13:13:30.616246 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Dec 16 13:13:30.629327 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 16 13:13:30.638090 jq[1490]: false
Dec 16 13:13:30.642812 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 16 13:13:30.662279 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 16 13:13:30.665324 google_oslogin_nss_cache[1492]: oslogin_cache_refresh[1492]: Refreshing passwd entry cache
Dec 16 13:13:30.665861 oslogin_cache_refresh[1492]: Refreshing passwd entry cache
Dec 16 13:13:30.673638 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2).
Dec 16 13:13:30.675699 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 16 13:13:30.678271 systemd[1]: Starting update-engine.service - Update Engine...
Dec 16 13:13:30.688127 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 16 13:13:30.701023 extend-filesystems[1491]: Found /dev/sda6
Dec 16 13:13:30.697279 oslogin_cache_refresh[1492]: Failure getting users, quitting
Dec 16 13:13:30.713508 google_oslogin_nss_cache[1492]: oslogin_cache_refresh[1492]: Failure getting users, quitting
Dec 16 13:13:30.713508 google_oslogin_nss_cache[1492]: oslogin_cache_refresh[1492]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Dec 16 13:13:30.713508 google_oslogin_nss_cache[1492]: oslogin_cache_refresh[1492]: Refreshing group entry cache
Dec 16 13:13:30.704692 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 16 13:13:30.697328 oslogin_cache_refresh[1492]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Dec 16 13:13:30.697430 oslogin_cache_refresh[1492]: Refreshing group entry cache
Dec 16 13:13:30.723750 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 16 13:13:30.725031 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 16 13:13:30.728166 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 16 13:13:30.728544 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 16 13:13:30.737646 google_oslogin_nss_cache[1492]: oslogin_cache_refresh[1492]: Failure getting groups, quitting
Dec 16 13:13:30.737646 google_oslogin_nss_cache[1492]: oslogin_cache_refresh[1492]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Dec 16 13:13:30.735152 oslogin_cache_refresh[1492]: Failure getting groups, quitting
Dec 16 13:13:30.735174 oslogin_cache_refresh[1492]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Dec 16 13:13:30.741288 extend-filesystems[1491]: Found /dev/sda9
Dec 16 13:13:30.739772 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Dec 16 13:13:30.763403 extend-filesystems[1491]: Checking size of /dev/sda9
Dec 16 13:13:30.741183 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Dec 16 13:13:30.780560 (ntainerd)[1515]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Dec 16 13:13:30.850643 jq[1507]: true
Dec 16 13:13:30.856391 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 13:13:30.881973 extend-filesystems[1491]: Resized partition /dev/sda9
Dec 16 13:13:30.891164 extend-filesystems[1537]: resize2fs 1.47.3 (8-Jul-2025)
Dec 16 13:13:30.908519 update_engine[1506]: I20251216 13:13:30.907904 1506 main.cc:92] Flatcar Update Engine starting
Dec 16 13:13:30.933059 jq[1533]: true
Dec 16 13:13:30.931896 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Dec 16 13:13:30.967381 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 3587067 blocks
Dec 16 13:13:30.967466 coreos-metadata[1487]: Dec 16 13:13:30.956 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1
Dec 16 13:13:30.967466 coreos-metadata[1487]: Dec 16 13:13:30.964 INFO Fetch successful
Dec 16 13:13:30.967466 coreos-metadata[1487]: Dec 16 13:13:30.964 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1
Dec 16 13:13:30.969957 coreos-metadata[1487]: Dec 16 13:13:30.968 INFO Fetch successful
Dec 16 13:13:30.969957 coreos-metadata[1487]: Dec 16 13:13:30.968 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1
Dec 16 13:13:30.970413 coreos-metadata[1487]: Dec 16 13:13:30.970 INFO Fetch successful
Dec 16 13:13:30.970413 coreos-metadata[1487]: Dec 16 13:13:30.970 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1
Dec 16 13:13:30.972222 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 16 13:13:30.976209 coreos-metadata[1487]: Dec 16 13:13:30.974 INFO Fetch successful
Dec 16 13:13:31.000843 tar[1511]: linux-amd64/LICENSE
Dec 16 13:13:31.000843 tar[1511]: linux-amd64/helm
Dec 16 13:13:31.033231 systemd[1]: motdgen.service: Deactivated successfully.
Dec 16 13:13:31.033610 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 16 13:13:31.058135 ntpd[1498]: ntpd 4.2.8p18@1.4062-o Fri Dec 12 14:44:12 UTC 2025 (1): Starting
Dec 16 13:13:31.059169 ntpd[1498]: 16 Dec 13:13:31 ntpd[1498]: ntpd 4.2.8p18@1.4062-o Fri Dec 12 14:44:12 UTC 2025 (1): Starting
Dec 16 13:13:31.059169 ntpd[1498]: 16 Dec 13:13:31 ntpd[1498]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Dec 16 13:13:31.059169 ntpd[1498]: 16 Dec 13:13:31 ntpd[1498]: ----------------------------------------------------
Dec 16 13:13:31.059169 ntpd[1498]: 16 Dec 13:13:31 ntpd[1498]: ntp-4 is maintained by Network Time Foundation,
Dec 16 13:13:31.059169 ntpd[1498]: 16 Dec 13:13:31 ntpd[1498]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Dec 16 13:13:31.059169 ntpd[1498]: 16 Dec 13:13:31 ntpd[1498]: corporation. Support and training for ntp-4 are
Dec 16 13:13:31.059169 ntpd[1498]: 16 Dec 13:13:31 ntpd[1498]: available at https://www.nwtime.org/support
Dec 16 13:13:31.059169 ntpd[1498]: 16 Dec 13:13:31 ntpd[1498]: ----------------------------------------------------
Dec 16 13:13:31.058231 ntpd[1498]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Dec 16 13:13:31.058247 ntpd[1498]: ----------------------------------------------------
Dec 16 13:13:31.058259 ntpd[1498]: ntp-4 is maintained by Network Time Foundation,
Dec 16 13:13:31.058274 ntpd[1498]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Dec 16 13:13:31.058287 ntpd[1498]: corporation. Support and training for ntp-4 are
Dec 16 13:13:31.058300 ntpd[1498]: available at https://www.nwtime.org/support
Dec 16 13:13:31.058313 ntpd[1498]: ----------------------------------------------------
Dec 16 13:13:31.081859 ntpd[1498]: proto: precision = 0.107 usec (-23)
Dec 16 13:13:31.083823 ntpd[1498]: 16 Dec 13:13:31 ntpd[1498]: proto: precision = 0.107 usec (-23)
Dec 16 13:13:31.087100 ntpd[1498]: basedate set to 2025-11-30
Dec 16 13:13:31.088873 ntpd[1498]: 16 Dec 13:13:31 ntpd[1498]: basedate set to 2025-11-30
Dec 16 13:13:31.088873 ntpd[1498]: 16 Dec 13:13:31 ntpd[1498]: gps base set to 2025-11-30 (week 2395)
Dec 16 13:13:31.088873 ntpd[1498]: 16 Dec 13:13:31 ntpd[1498]: Listen and drop on 0 v6wildcard [::]:123
Dec 16 13:13:31.088873 ntpd[1498]: 16 Dec 13:13:31 ntpd[1498]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Dec 16 13:13:31.088873 ntpd[1498]: 16 Dec 13:13:31 ntpd[1498]: Listen normally on 2 lo 127.0.0.1:123
Dec 16 13:13:31.088873 ntpd[1498]: 16 Dec 13:13:31 ntpd[1498]: Listen normally on 3 eth0 10.128.0.75:123
Dec 16 13:13:31.088873 ntpd[1498]: 16 Dec 13:13:31 ntpd[1498]: Listen normally on 4 lo [::1]:123
Dec 16 13:13:31.088873 ntpd[1498]: 16 Dec 13:13:31 ntpd[1498]: bind(21) AF_INET6 [fe80::4001:aff:fe80:4b%2]:123 flags 0x811 failed: Cannot assign requested address
Dec 16 13:13:31.088873 ntpd[1498]: 16 Dec 13:13:31 ntpd[1498]: unable to create socket on eth0 (5) for [fe80::4001:aff:fe80:4b%2]:123
Dec 16 13:13:31.087149 ntpd[1498]: gps base set to 2025-11-30 (week 2395)
Dec 16 13:13:31.087329 ntpd[1498]: Listen and drop on 0 v6wildcard [::]:123
Dec 16 13:13:31.087369 ntpd[1498]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Dec 16 13:13:31.087621 ntpd[1498]: Listen normally on 2 lo 127.0.0.1:123
Dec 16 13:13:31.087659 ntpd[1498]: Listen normally on 3 eth0 10.128.0.75:123
Dec 16 13:13:31.087700 ntpd[1498]: Listen normally on 4 lo [::1]:123
Dec 16 13:13:31.087740 ntpd[1498]: bind(21) AF_INET6 [fe80::4001:aff:fe80:4b%2]:123 flags 0x811 failed: Cannot assign requested address
Dec 16 13:13:31.087767 ntpd[1498]: unable to create socket on eth0 (5) for [fe80::4001:aff:fe80:4b%2]:123
Dec 16 13:13:31.101884 kernel: ntpd[1498]: segfault at 24 ip 000055e4576ebaeb sp 00007fff8155d710 error 4 in ntpd[68aeb,55e457689000+80000] likely on CPU 0 (core 0, socket 0)
Dec 16 13:13:31.101990 kernel: Code: 0f 1e fa 41 56 41 55 41 54 55 53 48 89 fb e8 8c eb f9 ff 44 8b 28 49 89 c4 e8 51 6b ff ff 48 89 c5 48 85 db 0f 84 a5 00 00 00 <0f> b7 0b 66 83 f9 02 0f 84 c0 00 00 00 66 83 f9 0a 74 32 66 85 c9
Dec 16 13:13:31.154367 systemd-coredump[1554]: Process 1498 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing...
Dec 16 13:13:31.162224 systemd[1]: Created slice system-systemd\x2dcoredump.slice - Slice /system/systemd-coredump.
Dec 16 13:13:31.168382 systemd[1]: Started systemd-coredump@0-1554-0.service - Process Core Dump (PID 1554/UID 0).
Dec 16 13:13:31.176645 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 16 13:13:31.234022 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Dec 16 13:13:31.234569 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Dec 16 13:13:31.266733 dbus-daemon[1488]: [system] SELinux support is enabled
Dec 16 13:13:31.267269 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 16 13:13:31.306346 dbus-daemon[1488]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1412 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Dec 16 13:13:31.331111 kernel: EXT4-fs (sda9): resized filesystem to 3587067
Dec 16 13:13:31.328223 systemd[1]: Started update-engine.service - Update Engine.
Dec 16 13:13:31.334512 dbus-daemon[1488]: [system] Successfully activated service 'org.freedesktop.systemd1'
Dec 16 13:13:31.362226 update_engine[1506]: I20251216 13:13:31.331445 1506 update_check_scheduler.cc:74] Next update check in 3m19s
Dec 16 13:13:31.328817 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 16 13:13:31.328859 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 16 13:13:31.329016 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 16 13:13:31.329048 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 16 13:13:31.366616 bash[1569]: Updated "/home/core/.ssh/authorized_keys"
Dec 16 13:13:31.366854 extend-filesystems[1537]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Dec 16 13:13:31.366854 extend-filesystems[1537]: old_desc_blocks = 1, new_desc_blocks = 2
Dec 16 13:13:31.366854 extend-filesystems[1537]: The filesystem on /dev/sda9 is now 3587067 (4k) blocks long.
Dec 16 13:13:31.367198 extend-filesystems[1491]: Resized filesystem in /dev/sda9
Dec 16 13:13:31.367637 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Dec 16 13:13:31.368314 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 16 13:13:31.369543 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Dec 16 13:13:31.416609 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:13:31.427043 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Dec 16 13:13:31.451464 systemd-networkd[1412]: eth0: Gained IPv6LL
Dec 16 13:13:31.452234 systemd[1]: Starting sshkeys.service...
Dec 16 13:13:31.466338 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Dec 16 13:13:31.476159 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Dec 16 13:13:31.496707 systemd[1]: Reached target network-online.target - Network is Online.
Dec 16 13:13:31.503014 sshd_keygen[1527]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 16 13:13:31.516474 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 13:13:31.530423 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Dec 16 13:13:31.542006 systemd[1]: Starting oem-gce.service - GCE Linux Agent...
Dec 16 13:13:31.564282 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Dec 16 13:13:31.577391 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Dec 16 13:13:31.612691 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Dec 16 13:13:31.652031 systemd[1]: Starting issuegen.service - Generate /run/issue...
Dec 16 13:13:31.718967 init.sh[1593]: + '[' -e /etc/default/instance_configs.cfg.template ']'
Dec 16 13:13:31.718967 init.sh[1593]: + echo -e '[InstanceSetup]\nset_host_keys = false'
Dec 16 13:13:31.724835 init.sh[1593]: + /usr/bin/google_instance_setup
Dec 16 13:13:31.731272 coreos-metadata[1597]: Dec 16 13:13:31.727 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1
Dec 16 13:13:31.738548 coreos-metadata[1597]: Dec 16 13:13:31.737 INFO Fetch failed with 404: resource not found
Dec 16 13:13:31.738548 coreos-metadata[1597]: Dec 16 13:13:31.737 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1
Dec 16 13:13:31.738548 coreos-metadata[1597]: Dec 16 13:13:31.737 INFO Fetch successful
Dec 16 13:13:31.738548 coreos-metadata[1597]: Dec 16 13:13:31.737 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1
Dec 16 13:13:31.738548 coreos-metadata[1597]: Dec 16 13:13:31.737 INFO Fetch failed with 404: resource not found
Dec 16 13:13:31.738548 coreos-metadata[1597]: Dec 16 13:13:31.737 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1
Dec 16 13:13:31.738548 coreos-metadata[1597]: Dec 16 13:13:31.737 INFO Fetch failed with 404: resource not found
Dec 16 13:13:31.738548 coreos-metadata[1597]: Dec 16 13:13:31.737 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1
Dec 16 13:13:31.738548 coreos-metadata[1597]: Dec 16 13:13:31.737 INFO Fetch successful
Dec 16 13:13:31.739419 unknown[1597]: wrote ssh authorized keys file for user: core
Dec 16 13:13:31.746135 systemd[1]: issuegen.service: Deactivated successfully.
Dec 16 13:13:31.747690 systemd[1]: Finished issuegen.service - Generate /run/issue.
Dec 16 13:13:31.757508 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Dec 16 13:13:31.777012 locksmithd[1573]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 16 13:13:31.785691 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Dec 16 13:13:31.867866 systemd-logind[1505]: Watching system buttons on /dev/input/event2 (Power Button)
Dec 16 13:13:31.873775 update-ssh-keys[1617]: Updated "/home/core/.ssh/authorized_keys"
Dec 16 13:13:31.876574 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Dec 16 13:13:31.877413 systemd-logind[1505]: Watching system buttons on /dev/input/event3 (Sleep Button)
Dec 16 13:13:31.877448 systemd-logind[1505]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 16 13:13:31.877748 systemd-logind[1505]: New seat seat0.
Dec 16 13:13:31.887596 systemd[1]: Started systemd-logind.service - User Login Management.
Dec 16 13:13:31.896891 systemd[1]: Finished sshkeys.service.
Dec 16 13:13:31.927186 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Dec 16 13:13:31.945333 systemd[1]: Started getty@tty1.service - Getty on tty1.
Dec 16 13:13:31.958033 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Dec 16 13:13:31.967974 systemd[1]: Reached target getty.target - Login Prompts.
Dec 16 13:13:32.082632 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Dec 16 13:13:32.086578 dbus-daemon[1488]: [system] Successfully activated service 'org.freedesktop.hostname1'
Dec 16 13:13:32.090196 dbus-daemon[1488]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1581 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Dec 16 13:13:32.097696 systemd-coredump[1561]: Process 1498 (ntpd) of user 0 dumped core.
Module libnss_usrfiles.so.2 without build-id.
Module libgcc_s.so.1 without build-id.
Module ld-linux-x86-64.so.2 without build-id.
Module libc.so.6 without build-id.
Module libcrypto.so.3 without build-id.
Module libm.so.6 without build-id.
Module libcap.so.2 without build-id.
Module ntpd without build-id.
Stack trace of thread 1498:
#0  0x000055e4576ebaeb n/a (ntpd + 0x68aeb)
#1  0x000055e457694cdf n/a (ntpd + 0x11cdf)
#2  0x000055e457695575 n/a (ntpd + 0x12575)
#3  0x000055e457690d8a n/a (ntpd + 0xdd8a)
#4  0x000055e4576925d3 n/a (ntpd + 0xf5d3)
#5  0x000055e45769afd1 n/a (ntpd + 0x17fd1)
#6  0x000055e45768bc2d n/a (ntpd + 0x8c2d)
#7  0x00007f41a18e216c n/a (libc.so.6 + 0x2716c)
#8  0x00007f41a18e2229 __libc_start_main (libc.so.6 + 0x27229)
#9  0x000055e45768bc55 n/a (ntpd + 0x8c55)
ELF object binary architecture: AMD x86-64
Dec 16 13:13:32.103928 systemd[1]: Starting polkit.service - Authorization Manager...
Dec 16 13:13:32.111988 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV
Dec 16 13:13:32.112260 systemd[1]: ntpd.service: Failed with result 'core-dump'.
Dec 16 13:13:32.117657 systemd[1]: systemd-coredump@0-1554-0.service: Deactivated successfully.
Dec 16 13:13:32.134786 containerd[1515]: time="2025-12-16T13:13:32Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Dec 16 13:13:32.142070 containerd[1515]: time="2025-12-16T13:13:32.136803684Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Dec 16 13:13:32.170003 containerd[1515]: time="2025-12-16T13:13:32.168220084Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="13.574µs"
Dec 16 13:13:32.170003 containerd[1515]: time="2025-12-16T13:13:32.168266880Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Dec 16 13:13:32.170003 containerd[1515]: time="2025-12-16T13:13:32.168294757Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Dec 16 13:13:32.170003 containerd[1515]: time="2025-12-16T13:13:32.168505004Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Dec 16 13:13:32.170003 containerd[1515]: time="2025-12-16T13:13:32.168527298Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Dec 16 13:13:32.170003 containerd[1515]: time="2025-12-16T13:13:32.168565541Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Dec 16 13:13:32.170003 containerd[1515]: time="2025-12-16T13:13:32.168684642Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Dec 16 13:13:32.170003 containerd[1515]: time="2025-12-16T13:13:32.168704875Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Dec 16 13:13:32.170003 containerd[1515]: time="2025-12-16T13:13:32.169102234Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Dec 16 13:13:32.170003 containerd[1515]: time="2025-12-16T13:13:32.169129812Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Dec 16 13:13:32.170003 containerd[1515]: time="2025-12-16T13:13:32.169150951Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Dec 16 13:13:32.170003 containerd[1515]: time="2025-12-16T13:13:32.169165564Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Dec 16 13:13:32.170625 containerd[1515]: time="2025-12-16T13:13:32.169285869Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Dec 16 13:13:32.170625 containerd[1515]: time="2025-12-16T13:13:32.169648836Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Dec 16 13:13:32.170625 containerd[1515]: time="2025-12-16T13:13:32.169700706Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Dec 16 13:13:32.170625 containerd[1515]: time="2025-12-16T13:13:32.169727148Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Dec 16 13:13:32.170625 containerd[1515]: time="2025-12-16T13:13:32.169800939Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Dec 16 13:13:32.170625 containerd[1515]: time="2025-12-16T13:13:32.170330827Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Dec 16 13:13:32.170625 containerd[1515]: time="2025-12-16T13:13:32.170429694Z" level=info msg="metadata content store policy set" policy=shared
Dec 16 13:13:32.185531 containerd[1515]: time="2025-12-16T13:13:32.185478269Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Dec 16 13:13:32.189140 containerd[1515]: time="2025-12-16T13:13:32.188855353Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Dec 16 13:13:32.189140 containerd[1515]: time="2025-12-16T13:13:32.188926435Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Dec 16 13:13:32.189140 containerd[1515]: time="2025-12-16T13:13:32.189005250Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Dec 16 13:13:32.189140 containerd[1515]: time="2025-12-16T13:13:32.189068231Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Dec 16 13:13:32.189140 containerd[1515]: time="2025-12-16T13:13:32.189090595Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Dec 16 13:13:32.191687 containerd[1515]: time="2025-12-16T13:13:32.189111893Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Dec 16 13:13:32.191687 containerd[1515]: time="2025-12-16T13:13:32.191031245Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Dec 16 13:13:32.191687 containerd[1515]: time="2025-12-16T13:13:32.191061853Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Dec 16 13:13:32.191687 containerd[1515]: time="2025-12-16T13:13:32.191099535Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Dec 16 13:13:32.191687 containerd[1515]: time="2025-12-16T13:13:32.191122012Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Dec 16 13:13:32.191687 containerd[1515]: time="2025-12-16T13:13:32.191144634Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Dec 16 13:13:32.192961 containerd[1515]: time="2025-12-16T13:13:32.192803407Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Dec 16 13:13:32.192961 containerd[1515]: time="2025-12-16T13:13:32.192875856Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Dec 16 13:13:32.193350 containerd[1515]: time="2025-12-16T13:13:32.192923422Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Dec 16 13:13:32.193350 containerd[1515]: time="2025-12-16T13:13:32.193292093Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Dec 16 13:13:32.193350 containerd[1515]: time="2025-12-16T13:13:32.193319366Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Dec 16 13:13:32.193699 containerd[1515]: time="2025-12-16T13:13:32.193537271Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Dec 16 13:13:32.193699 containerd[1515]: time="2025-12-16T13:13:32.193648422Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Dec 16 13:13:32.193699 containerd[1515]: time="2025-12-16T13:13:32.193671982Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Dec 16 13:13:32.195309 containerd[1515]: time="2025-12-16T13:13:32.195070785Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Dec 16 13:13:32.195309 containerd[1515]: time="2025-12-16T13:13:32.195112442Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Dec 16 13:13:32.195309 containerd[1515]: time="2025-12-16T13:13:32.195159316Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Dec 16 13:13:32.195309 containerd[1515]: time="2025-12-16T13:13:32.195252692Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Dec 16 13:13:32.195309 containerd[1515]: time="2025-12-16T13:13:32.195273978Z" level=info msg="Start snapshots syncer"
Dec 16 13:13:32.195635 containerd[1515]: time="2025-12-16T13:13:32.195610023Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Dec 16 13:13:32.197244 containerd[1515]: time="2025-12-16T13:13:32.196471550Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Dec 16 13:13:32.197244 containerd[1515]: time="2025-12-16T13:13:32.196563935Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Dec 16 13:13:32.198268 containerd[1515]: time="2025-12-16T13:13:32.198235094Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Dec 16 13:13:32.198758 containerd[1515]: time="2025-12-16T13:13:32.198719345Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Dec 16 13:13:32.198902 containerd[1515]: time="2025-12-16T13:13:32.198878503Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Dec 16 13:13:32.199087 containerd[1515]: time="2025-12-16T13:13:32.199054597Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Dec 16 13:13:32.199311 containerd[1515]: time="2025-12-16T13:13:32.199244918Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Dec 16 13:13:32.199641 containerd[1515]: time="2025-12-16T13:13:32.199460967Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Dec 16 13:13:32.199641 containerd[1515]: time="2025-12-16T13:13:32.199609249Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Dec 16 13:13:32.199853 containerd[1515]: time="2025-12-16T13:13:32.199791800Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Dec 16 13:13:32.200009 containerd[1515]: time="2025-12-16T13:13:32.199927159Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Dec 16 13:13:32.200169 containerd[1515]: time="2025-12-16T13:13:32.199983298Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Dec 16 13:13:32.200169 containerd[1515]: time="2025-12-16T13:13:32.200113716Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Dec 16 13:13:32.200353 containerd[1515]: time="2025-12-16T13:13:32.200329922Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 16 13:13:32.200611 containerd[1515]: time="2025-12-16T13:13:32.200554024Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 16 13:13:32.200611 containerd[1515]: time="2025-12-16T13:13:32.200579770Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 16 13:13:32.200917 containerd[1515]: time="2025-12-16T13:13:32.200724150Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 16 13:13:32.200917 containerd[1515]: time="2025-12-16T13:13:32.200844443Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Dec 16 13:13:32.200917 containerd[1515]: time="2025-12-16T13:13:32.200871366Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Dec 16 13:13:32.202341 containerd[1515]: time="2025-12-16T13:13:32.201162829Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Dec 16 13:13:32.202873 containerd[1515]: time="2025-12-16T13:13:32.202482109Z" level=info msg="runtime interface created"
Dec 16 13:13:32.202873 containerd[1515]: time="2025-12-16T13:13:32.202504981Z" level=info msg="created NRI interface"
Dec 16 13:13:32.202873 containerd[1515]: time="2025-12-16T13:13:32.202546482Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Dec 16 13:13:32.202873 containerd[1515]: time="2025-12-16T13:13:32.202576544Z" level=info msg="Connect containerd service"
Dec 16 13:13:32.202873 containerd[1515]: time="2025-12-16T13:13:32.202648853Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Dec 16 13:13:32.205710 containerd[1515]: time="2025-12-16T13:13:32.205671131Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 16 13:13:32.463899 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 1.
Dec 16 13:13:32.471978 systemd[1]: Started ntpd.service - Network Time Service.
Dec 16 13:13:32.531059 ntpd[1646]: ntpd 4.2.8p18@1.4062-o Fri Dec 12 14:44:12 UTC 2025 (1): Starting
Dec 16 13:13:32.531984 ntpd[1646]: 16 Dec 13:13:32 ntpd[1646]: ntpd 4.2.8p18@1.4062-o Fri Dec 12 14:44:12 UTC 2025 (1): Starting
Dec 16 13:13:32.531984 ntpd[1646]: 16 Dec 13:13:32 ntpd[1646]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Dec 16 13:13:32.531984 ntpd[1646]: 16 Dec 13:13:32 ntpd[1646]: ----------------------------------------------------
Dec 16 13:13:32.531984 ntpd[1646]: 16 Dec 13:13:32 ntpd[1646]: ntp-4 is maintained by Network Time Foundation,
Dec 16 13:13:32.531984 ntpd[1646]: 16 Dec 13:13:32 ntpd[1646]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Dec 16 13:13:32.531984 ntpd[1646]: 16 Dec 13:13:32 ntpd[1646]: corporation. Support and training for ntp-4 are
Dec 16 13:13:32.531984 ntpd[1646]: 16 Dec 13:13:32 ntpd[1646]: available at https://www.nwtime.org/support
Dec 16 13:13:32.531984 ntpd[1646]: 16 Dec 13:13:32 ntpd[1646]: ----------------------------------------------------
Dec 16 13:13:32.531163 ntpd[1646]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Dec 16 13:13:32.532879 ntpd[1646]: 16 Dec 13:13:32 ntpd[1646]: proto: precision = 0.099 usec (-23)
Dec 16 13:13:32.532879 ntpd[1646]: 16 Dec 13:13:32 ntpd[1646]: basedate set to 2025-11-30
Dec 16 13:13:32.532879 ntpd[1646]: 16 Dec 13:13:32 ntpd[1646]: gps base set to 2025-11-30 (week 2395)
Dec 16 13:13:32.532879 ntpd[1646]: 16 Dec 13:13:32 ntpd[1646]: Listen and drop on 0 v6wildcard [::]:123
Dec 16 13:13:32.532879 ntpd[1646]: 16 Dec 13:13:32 ntpd[1646]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Dec 16 13:13:32.531179 ntpd[1646]: ----------------------------------------------------
Dec 16 13:13:32.539483 ntpd[1646]: 16 Dec 13:13:32 ntpd[1646]: Listen normally on 2 lo 127.0.0.1:123
Dec 16 13:13:32.539483 ntpd[1646]: 16 Dec 13:13:32 ntpd[1646]: Listen normally on 3 eth0 10.128.0.75:123
Dec 16 13:13:32.539483 ntpd[1646]: 16 Dec 13:13:32 ntpd[1646]: Listen normally on 4 lo [::1]:123
Dec 16 13:13:32.539483 ntpd[1646]: 16 Dec 13:13:32 ntpd[1646]: Listen normally on 5 eth0 [fe80::4001:aff:fe80:4b%2]:123
Dec 16 13:13:32.539483 ntpd[1646]: 16 Dec 13:13:32 ntpd[1646]: Listening on routing socket on fd #22 for interface updates
Dec 16 13:13:32.539483 ntpd[1646]: 16 Dec 13:13:32 ntpd[1646]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Dec 16 13:13:32.539483 ntpd[1646]: 16 Dec 13:13:32 ntpd[1646]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Dec 16 13:13:32.531192 ntpd[1646]: ntp-4 is maintained by Network Time Foundation,
Dec 16 13:13:32.531206 ntpd[1646]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Dec 16 13:13:32.531218 ntpd[1646]: corporation. Support and training for ntp-4 are
Dec 16 13:13:32.531231 ntpd[1646]: available at https://www.nwtime.org/support
Dec 16 13:13:32.531250 ntpd[1646]: ----------------------------------------------------
Dec 16 13:13:32.532307 ntpd[1646]: proto: precision = 0.099 usec (-23)
Dec 16 13:13:32.532668 ntpd[1646]: basedate set to 2025-11-30
Dec 16 13:13:32.532689 ntpd[1646]: gps base set to 2025-11-30 (week 2395)
Dec 16 13:13:32.532819 ntpd[1646]: Listen and drop on 0 v6wildcard [::]:123
Dec 16 13:13:32.532858 ntpd[1646]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Dec 16 13:13:32.533184 ntpd[1646]: Listen normally on 2 lo 127.0.0.1:123
Dec 16 13:13:32.533228 ntpd[1646]: Listen normally on 3 eth0 10.128.0.75:123
Dec 16 13:13:32.533268 ntpd[1646]: Listen normally on 4 lo [::1]:123
Dec 16 13:13:32.533306 ntpd[1646]: Listen normally on 5 eth0 [fe80::4001:aff:fe80:4b%2]:123
Dec 16 13:13:32.533343 ntpd[1646]: Listening on routing socket on fd #22 for interface updates
Dec 16 13:13:32.535882 ntpd[1646]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Dec 16 13:13:32.535962 ntpd[1646]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Dec 16 13:13:32.542327 polkitd[1629]: Started polkitd version 126
Dec 16 13:13:32.568404 polkitd[1629]: Loading rules from directory /etc/polkit-1/rules.d
Dec 16 13:13:32.574263 polkitd[1629]: Loading rules from directory /run/polkit-1/rules.d
Dec 16 13:13:32.574357 polkitd[1629]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Dec 16 13:13:32.574950 polkitd[1629]: Loading rules from directory /usr/local/share/polkit-1/rules.d
Dec 16 13:13:32.575005 polkitd[1629]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Dec 16 13:13:32.575085 polkitd[1629]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 16 13:13:32.580398 polkitd[1629]: Finished loading, compiling and executing 2 rules
Dec 16 13:13:32.582482 systemd[1]: Started polkit.service - Authorization Manager.
Dec 16 13:13:32.584627 dbus-daemon[1488]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Dec 16 13:13:32.585361 polkitd[1629]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Dec 16 13:13:32.639194 systemd-hostnamed[1581]: Hostname set to (transient)
Dec 16 13:13:32.640902 systemd-resolved[1364]: System hostname changed to 'ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal'.
Dec 16 13:13:32.651739 containerd[1515]: time="2025-12-16T13:13:32.650911557Z" level=info msg="Start subscribing containerd event"
Dec 16 13:13:32.651739 containerd[1515]: time="2025-12-16T13:13:32.651048163Z" level=info msg="Start recovering state"
Dec 16 13:13:32.651739 containerd[1515]: time="2025-12-16T13:13:32.651240543Z" level=info msg="Start event monitor"
Dec 16 13:13:32.651739 containerd[1515]: time="2025-12-16T13:13:32.651260085Z" level=info msg="Start cni network conf syncer for default"
Dec 16 13:13:32.651739 containerd[1515]: time="2025-12-16T13:13:32.651270892Z" level=info msg="Start streaming server"
Dec 16 13:13:32.651739 containerd[1515]: time="2025-12-16T13:13:32.651291468Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Dec 16 13:13:32.651739 containerd[1515]: time="2025-12-16T13:13:32.651302854Z" level=info msg="runtime interface starting up..."
Dec 16 13:13:32.651739 containerd[1515]: time="2025-12-16T13:13:32.651315127Z" level=info msg="starting plugins..."
Dec 16 13:13:32.651739 containerd[1515]: time="2025-12-16T13:13:32.651334836Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Dec 16 13:13:32.663652 containerd[1515]: time="2025-12-16T13:13:32.663578298Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 16 13:13:32.670953 containerd[1515]: time="2025-12-16T13:13:32.670814141Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 16 13:13:32.671311 systemd[1]: Started containerd.service - containerd container runtime.
Dec 16 13:13:32.679088 containerd[1515]: time="2025-12-16T13:13:32.679039528Z" level=info msg="containerd successfully booted in 0.549452s"
Dec 16 13:13:32.796379 tar[1511]: linux-amd64/README.md
Dec 16 13:13:32.821268 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Dec 16 13:13:32.993050 instance-setup[1607]: INFO Running google_set_multiqueue.
Dec 16 13:13:33.014645 instance-setup[1607]: INFO Set channels for eth0 to 2.
Dec 16 13:13:33.020926 instance-setup[1607]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1.
Dec 16 13:13:33.023296 instance-setup[1607]: INFO /proc/irq/31/smp_affinity_list: real affinity 0
Dec 16 13:13:33.024087 instance-setup[1607]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1.
Dec 16 13:13:33.025986 instance-setup[1607]: INFO /proc/irq/32/smp_affinity_list: real affinity 0
Dec 16 13:13:33.026333 instance-setup[1607]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1.
Dec 16 13:13:33.028784 instance-setup[1607]: INFO /proc/irq/33/smp_affinity_list: real affinity 1
Dec 16 13:13:33.028847 instance-setup[1607]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1.
Dec 16 13:13:33.030695 instance-setup[1607]: INFO /proc/irq/34/smp_affinity_list: real affinity 1
Dec 16 13:13:33.040049 instance-setup[1607]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type
Dec 16 13:13:33.044530 instance-setup[1607]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type
Dec 16 13:13:33.047426 instance-setup[1607]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus
Dec 16 13:13:33.047838 instance-setup[1607]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus
Dec 16 13:13:33.069363 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Dec 16 13:13:33.074901 init.sh[1593]: + /usr/bin/google_metadata_script_runner --script-type startup
Dec 16 13:13:33.083684 systemd[1]: Started sshd@0-10.128.0.75:22-139.178.68.195:39452.service - OpenSSH per-connection server daemon (139.178.68.195:39452).
Dec 16 13:13:33.284976 startup-script[1695]: INFO Starting startup scripts.
Dec 16 13:13:33.291001 startup-script[1695]: INFO No startup scripts found in metadata.
Dec 16 13:13:33.291092 startup-script[1695]: INFO Finished running startup scripts.
Dec 16 13:13:33.315116 init.sh[1593]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM
Dec 16 13:13:33.315116 init.sh[1593]: + daemon_pids=()
Dec 16 13:13:33.315116 init.sh[1593]: + for d in accounts clock_skew network
Dec 16 13:13:33.315116 init.sh[1593]: + daemon_pids+=($!)
Dec 16 13:13:33.315116 init.sh[1593]: + for d in accounts clock_skew network
Dec 16 13:13:33.315445 init.sh[1701]: + /usr/bin/google_accounts_daemon
Dec 16 13:13:33.316076 init.sh[1593]: + daemon_pids+=($!)
Dec 16 13:13:33.316744 init.sh[1702]: + /usr/bin/google_clock_skew_daemon
Dec 16 13:13:33.317197 init.sh[1593]: + for d in accounts clock_skew network
Dec 16 13:13:33.317580 init.sh[1593]: + daemon_pids+=($!)
Dec 16 13:13:33.318069 init.sh[1593]: + NOTIFY_SOCKET=/run/systemd/notify
Dec 16 13:13:33.318359 init.sh[1703]: + /usr/bin/google_network_daemon
Dec 16 13:13:33.319238 init.sh[1593]: + /usr/bin/systemd-notify --ready
Dec 16 13:13:33.356488 systemd[1]: Started oem-gce.service - GCE Linux Agent.
Dec 16 13:13:33.368574 init.sh[1593]: + wait -n 1701 1702 1703
Dec 16 13:13:33.480441 sshd[1696]: Accepted publickey for core from 139.178.68.195 port 39452 ssh2: RSA SHA256:v7tIGVPgiyL/ANNw+AyFi/zSKm4wKHk/c+elSzrxSj8
Dec 16 13:13:33.484314 sshd-session[1696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:13:33.498652 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Dec 16 13:13:33.511284 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Dec 16 13:13:33.563897 systemd-logind[1505]: New session 1 of user core.
Dec 16 13:13:33.578397 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Dec 16 13:13:33.599369 systemd[1]: Starting user@500.service - User Manager for UID 500...
Dec 16 13:13:33.651421 (systemd)[1707]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 16 13:13:33.657063 systemd-logind[1505]: New session c1 of user core.
Dec 16 13:13:33.849946 google-clock-skew[1702]: INFO Starting Google Clock Skew daemon.
Dec 16 13:13:33.863918 google-clock-skew[1702]: INFO Clock drift token has changed: 0.
Dec 16 13:13:33.971656 google-networking[1703]: INFO Starting Google Networking daemon.
Dec 16 13:13:34.052471 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 13:13:34.057705 groupadd[1723]: group added to /etc/group: name=google-sudoers, GID=1000
Dec 16 13:13:34.064926 systemd[1]: Reached target multi-user.target - Multi-User System.
Dec 16 13:13:34.066760 groupadd[1723]: group added to /etc/gshadow: name=google-sudoers
Dec 16 13:13:34.069188 systemd[1707]: Queued start job for default target default.target.
Dec 16 13:13:34.075739 systemd[1707]: Created slice app.slice - User Application Slice.
Dec 16 13:13:34.075786 systemd[1707]: Reached target paths.target - Paths.
Dec 16 13:13:34.075860 systemd[1707]: Reached target timers.target - Timers.
Dec 16 13:13:34.077328 (kubelet)[1727]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 13:13:34.079584 systemd[1707]: Starting dbus.socket - D-Bus User Message Bus Socket...
Dec 16 13:13:34.106268 systemd[1707]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Dec 16 13:13:34.106522 systemd[1707]: Reached target sockets.target - Sockets.
Dec 16 13:13:34.106736 systemd[1707]: Reached target basic.target - Basic System.
Dec 16 13:13:34.106903 systemd[1]: Started user@500.service - User Manager for UID 500.
Dec 16 13:13:34.107299 systemd[1707]: Reached target default.target - Main User Target.
Dec 16 13:13:34.107356 systemd[1707]: Startup finished in 428ms.
Dec 16 13:13:34.126151 systemd[1]: Started session-1.scope - Session 1 of User core.
Dec 16 13:13:34.135045 groupadd[1723]: new group: name=google-sudoers, GID=1000
Dec 16 13:13:34.136861 systemd[1]: Startup finished in 4.060s (kernel) + 7.266s (initrd) + 8.137s (userspace) = 19.464s.
Dec 16 13:13:34.191639 google-accounts[1701]: INFO Starting Google Accounts daemon.
Dec 16 13:13:34.204883 google-accounts[1701]: WARNING OS Login not installed.
Dec 16 13:13:34.207157 google-accounts[1701]: INFO Creating a new user account for 0.
Dec 16 13:13:34.212079 init.sh[1742]: useradd: invalid user name '0': use --badname to ignore
Dec 16 13:13:34.212872 google-accounts[1701]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3..
Dec 16 13:13:34.000519 google-clock-skew[1702]: INFO Synced system time with hardware clock.
Dec 16 13:13:34.016219 systemd-journald[1152]: Time jumped backwards, rotating.
Dec 16 13:13:34.001705 systemd-resolved[1364]: Clock change detected. Flushing caches.
Dec 16 13:13:34.138266 systemd[1]: Started sshd@1-10.128.0.75:22-139.178.68.195:39456.service - OpenSSH per-connection server daemon (139.178.68.195:39456).
Dec 16 13:13:34.458722 sshd[1751]: Accepted publickey for core from 139.178.68.195 port 39456 ssh2: RSA SHA256:v7tIGVPgiyL/ANNw+AyFi/zSKm4wKHk/c+elSzrxSj8
Dec 16 13:13:34.460040 sshd-session[1751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:13:34.469300 systemd-logind[1505]: New session 2 of user core.
Dec 16 13:13:34.474111 systemd[1]: Started session-2.scope - Session 2 of User core.
Dec 16 13:13:34.681976 sshd[1754]: Connection closed by 139.178.68.195 port 39456
Dec 16 13:13:34.684260 sshd-session[1751]: pam_unix(sshd:session): session closed for user core
Dec 16 13:13:34.692773 systemd[1]: sshd@1-10.128.0.75:22-139.178.68.195:39456.service: Deactivated successfully.
Dec 16 13:13:34.695604 systemd[1]: session-2.scope: Deactivated successfully.
Dec 16 13:13:34.698826 systemd-logind[1505]: Session 2 logged out. Waiting for processes to exit.
Dec 16 13:13:34.701472 systemd-logind[1505]: Removed session 2.
Dec 16 13:13:34.702590 kubelet[1727]: E1216 13:13:34.702134 1727 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 13:13:34.704849 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 13:13:34.705120 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 13:13:34.705684 systemd[1]: kubelet.service: Consumed 1.236s CPU time, 258.5M memory peak.
Dec 16 13:13:34.733891 systemd[1]: Started sshd@2-10.128.0.75:22-139.178.68.195:39472.service - OpenSSH per-connection server daemon (139.178.68.195:39472).
Dec 16 13:13:35.055772 sshd[1763]: Accepted publickey for core from 139.178.68.195 port 39472 ssh2: RSA SHA256:v7tIGVPgiyL/ANNw+AyFi/zSKm4wKHk/c+elSzrxSj8
Dec 16 13:13:35.057515 sshd-session[1763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:13:35.064981 systemd-logind[1505]: New session 3 of user core.
Dec 16 13:13:35.076217 systemd[1]: Started session-3.scope - Session 3 of User core.
Dec 16 13:13:35.265617 sshd[1766]: Connection closed by 139.178.68.195 port 39472
Dec 16 13:13:35.266438 sshd-session[1763]: pam_unix(sshd:session): session closed for user core
Dec 16 13:13:35.273161 systemd[1]: sshd@2-10.128.0.75:22-139.178.68.195:39472.service: Deactivated successfully.
Dec 16 13:13:35.275484 systemd[1]: session-3.scope: Deactivated successfully.
Dec 16 13:13:35.276645 systemd-logind[1505]: Session 3 logged out. Waiting for processes to exit.
Dec 16 13:13:35.278819 systemd-logind[1505]: Removed session 3.
Dec 16 13:13:35.319462 systemd[1]: Started sshd@3-10.128.0.75:22-139.178.68.195:39478.service - OpenSSH per-connection server daemon (139.178.68.195:39478).
Dec 16 13:13:35.618798 sshd[1772]: Accepted publickey for core from 139.178.68.195 port 39478 ssh2: RSA SHA256:v7tIGVPgiyL/ANNw+AyFi/zSKm4wKHk/c+elSzrxSj8
Dec 16 13:13:35.620492 sshd-session[1772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:13:35.627978 systemd-logind[1505]: New session 4 of user core.
Dec 16 13:13:35.634202 systemd[1]: Started session-4.scope - Session 4 of User core.
Dec 16 13:13:35.832072 sshd[1775]: Connection closed by 139.178.68.195 port 39478
Dec 16 13:13:35.832942 sshd-session[1772]: pam_unix(sshd:session): session closed for user core
Dec 16 13:13:35.838179 systemd[1]: sshd@3-10.128.0.75:22-139.178.68.195:39478.service: Deactivated successfully.
Dec 16 13:13:35.840635 systemd[1]: session-4.scope: Deactivated successfully.
Dec 16 13:13:35.843760 systemd-logind[1505]: Session 4 logged out. Waiting for processes to exit.
Dec 16 13:13:35.845220 systemd-logind[1505]: Removed session 4.
Dec 16 13:13:35.888279 systemd[1]: Started sshd@4-10.128.0.75:22-139.178.68.195:39494.service - OpenSSH per-connection server daemon (139.178.68.195:39494).
Dec 16 13:13:36.201581 sshd[1781]: Accepted publickey for core from 139.178.68.195 port 39494 ssh2: RSA SHA256:v7tIGVPgiyL/ANNw+AyFi/zSKm4wKHk/c+elSzrxSj8
Dec 16 13:13:36.203344 sshd-session[1781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:13:36.210958 systemd-logind[1505]: New session 5 of user core.
Dec 16 13:13:36.217181 systemd[1]: Started session-5.scope - Session 5 of User core.
Dec 16 13:13:36.394503 sudo[1785]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Dec 16 13:13:36.395019 sudo[1785]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 13:13:36.412490 sudo[1785]: pam_unix(sudo:session): session closed for user root
Dec 16 13:13:36.455204 sshd[1784]: Connection closed by 139.178.68.195 port 39494
Dec 16 13:13:36.456242 sshd-session[1781]: pam_unix(sshd:session): session closed for user core
Dec 16 13:13:36.461790 systemd[1]: sshd@4-10.128.0.75:22-139.178.68.195:39494.service: Deactivated successfully.
Dec 16 13:13:36.464312 systemd[1]: session-5.scope: Deactivated successfully.
Dec 16 13:13:36.466229 systemd-logind[1505]: Session 5 logged out. Waiting for processes to exit.
Dec 16 13:13:36.468339 systemd-logind[1505]: Removed session 5.
Dec 16 13:13:36.522611 systemd[1]: Started sshd@5-10.128.0.75:22-139.178.68.195:39504.service - OpenSSH per-connection server daemon (139.178.68.195:39504).
Dec 16 13:13:36.833075 sshd[1791]: Accepted publickey for core from 139.178.68.195 port 39504 ssh2: RSA SHA256:v7tIGVPgiyL/ANNw+AyFi/zSKm4wKHk/c+elSzrxSj8
Dec 16 13:13:36.834868 sshd-session[1791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:13:36.841979 systemd-logind[1505]: New session 6 of user core.
Dec 16 13:13:36.853223 systemd[1]: Started session-6.scope - Session 6 of User core.
Dec 16 13:13:37.015656 sudo[1796]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Dec 16 13:13:37.016177 sudo[1796]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 13:13:37.023419 sudo[1796]: pam_unix(sudo:session): session closed for user root
Dec 16 13:13:37.037724 sudo[1795]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Dec 16 13:13:37.038247 sudo[1795]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 13:13:37.052012 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 16 13:13:37.101586 augenrules[1818]: No rules
Dec 16 13:13:37.102392 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 16 13:13:37.102721 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 16 13:13:37.104128 sudo[1795]: pam_unix(sudo:session): session closed for user root
Dec 16 13:13:37.148505 sshd[1794]: Connection closed by 139.178.68.195 port 39504
Dec 16 13:13:37.149380 sshd-session[1791]: pam_unix(sshd:session): session closed for user core
Dec 16 13:13:37.154615 systemd[1]: sshd@5-10.128.0.75:22-139.178.68.195:39504.service: Deactivated successfully.
Dec 16 13:13:37.157205 systemd[1]: session-6.scope: Deactivated successfully.
Dec 16 13:13:37.159352 systemd-logind[1505]: Session 6 logged out. Waiting for processes to exit.
Dec 16 13:13:37.161764 systemd-logind[1505]: Removed session 6.
Dec 16 13:13:37.210227 systemd[1]: Started sshd@6-10.128.0.75:22-139.178.68.195:39510.service - OpenSSH per-connection server daemon (139.178.68.195:39510).
Dec 16 13:13:37.527361 sshd[1827]: Accepted publickey for core from 139.178.68.195 port 39510 ssh2: RSA SHA256:v7tIGVPgiyL/ANNw+AyFi/zSKm4wKHk/c+elSzrxSj8
Dec 16 13:13:37.529398 sshd-session[1827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:13:37.536986 systemd-logind[1505]: New session 7 of user core.
Dec 16 13:13:37.546172 systemd[1]: Started session-7.scope - Session 7 of User core.
Dec 16 13:13:37.710246 sudo[1831]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 16 13:13:37.710751 sudo[1831]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 13:13:38.211115 systemd[1]: Starting docker.service - Docker Application Container Engine...
Dec 16 13:13:38.226627 (dockerd)[1849]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Dec 16 13:13:38.584320 dockerd[1849]: time="2025-12-16T13:13:38.584143081Z" level=info msg="Starting up"
Dec 16 13:13:38.586066 dockerd[1849]: time="2025-12-16T13:13:38.586017283Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Dec 16 13:13:38.602162 dockerd[1849]: time="2025-12-16T13:13:38.602094563Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Dec 16 13:13:38.768798 systemd[1]: var-lib-docker-metacopy\x2dcheck3095707444-merged.mount: Deactivated successfully.
Dec 16 13:13:38.798059 dockerd[1849]: time="2025-12-16T13:13:38.797997711Z" level=info msg="Loading containers: start."
Dec 16 13:13:38.818053 kernel: Initializing XFRM netlink socket
Dec 16 13:13:39.163550 systemd-networkd[1412]: docker0: Link UP
Dec 16 13:13:39.170067 dockerd[1849]: time="2025-12-16T13:13:39.170008079Z" level=info msg="Loading containers: done."
Dec 16 13:13:39.191863 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2361148384-merged.mount: Deactivated successfully.
Dec 16 13:13:39.193541 dockerd[1849]: time="2025-12-16T13:13:39.193482518Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Dec 16 13:13:39.193886 dockerd[1849]: time="2025-12-16T13:13:39.193622851Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Dec 16 13:13:39.193886 dockerd[1849]: time="2025-12-16T13:13:39.193746754Z" level=info msg="Initializing buildkit"
Dec 16 13:13:39.226764 dockerd[1849]: time="2025-12-16T13:13:39.226710626Z" level=info msg="Completed buildkit initialization"
Dec 16 13:13:39.236046 dockerd[1849]: time="2025-12-16T13:13:39.235966597Z" level=info msg="Daemon has completed initialization"
Dec 16 13:13:39.236423 systemd[1]: Started docker.service - Docker Application Container Engine.
Dec 16 13:13:39.237978 dockerd[1849]: time="2025-12-16T13:13:39.237273779Z" level=info msg="API listen on /run/docker.sock"
Dec 16 13:13:40.094662 containerd[1515]: time="2025-12-16T13:13:40.094608385Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\""
Dec 16 13:13:40.594097 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3135427089.mount: Deactivated successfully.
Dec 16 13:13:42.085271 containerd[1515]: time="2025-12-16T13:13:42.085195074Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:13:42.087657 containerd[1515]: time="2025-12-16T13:13:42.087596975Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=27075656"
Dec 16 13:13:42.089936 containerd[1515]: time="2025-12-16T13:13:42.088728265Z" level=info msg="ImageCreate event name:\"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:13:42.096158 containerd[1515]: time="2025-12-16T13:13:42.096092376Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:13:42.097703 containerd[1515]: time="2025-12-16T13:13:42.096956097Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"27064672\" in 2.002292817s"
Dec 16 13:13:42.097703 containerd[1515]: time="2025-12-16T13:13:42.097009960Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\""
Dec 16 13:13:42.098685 containerd[1515]: time="2025-12-16T13:13:42.098663065Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\""
Dec 16 13:13:43.402473 containerd[1515]: time="2025-12-16T13:13:43.402375575Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:13:43.404032 containerd[1515]: time="2025-12-16T13:13:43.403964218Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=21164374"
Dec 16 13:13:43.405991 containerd[1515]: time="2025-12-16T13:13:43.405219122Z" level=info msg="ImageCreate event name:\"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:13:43.409462 containerd[1515]: time="2025-12-16T13:13:43.408847562Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:13:43.410316 containerd[1515]: time="2025-12-16T13:13:43.410269364Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"22819474\" in 1.311490966s"
Dec 16 13:13:43.410412 containerd[1515]: time="2025-12-16T13:13:43.410323140Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\""
Dec 16 13:13:43.411118 containerd[1515]: time="2025-12-16T13:13:43.410963372Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\""
Dec 16 13:13:44.384677 containerd[1515]: time="2025-12-16T13:13:44.384605889Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:13:44.386141 containerd[1515]: time="2025-12-16T13:13:44.386084334Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=15727843"
Dec 16 13:13:44.387814 containerd[1515]: time="2025-12-16T13:13:44.387131069Z" level=info msg="ImageCreate event name:\"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:13:44.390547 containerd[1515]: time="2025-12-16T13:13:44.390503338Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:13:44.391929 containerd[1515]: time="2025-12-16T13:13:44.391870334Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"17382979\" in 980.835734ms"
Dec 16 13:13:44.392034 containerd[1515]: time="2025-12-16T13:13:44.391937033Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\""
Dec 16 13:13:44.393364 containerd[1515]: time="2025-12-16T13:13:44.393320583Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\""
Dec 16 13:13:44.955847 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 16 13:13:44.958734 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 13:13:45.301131 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 13:13:45.314981 (kubelet)[2138]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 13:13:45.404928 kubelet[2138]: E1216 13:13:45.404790 2138 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 13:13:45.412167 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 13:13:45.412591 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 13:13:45.413556 systemd[1]: kubelet.service: Consumed 262ms CPU time, 108.2M memory peak.
Dec 16 13:13:45.706188 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount481393200.mount: Deactivated successfully.
Dec 16 13:13:46.210499 containerd[1515]: time="2025-12-16T13:13:46.210429887Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:13:46.212150 containerd[1515]: time="2025-12-16T13:13:46.211795524Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=25967188"
Dec 16 13:13:46.213487 containerd[1515]: time="2025-12-16T13:13:46.213428889Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:13:46.216872 containerd[1515]: time="2025-12-16T13:13:46.216828904Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:13:46.217965 containerd[1515]: time="2025-12-16T13:13:46.217673957Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 1.824315245s"
Dec 16 13:13:46.217965 containerd[1515]: time="2025-12-16T13:13:46.217718067Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\""
Dec 16 13:13:46.218727 containerd[1515]: time="2025-12-16T13:13:46.218669523Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Dec 16 13:13:46.615454 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3657271369.mount: Deactivated successfully.
Dec 16 13:13:47.863470 containerd[1515]: time="2025-12-16T13:13:47.863391704Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:13:47.864970 containerd[1515]: time="2025-12-16T13:13:47.864922933Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22394649"
Dec 16 13:13:47.866795 containerd[1515]: time="2025-12-16T13:13:47.866210797Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:13:47.869810 containerd[1515]: time="2025-12-16T13:13:47.869766573Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:13:47.871196 containerd[1515]: time="2025-12-16T13:13:47.871154516Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.65244789s"
Dec 16 13:13:47.871345 containerd[1515]: time="2025-12-16T13:13:47.871321404Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\""
Dec 16 13:13:47.872243 containerd[1515]: time="2025-12-16T13:13:47.872193866Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Dec 16 13:13:49.530483 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount946235841.mount: Deactivated successfully.
Dec 16 13:13:49.537321 containerd[1515]: time="2025-12-16T13:13:49.537250387Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:13:49.538750 containerd[1515]: time="2025-12-16T13:13:49.538486801Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=322152"
Dec 16 13:13:49.539826 containerd[1515]: time="2025-12-16T13:13:49.539772162Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:13:49.543564 containerd[1515]: time="2025-12-16T13:13:49.543504673Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:13:49.546833 containerd[1515]: time="2025-12-16T13:13:49.546784164Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 1.674539404s"
Dec 16 13:13:49.546962 containerd[1515]: time="2025-12-16T13:13:49.546837684Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Dec 16 13:13:49.548836 containerd[1515]: time="2025-12-16T13:13:49.548804245Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\""
Dec 16 13:13:50.042544 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1701504677.mount: Deactivated successfully.
Dec 16 13:13:52.955934 containerd[1515]: time="2025-12-16T13:13:52.955847292Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:13:52.957501 containerd[1515]: time="2025-12-16T13:13:52.957443920Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=74172452"
Dec 16 13:13:52.959018 containerd[1515]: time="2025-12-16T13:13:52.958971108Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:13:52.963296 containerd[1515]: time="2025-12-16T13:13:52.962633794Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:13:52.964143 containerd[1515]: time="2025-12-16T13:13:52.964099893Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 3.415131347s"
Dec 16 13:13:52.964245 containerd[1515]: time="2025-12-16T13:13:52.964148107Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\""
Dec 16 13:13:55.663274 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 16 13:13:55.668239 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 13:13:56.000659 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 13:13:56.011881 (kubelet)[2288]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 13:13:56.086580 kubelet[2288]: E1216 13:13:56.086484 2288 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 13:13:56.091100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 13:13:56.091330 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 13:13:56.092145 systemd[1]: kubelet.service: Consumed 245ms CPU time, 109.9M memory peak.
Dec 16 13:13:58.343191 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 13:13:58.344060 systemd[1]: kubelet.service: Consumed 245ms CPU time, 109.9M memory peak.
Dec 16 13:13:58.347091 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 13:13:58.391093 systemd[1]: Reload requested from client PID 2302 ('systemctl') (unit session-7.scope)...
Dec 16 13:13:58.391118 systemd[1]: Reloading...
Dec 16 13:13:58.595943 zram_generator::config[2347]: No configuration found.
Dec 16 13:13:58.891436 systemd[1]: Reloading finished in 499 ms.
Dec 16 13:13:58.972961 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Dec 16 13:13:58.973307 systemd[1]: kubelet.service: Failed with result 'signal'.
Dec 16 13:13:58.973796 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 13:13:58.973878 systemd[1]: kubelet.service: Consumed 165ms CPU time, 98.2M memory peak.
Dec 16 13:13:58.976866 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 13:13:59.304362 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 13:13:59.318537 (kubelet)[2398]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 16 13:13:59.379384 kubelet[2398]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Dec 16 13:13:59.379384 kubelet[2398]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 16 13:13:59.379384 kubelet[2398]: I1216 13:13:59.378990 2398 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 16 13:14:00.270938 kubelet[2398]: I1216 13:14:00.270872 2398 server.go:529] "Kubelet version" kubeletVersion="v1.34.1"
Dec 16 13:14:00.270938 kubelet[2398]: I1216 13:14:00.270928 2398 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 16 13:14:00.272158 kubelet[2398]: I1216 13:14:00.272123 2398 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Dec 16 13:14:00.272158 kubelet[2398]: I1216 13:14:00.272163 2398 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Dec 16 13:14:00.272839 kubelet[2398]: I1216 13:14:00.272799 2398 server.go:956] "Client rotation is on, will bootstrap in background"
Dec 16 13:14:00.282414 kubelet[2398]: E1216 13:14:00.282361 2398 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.128.0.75:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.75:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Dec 16 13:14:00.283370 kubelet[2398]: I1216 13:14:00.283325 2398 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 16 13:14:00.288829 kubelet[2398]: I1216 13:14:00.288785 2398 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 16 13:14:00.293034 kubelet[2398]: I1216 13:14:00.292992 2398 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Dec 16 13:14:00.293425 kubelet[2398]: I1216 13:14:00.293368 2398 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 16 13:14:00.293671 kubelet[2398]: I1216 13:14:00.293415 2398 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 16 13:14:00.293671 kubelet[2398]: I1216 13:14:00.293668 2398 topology_manager.go:138] "Creating topology manager with none policy"
Dec 16 13:14:00.293894 kubelet[2398]: I1216 13:14:00.293686 2398 container_manager_linux.go:306] "Creating device plugin manager"
Dec 16 13:14:00.293894 kubelet[2398]: I1216 13:14:00.293840 2398 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Dec 16 13:14:00.296741 kubelet[2398]: I1216 13:14:00.296691 2398 state_mem.go:36] "Initialized new in-memory state store"
Dec 16 13:14:00.298930 kubelet[2398]: I1216 13:14:00.297022 2398 kubelet.go:475] "Attempting to sync node with API server"
Dec 16 13:14:00.298930 kubelet[2398]: I1216 13:14:00.297053 2398 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 16 13:14:00.298930 kubelet[2398]: I1216 13:14:00.297086 2398 kubelet.go:387] "Adding apiserver pod source"
Dec 16 13:14:00.298930 kubelet[2398]: I1216 13:14:00.297118 2398 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 16 13:14:00.298930 kubelet[2398]: E1216 13:14:00.297637 2398 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.128.0.75:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal&limit=500&resourceVersion=0\": dial tcp 10.128.0.75:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Dec 16 13:14:00.303284 kubelet[2398]: E1216 13:14:00.302379 2398 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.128.0.75:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.75:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Dec 16 13:14:00.303440 kubelet[2398]: I1216 13:14:00.303415 2398 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Dec 16 13:14:00.304323 kubelet[2398]: I1216 13:14:00.304294 2398 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Dec 16 13:14:00.304414 kubelet[2398]: I1216 13:14:00.304351 2398 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Dec 16 13:14:00.304474 kubelet[2398]: W1216 13:14:00.304417 2398 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 16 13:14:00.321733 kubelet[2398]: I1216 13:14:00.321672 2398 server.go:1262] "Started kubelet"
Dec 16 13:14:00.329367 kubelet[2398]: I1216 13:14:00.329310 2398 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 16 13:14:00.336968 kubelet[2398]: I1216 13:14:00.334895 2398 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Dec 16 13:14:00.339419 kubelet[2398]: I1216 13:14:00.339341 2398 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 16 13:14:00.339616 kubelet[2398]: I1216 13:14:00.339590 2398 server_v1.go:49] "podresources" method="list" useActivePods=true
Dec 16 13:14:00.342541 kubelet[2398]: I1216 13:14:00.342507 2398 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 16 13:14:00.342698 kubelet[2398]: I1216 13:14:00.342043 2398 server.go:310] "Adding debug handlers to kubelet server"
Dec 16 13:14:00.344368 kubelet[2398]: I1216 13:14:00.344337 2398 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Dec 16 13:14:00.349957 kubelet[2398]: I1216 13:14:00.348008 2398 volume_manager.go:313] "Starting Kubelet Volume Manager"
Dec 16 13:14:00.356206 kubelet[2398]: I1216 13:14:00.348025 2398 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Dec 16 13:14:00.356374 kubelet[2398]: E1216 13:14:00.348069 2398 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal\" not found"
Dec 16 13:14:00.356469 kubelet[2398]: E1216 13:14:00.356384 2398 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.128.0.75:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.75:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Dec 16 13:14:00.356551 kubelet[2398]: E1216 13:14:00.356494 2398 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.75:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.75:6443: connect: connection refused" interval="200ms"
Dec 16 13:14:00.356743 kubelet[2398]: I1216 13:14:00.356726 2398 reconciler.go:29] "Reconciler: start to sync state"
Dec 16 13:14:00.357795 kubelet[2398]: I1216 13:14:00.357767 2398 factory.go:223] Registration of the systemd container factory successfully
Dec 16 13:14:00.358058 kubelet[2398]: I1216 13:14:00.358028 2398 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 16 13:14:00.359406 kubelet[2398]: E1216 13:14:00.356606 2398 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.75:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.75:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal.1881b45cf97aa756 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal,UID:ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal,},FirstTimestamp:2025-12-16 13:14:00.321501014 +0000 UTC m=+0.996806935,LastTimestamp:2025-12-16 13:14:00.321501014 +0000 UTC m=+0.996806935,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal,}"
Dec 16 13:14:00.360626 kubelet[2398]: I1216 13:14:00.360603 2398 factory.go:223] Registration of the containerd container factory successfully
Dec 16 13:14:00.384069 kubelet[2398]: I1216 13:14:00.384008 2398 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Dec 16 13:14:00.387013 kubelet[2398]: I1216 13:14:00.386853 2398 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Dec 16 13:14:00.387013 kubelet[2398]: I1216 13:14:00.386888 2398 status_manager.go:244] "Starting to sync pod status with apiserver"
Dec 16 13:14:00.387278 kubelet[2398]: I1216 13:14:00.387254 2398 kubelet.go:2427] "Starting kubelet main sync loop"
Dec 16 13:14:00.387475 kubelet[2398]: E1216 13:14:00.387435 2398 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 16 13:14:00.400942 kubelet[2398]: E1216 13:14:00.400161 2398 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.128.0.75:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.75:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Dec 16 13:14:00.415255 kubelet[2398]: I1216 13:14:00.415223 2398 cpu_manager.go:221] "Starting CPU manager" policy="none"
Dec 16 13:14:00.415255 kubelet[2398]: I1216 13:14:00.415248 2398 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Dec 16 13:14:00.415460 kubelet[2398]: I1216 13:14:00.415272 2398 state_mem.go:36] "Initialized new in-memory state store"
Dec 16 13:14:00.417646 kubelet[2398]: I1216 13:14:00.417603 2398 policy_none.go:49] "None policy: Start"
Dec 16 13:14:00.417646 kubelet[2398]: I1216 13:14:00.417630 2398 memory_manager.go:187] "Starting memorymanager" policy="None"
Dec 16 13:14:00.417646 kubelet[2398]: I1216 13:14:00.417649 2398 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Dec 16 13:14:00.419662 kubelet[2398]: I1216 13:14:00.419640 2398 policy_none.go:47] "Start"
Dec 16 13:14:00.426101 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Dec 16 13:14:00.445590 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Dec 16 13:14:00.451867 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Dec 16 13:14:00.457040 kubelet[2398]: E1216 13:14:00.456995 2398 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal\" not found"
Dec 16 13:14:00.459814 kubelet[2398]: E1216 13:14:00.459787 2398 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Dec 16 13:14:00.461226 kubelet[2398]: I1216 13:14:00.461195 2398 eviction_manager.go:189] "Eviction manager: starting control loop"
Dec 16 13:14:00.462371 kubelet[2398]: I1216 13:14:00.461299 2398 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 16 13:14:00.463894 kubelet[2398]: I1216 13:14:00.463604 2398 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 16 13:14:00.464240 kubelet[2398]: E1216 13:14:00.464217 2398 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Dec 16 13:14:00.464417 kubelet[2398]: E1216 13:14:00.464396 2398 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal\" not found"
Dec 16 13:14:00.509273 systemd[1]: Created slice kubepods-burstable-pod339485e226be1c63c2fbab6f2ad2b860.slice - libcontainer container kubepods-burstable-pod339485e226be1c63c2fbab6f2ad2b860.slice.
Dec 16 13:14:00.518347 kubelet[2398]: E1216 13:14:00.518227 2398 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal\" not found" node="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal"
Dec 16 13:14:00.525504 systemd[1]: Created slice kubepods-burstable-podd1570d999f35ed938ad98f7289a57d65.slice - libcontainer container kubepods-burstable-podd1570d999f35ed938ad98f7289a57d65.slice.
Dec 16 13:14:00.531118 kubelet[2398]: E1216 13:14:00.531064 2398 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal\" not found" node="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal"
Dec 16 13:14:00.536140 systemd[1]: Created slice kubepods-burstable-pod9b7afe73b234c57f68e665106a0e905b.slice - libcontainer container kubepods-burstable-pod9b7afe73b234c57f68e665106a0e905b.slice.
Dec 16 13:14:00.538842 kubelet[2398]: E1216 13:14:00.538808 2398 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal\" not found" node="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal"
Dec 16 13:14:00.557733 kubelet[2398]: E1216 13:14:00.557673 2398 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.75:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.75:6443: connect: connection refused" interval="400ms"
Dec 16 13:14:00.569154 kubelet[2398]: I1216 13:14:00.569038 2398 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal"
Dec 16 13:14:00.569701 kubelet[2398]: E1216 13:14:00.569647 2398 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.75:6443/api/v1/nodes\": dial tcp 10.128.0.75:6443: connect: connection refused" node="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal"
Dec 16 13:14:00.657318 kubelet[2398]: I1216 13:14:00.657233 2398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1570d999f35ed938ad98f7289a57d65-ca-certs\") pod \"kube-controller-manager-ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal\" (UID: \"d1570d999f35ed938ad98f7289a57d65\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal"
Dec 16 13:14:00.657318 kubelet[2398]: I1216 13:14:00.657301 2398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1570d999f35ed938ad98f7289a57d65-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal\" (UID: \"d1570d999f35ed938ad98f7289a57d65\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal"
Dec 16 13:14:00.657566 kubelet[2398]: I1216 13:14:00.657350 2398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1570d999f35ed938ad98f7289a57d65-k8s-certs\") pod \"kube-controller-manager-ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal\" (UID: \"d1570d999f35ed938ad98f7289a57d65\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal"
Dec 16 13:14:00.657566 kubelet[2398]: I1216 13:14:00.657406 2398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1570d999f35ed938ad98f7289a57d65-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal\" (UID: \"d1570d999f35ed938ad98f7289a57d65\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal"
Dec 16 13:14:00.657566 kubelet[2398]: I1216 13:14:00.657446 2398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/339485e226be1c63c2fbab6f2ad2b860-k8s-certs\") pod \"kube-apiserver-ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal\" (UID: \"339485e226be1c63c2fbab6f2ad2b860\") " pod="kube-system/kube-apiserver-ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal"
Dec 16 13:14:00.657566 kubelet[2398]: I1216 13:14:00.657483 2398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/339485e226be1c63c2fbab6f2ad2b860-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal\" (UID: \"339485e226be1c63c2fbab6f2ad2b860\") " pod="kube-system/kube-apiserver-ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal"
Dec 16 13:14:00.657767 kubelet[2398]: I1216 13:14:00.657511 2398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1570d999f35ed938ad98f7289a57d65-kubeconfig\") pod \"kube-controller-manager-ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal\" (UID: \"d1570d999f35ed938ad98f7289a57d65\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal"
Dec 16 13:14:00.657767 kubelet[2398]: I1216 13:14:00.657538 2398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b7afe73b234c57f68e665106a0e905b-kubeconfig\") pod \"kube-scheduler-ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal\" (UID: \"9b7afe73b234c57f68e665106a0e905b\") " pod="kube-system/kube-scheduler-ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal"
Dec 16 13:14:00.657767 kubelet[2398]: I1216 13:14:00.657574 2398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/339485e226be1c63c2fbab6f2ad2b860-ca-certs\") pod \"kube-apiserver-ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal\" (UID: \"339485e226be1c63c2fbab6f2ad2b860\") " pod="kube-system/kube-apiserver-ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal"
Dec 16 13:14:00.776122 kubelet[2398]: I1216 13:14:00.775970 2398 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal"
Dec 16 13:14:00.776509 kubelet[2398]: E1216 13:14:00.776464 2398 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.75:6443/api/v1/nodes\": dial tcp 10.128.0.75:6443: connect: connection refused" node="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal"
Dec 16 13:14:00.823260 containerd[1515]: time="2025-12-16T13:14:00.823192492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal,Uid:339485e226be1c63c2fbab6f2ad2b860,Namespace:kube-system,Attempt:0,}"
Dec 16 13:14:00.834934 containerd[1515]: time="2025-12-16T13:14:00.834772155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal,Uid:d1570d999f35ed938ad98f7289a57d65,Namespace:kube-system,Attempt:0,}"
Dec 16 13:14:00.842152 containerd[1515]: time="2025-12-16T13:14:00.842104340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal,Uid:9b7afe73b234c57f68e665106a0e905b,Namespace:kube-system,Attempt:0,}"
Dec 16 13:14:00.958628 kubelet[2398]: E1216 13:14:00.958567 2398 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.75:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.75:6443: connect: connection refused" interval="800ms"
Dec 16 13:14:01.159518 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2970892248.mount: Deactivated successfully.
Dec 16 13:14:01.168615 containerd[1515]: time="2025-12-16T13:14:01.168555841Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 16 13:14:01.172515 containerd[1515]: time="2025-12-16T13:14:01.172442657Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=322072"
Dec 16 13:14:01.173887 containerd[1515]: time="2025-12-16T13:14:01.173825735Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 16 13:14:01.175486 containerd[1515]: time="2025-12-16T13:14:01.175407441Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 16 13:14:01.177595 containerd[1515]: time="2025-12-16T13:14:01.177540710Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 16 13:14:01.179188 containerd[1515]: time="2025-12-16T13:14:01.179132499Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Dec 16 13:14:01.180265 containerd[1515]: time="2025-12-16T13:14:01.180210262Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Dec 16 13:14:01.182943 containerd[1515]: time="2025-12-16T13:14:01.181382151Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 16 13:14:01.182943 containerd[1515]: time="2025-12-16T13:14:01.182535962Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 357.031669ms"
Dec 16 13:14:01.186359 kubelet[2398]: I1216 13:14:01.186322 2398 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal"
Dec 16 13:14:01.186829 containerd[1515]: time="2025-12-16T13:14:01.186785235Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 343.134544ms"
Dec 16 13:14:01.187418 kubelet[2398]: E1216 13:14:01.187351 2398 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.75:6443/api/v1/nodes\": dial tcp 10.128.0.75:6443: connect: connection refused" node="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal"
Dec 16 13:14:01.193089 containerd[1515]: time="2025-12-16T13:14:01.192979954Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 355.812115ms"
Dec 16 13:14:01.209514 kubelet[2398]: E1216 13:14:01.209356 2398 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.75:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.75:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal.1881b45cf97aa756 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal,UID:ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal,},FirstTimestamp:2025-12-16 13:14:00.321501014 +0000 UTC m=+0.996806935,LastTimestamp:2025-12-16 13:14:00.321501014 +0000 UTC m=+0.996806935,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal,}"
Dec 16 13:14:01.213950 containerd[1515]: time="2025-12-16T13:14:01.213170223Z" level=info msg="connecting to shim b5841a5e4dc078511b5ba7dfa93a1ec5489ac03f4eec27d008b773eeb2b113aa" address="unix:///run/containerd/s/15668a91dd5f646964c0c46de2287592690c5bdcafcc2fde7afe507d97cf0f8a" namespace=k8s.io protocol=ttrpc version=3
Dec 16 13:14:01.247664 containerd[1515]: time="2025-12-16T13:14:01.247597059Z" level=info msg="connecting to shim 3b11ce46d3dcff1924b591e22e79aa7d662f34ffe6058c0a887f79a04eab7d4d" address="unix:///run/containerd/s/cf3f26b8b409bf72f2e0c61174b143452cbbca45ea674e5376de94ebf45c41f8" namespace=k8s.io protocol=ttrpc version=3
Dec 16 13:14:01.263060 kubelet[2398]: E1216 13:14:01.263014 2398 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.128.0.75:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.75:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Dec 16 13:14:01.284830 containerd[1515]: time="2025-12-16T13:14:01.284756609Z" level=info msg="connecting to shim 11ac1f8994bbff4132ba7ceb4135e44dcd4f19f7ddb9c2fbf44cac7fef06ca81" address="unix:///run/containerd/s/3ac1bcf7a52d6005346d98f47ccc372ec2daebfa394527e4441a62532a476cfc" namespace=k8s.io protocol=ttrpc version=3
Dec 16 13:14:01.290643 systemd[1]: Started cri-containerd-b5841a5e4dc078511b5ba7dfa93a1ec5489ac03f4eec27d008b773eeb2b113aa.scope - libcontainer container b5841a5e4dc078511b5ba7dfa93a1ec5489ac03f4eec27d008b773eeb2b113aa.
Dec 16 13:14:01.326156 systemd[1]: Started cri-containerd-3b11ce46d3dcff1924b591e22e79aa7d662f34ffe6058c0a887f79a04eab7d4d.scope - libcontainer container 3b11ce46d3dcff1924b591e22e79aa7d662f34ffe6058c0a887f79a04eab7d4d.
Dec 16 13:14:01.331486 kubelet[2398]: E1216 13:14:01.331033 2398 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.128.0.75:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal&limit=500&resourceVersion=0\": dial tcp 10.128.0.75:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Dec 16 13:14:01.351461 systemd[1]: Started cri-containerd-11ac1f8994bbff4132ba7ceb4135e44dcd4f19f7ddb9c2fbf44cac7fef06ca81.scope - libcontainer container 11ac1f8994bbff4132ba7ceb4135e44dcd4f19f7ddb9c2fbf44cac7fef06ca81.
Dec 16 13:14:01.434757 containerd[1515]: time="2025-12-16T13:14:01.434611648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal,Uid:339485e226be1c63c2fbab6f2ad2b860,Namespace:kube-system,Attempt:0,} returns sandbox id \"b5841a5e4dc078511b5ba7dfa93a1ec5489ac03f4eec27d008b773eeb2b113aa\""
Dec 16 13:14:01.440334 kubelet[2398]: E1216 13:14:01.440154 2398 kubelet_pods.go:556] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-21291"
Dec 16 13:14:01.448399 containerd[1515]: time="2025-12-16T13:14:01.448302966Z" level=info msg="CreateContainer within sandbox \"b5841a5e4dc078511b5ba7dfa93a1ec5489ac03f4eec27d008b773eeb2b113aa\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Dec 16 13:14:01.474839 containerd[1515]: time="2025-12-16T13:14:01.474184069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal,Uid:d1570d999f35ed938ad98f7289a57d65,Namespace:kube-system,Attempt:0,} returns sandbox id \"11ac1f8994bbff4132ba7ceb4135e44dcd4f19f7ddb9c2fbf44cac7fef06ca81\""
Dec 16 13:14:01.481091 containerd[1515]: time="2025-12-16T13:14:01.479894840Z" level=info msg="Container 536549f2a046d432768e6a3c048fc6c75fb9a8f243b46730cf431b33068137f5: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:14:01.481893 kubelet[2398]: E1216 13:14:01.481471 2398 kubelet_pods.go:556] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flat"
Dec 16 13:14:01.494671 containerd[1515]: time="2025-12-16T13:14:01.494614003Z" level=info msg="CreateContainer within sandbox \"11ac1f8994bbff4132ba7ceb4135e44dcd4f19f7ddb9c2fbf44cac7fef06ca81\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Dec 16 13:14:01.502702 containerd[1515]: time="2025-12-16T13:14:01.502644259Z" level=info msg="CreateContainer within sandbox \"b5841a5e4dc078511b5ba7dfa93a1ec5489ac03f4eec27d008b773eeb2b113aa\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"536549f2a046d432768e6a3c048fc6c75fb9a8f243b46730cf431b33068137f5\""
Dec 16 13:14:01.513950 containerd[1515]: time="2025-12-16T13:14:01.513253693Z" level=info msg="Container 2120574b5092db7269ee45736d1ac5e496ffa36243b1e813a9a776ae37f423ce: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:14:01.518038 containerd[1515]: time="2025-12-16T13:14:01.517997120Z" level=info msg="StartContainer for \"536549f2a046d432768e6a3c048fc6c75fb9a8f243b46730cf431b33068137f5\""
Dec 16 13:14:01.520496 containerd[1515]: time="2025-12-16T13:14:01.520443397Z" level=info msg="connecting to shim 536549f2a046d432768e6a3c048fc6c75fb9a8f243b46730cf431b33068137f5" address="unix:///run/containerd/s/15668a91dd5f646964c0c46de2287592690c5bdcafcc2fde7afe507d97cf0f8a" protocol=ttrpc version=3
Dec 16 13:14:01.523934 containerd[1515]: time="2025-12-16T13:14:01.523194284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal,Uid:9b7afe73b234c57f68e665106a0e905b,Namespace:kube-system,Attempt:0,} returns sandbox id \"3b11ce46d3dcff1924b591e22e79aa7d662f34ffe6058c0a887f79a04eab7d4d\""
Dec 16 13:14:01.526290 kubelet[2398]: E1216 13:14:01.526251 2398 kubelet_pods.go:556] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-21291"
Dec 16 13:14:01.527741 kubelet[2398]: E1216 13:14:01.527694 2398 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.128.0.75:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.75:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Dec 16 13:14:01.531337 containerd[1515]: time="2025-12-16T13:14:01.531297680Z" level=info msg="CreateContainer within sandbox \"3b11ce46d3dcff1924b591e22e79aa7d662f34ffe6058c0a887f79a04eab7d4d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Dec 16 13:14:01.532217 containerd[1515]: time="2025-12-16T13:14:01.532183151Z" level=info msg="CreateContainer within sandbox \"11ac1f8994bbff4132ba7ceb4135e44dcd4f19f7ddb9c2fbf44cac7fef06ca81\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2120574b5092db7269ee45736d1ac5e496ffa36243b1e813a9a776ae37f423ce\""
Dec 16 13:14:01.534051 containerd[1515]: time="2025-12-16T13:14:01.534021212Z" level=info msg="StartContainer for \"2120574b5092db7269ee45736d1ac5e496ffa36243b1e813a9a776ae37f423ce\""
Dec 16 13:14:01.536185 containerd[1515]: time="2025-12-16T13:14:01.536155537Z" level=info msg="connecting to shim 2120574b5092db7269ee45736d1ac5e496ffa36243b1e813a9a776ae37f423ce" address="unix:///run/containerd/s/3ac1bcf7a52d6005346d98f47ccc372ec2daebfa394527e4441a62532a476cfc" protocol=ttrpc version=3
Dec 16 13:14:01.548141 containerd[1515]: time="2025-12-16T13:14:01.547990867Z" level=info msg="Container 8be890518151f07382d09f15d80dd971dc4a4386d728b832038d649606b3870a: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:14:01.564117 containerd[1515]: time="2025-12-16T13:14:01.564068515Z" level=info msg="CreateContainer within sandbox \"3b11ce46d3dcff1924b591e22e79aa7d662f34ffe6058c0a887f79a04eab7d4d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8be890518151f07382d09f15d80dd971dc4a4386d728b832038d649606b3870a\""
Dec 16 13:14:01.565429 systemd[1]: Started cri-containerd-536549f2a046d432768e6a3c048fc6c75fb9a8f243b46730cf431b33068137f5.scope - libcontainer container 536549f2a046d432768e6a3c048fc6c75fb9a8f243b46730cf431b33068137f5.
Dec 16 13:14:01.572387 containerd[1515]: time="2025-12-16T13:14:01.572176265Z" level=info msg="StartContainer for \"8be890518151f07382d09f15d80dd971dc4a4386d728b832038d649606b3870a\""
Dec 16 13:14:01.576824 kubelet[2398]: E1216 13:14:01.576765 2398 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.128.0.75:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.75:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Dec 16 13:14:01.584692 containerd[1515]: time="2025-12-16T13:14:01.584083634Z" level=info msg="connecting to shim 8be890518151f07382d09f15d80dd971dc4a4386d728b832038d649606b3870a" address="unix:///run/containerd/s/cf3f26b8b409bf72f2e0c61174b143452cbbca45ea674e5376de94ebf45c41f8" protocol=ttrpc version=3
Dec 16 13:14:01.589304 systemd[1]: Started cri-containerd-2120574b5092db7269ee45736d1ac5e496ffa36243b1e813a9a776ae37f423ce.scope - libcontainer container 2120574b5092db7269ee45736d1ac5e496ffa36243b1e813a9a776ae37f423ce.
Dec 16 13:14:01.651138 systemd[1]: Started cri-containerd-8be890518151f07382d09f15d80dd971dc4a4386d728b832038d649606b3870a.scope - libcontainer container 8be890518151f07382d09f15d80dd971dc4a4386d728b832038d649606b3870a.
Dec 16 13:14:01.779464 kubelet[2398]: E1216 13:14:01.760225 2398 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.75:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.75:6443: connect: connection refused" interval="1.6s"
Dec 16 13:14:01.819636 containerd[1515]: time="2025-12-16T13:14:01.819506060Z" level=info msg="StartContainer for \"2120574b5092db7269ee45736d1ac5e496ffa36243b1e813a9a776ae37f423ce\" returns successfully"
Dec 16 13:14:01.822934 containerd[1515]: time="2025-12-16T13:14:01.822874158Z" level=info msg="StartContainer for \"536549f2a046d432768e6a3c048fc6c75fb9a8f243b46730cf431b33068137f5\" returns successfully"
Dec 16 13:14:01.854238 containerd[1515]: time="2025-12-16T13:14:01.854162527Z" level=info msg="StartContainer for \"8be890518151f07382d09f15d80dd971dc4a4386d728b832038d649606b3870a\" returns successfully"
Dec 16 13:14:01.994794 kubelet[2398]: I1216 13:14:01.994406 2398 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal"
Dec 16 13:14:02.430301 kubelet[2398]: E1216 13:14:02.429638 2398 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal\" not found" node="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal"
Dec 16 13:14:02.434933 kubelet[2398]: E1216 13:14:02.433713 2398 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal\" not found" node="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal"
Dec 16 13:14:02.441618 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 16 13:14:02.447449 kubelet[2398]: E1216 13:14:02.447414 2398 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal\" not found" node="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal"
Dec 16 13:14:03.449374 kubelet[2398]: E1216 13:14:03.449281 2398 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal\" not found" node="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal"
Dec 16 13:14:03.450611 kubelet[2398]: E1216 13:14:03.450363 2398 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal\" not found" node="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal"
Dec 16 13:14:05.057761 kubelet[2398]: E1216 13:14:05.057699 2398 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal\" not found" node="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal"
Dec 16 13:14:05.160977 kubelet[2398]: E1216 13:14:05.159792 2398 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal\" not found" node="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal"
Dec 16 13:14:05.166417 kubelet[2398]: I1216 13:14:05.166369 2398 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal"
Dec 16 13:14:05.249391 kubelet[2398]: I1216 13:14:05.249335 2398 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal"
Dec 16 13:14:05.286392 kubelet[2398]: E1216 13:14:05.286321 2398 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal"
Dec 16 13:14:05.286392 kubelet[2398]: I1216 13:14:05.286398 2398 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal"
Dec 16 13:14:05.303132 kubelet[2398]: E1216 13:14:05.302399 2398 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal"
Dec 16 13:14:05.303132 kubelet[2398]: I1216 13:14:05.302947 2398 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal"
Dec 16 13:14:05.305731 kubelet[2398]: I1216 13:14:05.305500 2398 apiserver.go:52] "Watching apiserver"
Dec 16 13:14:05.309797 kubelet[2398]: E1216 13:14:05.309680 2398 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal"
Dec 16 13:14:05.357051 kubelet[2398]: I1216 13:14:05.356981 2398 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Dec 16 13:14:07.347549 systemd[1]: Reload requested from client PID 2680 ('systemctl') (unit session-7.scope)...
Dec 16 13:14:07.347572 systemd[1]: Reloading...
Dec 16 13:14:07.526947 zram_generator::config[2724]: No configuration found. Dec 16 13:14:07.931435 systemd[1]: Reloading finished in 583 ms. Dec 16 13:14:07.961573 kubelet[2398]: I1216 13:14:07.958719 2398 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:07.979483 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:14:07.990450 systemd[1]: kubelet.service: Deactivated successfully. Dec 16 13:14:07.990880 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:14:07.991058 systemd[1]: kubelet.service: Consumed 1.571s CPU time, 124.9M memory peak. Dec 16 13:14:07.994687 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:14:08.600426 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:14:08.618069 (kubelet)[2772]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 13:14:08.695519 kubelet[2772]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 13:14:08.697823 kubelet[2772]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 16 13:14:08.697823 kubelet[2772]: I1216 13:14:08.696399 2772 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 13:14:08.710933 kubelet[2772]: I1216 13:14:08.710780 2772 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Dec 16 13:14:08.710933 kubelet[2772]: I1216 13:14:08.710815 2772 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 13:14:08.710933 kubelet[2772]: I1216 13:14:08.710855 2772 watchdog_linux.go:95] "Systemd watchdog is not enabled" Dec 16 13:14:08.710933 kubelet[2772]: I1216 13:14:08.710866 2772 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 16 13:14:08.711932 kubelet[2772]: I1216 13:14:08.711233 2772 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 13:14:08.713678 kubelet[2772]: I1216 13:14:08.713423 2772 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 16 13:14:08.718664 kubelet[2772]: I1216 13:14:08.718619 2772 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 13:14:08.725688 kubelet[2772]: I1216 13:14:08.725653 2772 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 13:14:08.734454 kubelet[2772]: I1216 13:14:08.734414 2772 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Dec 16 13:14:08.734872 kubelet[2772]: I1216 13:14:08.734806 2772 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 13:14:08.736499 kubelet[2772]: I1216 13:14:08.734852 2772 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 13:14:08.736499 kubelet[2772]: I1216 13:14:08.736426 2772 topology_manager.go:138] "Creating topology 
manager with none policy" Dec 16 13:14:08.736499 kubelet[2772]: I1216 13:14:08.736445 2772 container_manager_linux.go:306] "Creating device plugin manager" Dec 16 13:14:08.736499 kubelet[2772]: I1216 13:14:08.736486 2772 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Dec 16 13:14:08.742767 kubelet[2772]: I1216 13:14:08.740537 2772 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:14:08.742767 kubelet[2772]: I1216 13:14:08.740802 2772 kubelet.go:475] "Attempting to sync node with API server" Dec 16 13:14:08.742767 kubelet[2772]: I1216 13:14:08.740830 2772 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 13:14:08.742767 kubelet[2772]: I1216 13:14:08.740866 2772 kubelet.go:387] "Adding apiserver pod source" Dec 16 13:14:08.742767 kubelet[2772]: I1216 13:14:08.740896 2772 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 13:14:08.747131 kubelet[2772]: I1216 13:14:08.747105 2772 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 16 13:14:08.751054 kubelet[2772]: I1216 13:14:08.749452 2772 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 13:14:08.751333 kubelet[2772]: I1216 13:14:08.751314 2772 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Dec 16 13:14:08.794788 kubelet[2772]: I1216 13:14:08.794428 2772 server.go:1262] "Started kubelet" Dec 16 13:14:08.795003 kubelet[2772]: I1216 13:14:08.794954 2772 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 13:14:08.796387 kubelet[2772]: I1216 13:14:08.795540 2772 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 13:14:08.796387 
kubelet[2772]: I1216 13:14:08.795632 2772 server_v1.go:49] "podresources" method="list" useActivePods=true Dec 16 13:14:08.796387 kubelet[2772]: I1216 13:14:08.796146 2772 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 13:14:08.805179 kubelet[2772]: I1216 13:14:08.804295 2772 server.go:310] "Adding debug handlers to kubelet server" Dec 16 13:14:08.806517 kubelet[2772]: I1216 13:14:08.805381 2772 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 13:14:08.806859 kubelet[2772]: I1216 13:14:08.806827 2772 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 13:14:08.810967 kubelet[2772]: I1216 13:14:08.810690 2772 volume_manager.go:313] "Starting Kubelet Volume Manager" Dec 16 13:14:08.811087 kubelet[2772]: I1216 13:14:08.811021 2772 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 16 13:14:08.811379 kubelet[2772]: I1216 13:14:08.811323 2772 reconciler.go:29] "Reconciler: start to sync state" Dec 16 13:14:08.825756 kubelet[2772]: I1216 13:14:08.825267 2772 factory.go:223] Registration of the systemd container factory successfully Dec 16 13:14:08.825756 kubelet[2772]: I1216 13:14:08.825419 2772 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 13:14:08.829512 kubelet[2772]: I1216 13:14:08.829399 2772 factory.go:223] Registration of the containerd container factory successfully Dec 16 13:14:08.843148 kubelet[2772]: E1216 13:14:08.841485 2772 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 13:14:08.886384 kubelet[2772]: I1216 13:14:08.885852 2772 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Dec 16 13:14:08.888413 kubelet[2772]: I1216 13:14:08.887887 2772 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Dec 16 13:14:08.888413 kubelet[2772]: I1216 13:14:08.887940 2772 status_manager.go:244] "Starting to sync pod status with apiserver" Dec 16 13:14:08.888413 kubelet[2772]: I1216 13:14:08.887976 2772 kubelet.go:2427] "Starting kubelet main sync loop" Dec 16 13:14:08.888413 kubelet[2772]: E1216 13:14:08.888081 2772 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 13:14:08.960416 kubelet[2772]: I1216 13:14:08.960366 2772 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 13:14:08.960416 kubelet[2772]: I1216 13:14:08.960388 2772 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 13:14:08.960416 kubelet[2772]: I1216 13:14:08.960416 2772 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:14:08.960684 kubelet[2772]: I1216 13:14:08.960610 2772 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 16 13:14:08.960684 kubelet[2772]: I1216 13:14:08.960625 2772 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 16 13:14:08.960684 kubelet[2772]: I1216 13:14:08.960652 2772 policy_none.go:49] "None policy: Start" Dec 16 13:14:08.960684 kubelet[2772]: I1216 13:14:08.960668 2772 memory_manager.go:187] "Starting memorymanager" policy="None" Dec 16 13:14:08.960684 kubelet[2772]: I1216 13:14:08.960685 2772 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Dec 16 13:14:08.962501 kubelet[2772]: I1216 13:14:08.960842 2772 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state 
checkpoint" Dec 16 13:14:08.962501 kubelet[2772]: I1216 13:14:08.960855 2772 policy_none.go:47] "Start" Dec 16 13:14:08.974110 kubelet[2772]: E1216 13:14:08.974069 2772 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 13:14:08.975857 kubelet[2772]: I1216 13:14:08.975295 2772 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 13:14:08.975857 kubelet[2772]: I1216 13:14:08.975319 2772 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 13:14:08.975857 kubelet[2772]: I1216 13:14:08.975741 2772 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 13:14:08.981884 kubelet[2772]: E1216 13:14:08.981835 2772 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 16 13:14:08.991068 kubelet[2772]: I1216 13:14:08.990826 2772 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:08.994505 kubelet[2772]: I1216 13:14:08.994474 2772 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:08.996113 kubelet[2772]: I1216 13:14:08.994861 2772 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:09.010860 kubelet[2772]: I1216 13:14:09.010815 2772 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]" Dec 16 13:14:09.014164 kubelet[2772]: I1216 13:14:09.014122 2772 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1570d999f35ed938ad98f7289a57d65-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal\" (UID: \"d1570d999f35ed938ad98f7289a57d65\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:09.014420 kubelet[2772]: I1216 13:14:09.014174 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b7afe73b234c57f68e665106a0e905b-kubeconfig\") pod \"kube-scheduler-ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal\" (UID: \"9b7afe73b234c57f68e665106a0e905b\") " pod="kube-system/kube-scheduler-ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:09.014420 kubelet[2772]: I1216 13:14:09.014207 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/339485e226be1c63c2fbab6f2ad2b860-k8s-certs\") pod \"kube-apiserver-ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal\" (UID: \"339485e226be1c63c2fbab6f2ad2b860\") " pod="kube-system/kube-apiserver-ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:09.014420 kubelet[2772]: I1216 13:14:09.014235 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1570d999f35ed938ad98f7289a57d65-ca-certs\") pod \"kube-controller-manager-ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal\" (UID: \"d1570d999f35ed938ad98f7289a57d65\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:09.014420 kubelet[2772]: I1216 13:14:09.014262 2772 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/339485e226be1c63c2fbab6f2ad2b860-ca-certs\") pod \"kube-apiserver-ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal\" (UID: \"339485e226be1c63c2fbab6f2ad2b860\") " pod="kube-system/kube-apiserver-ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:09.014799 kubelet[2772]: I1216 13:14:09.014286 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/339485e226be1c63c2fbab6f2ad2b860-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal\" (UID: \"339485e226be1c63c2fbab6f2ad2b860\") " pod="kube-system/kube-apiserver-ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:09.014799 kubelet[2772]: I1216 13:14:09.014314 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1570d999f35ed938ad98f7289a57d65-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal\" (UID: \"d1570d999f35ed938ad98f7289a57d65\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:09.014799 kubelet[2772]: I1216 13:14:09.014343 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1570d999f35ed938ad98f7289a57d65-k8s-certs\") pod \"kube-controller-manager-ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal\" (UID: \"d1570d999f35ed938ad98f7289a57d65\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:09.014799 kubelet[2772]: I1216 13:14:09.014390 2772 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1570d999f35ed938ad98f7289a57d65-kubeconfig\") pod \"kube-controller-manager-ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal\" (UID: \"d1570d999f35ed938ad98f7289a57d65\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:09.020189 kubelet[2772]: I1216 13:14:09.020143 2772 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]" Dec 16 13:14:09.020316 kubelet[2772]: I1216 13:14:09.020281 2772 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]" Dec 16 13:14:09.021928 kubelet[2772]: E1216 13:14:09.020472 2772 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-controller-manager-ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:09.090966 kubelet[2772]: I1216 13:14:09.089570 2772 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:09.100970 kubelet[2772]: I1216 13:14:09.100063 2772 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:09.100970 kubelet[2772]: I1216 13:14:09.100183 2772 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:09.759066 kubelet[2772]: I1216 13:14:09.759016 2772 apiserver.go:52] "Watching apiserver" Dec 16 13:14:09.812180 kubelet[2772]: I1216 
13:14:09.812094 2772 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 16 13:14:09.949287 kubelet[2772]: I1216 13:14:09.947307 2772 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:09.950516 kubelet[2772]: I1216 13:14:09.950480 2772 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:09.962046 kubelet[2772]: I1216 13:14:09.962012 2772 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]" Dec 16 13:14:09.962537 kubelet[2772]: I1216 13:14:09.962208 2772 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]" Dec 16 13:14:09.962942 kubelet[2772]: E1216 13:14:09.962663 2772 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-apiserver-ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:09.963063 kubelet[2772]: E1216 13:14:09.962512 2772 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-scheduler-ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:10.030514 kubelet[2772]: I1216 13:14:10.029711 2772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" podStartSLOduration=1.02965335 podStartE2EDuration="1.02965335s" 
podCreationTimestamp="2025-12-16 13:14:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:14:09.997469489 +0000 UTC m=+1.372800105" watchObservedRunningTime="2025-12-16 13:14:10.02965335 +0000 UTC m=+1.404983966" Dec 16 13:14:10.071945 kubelet[2772]: I1216 13:14:10.071800 2772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" podStartSLOduration=1.071772089 podStartE2EDuration="1.071772089s" podCreationTimestamp="2025-12-16 13:14:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:14:10.032208551 +0000 UTC m=+1.407539169" watchObservedRunningTime="2025-12-16 13:14:10.071772089 +0000 UTC m=+1.447102707" Dec 16 13:14:12.767449 kubelet[2772]: I1216 13:14:12.767399 2772 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 16 13:14:12.768414 containerd[1515]: time="2025-12-16T13:14:12.768089947Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Dec 16 13:14:12.769053 kubelet[2772]: I1216 13:14:12.768465 2772 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 16 13:14:13.471705 kubelet[2772]: I1216 13:14:13.471622 2772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" podStartSLOduration=6.471596128 podStartE2EDuration="6.471596128s" podCreationTimestamp="2025-12-16 13:14:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:14:10.072966902 +0000 UTC m=+1.448297518" watchObservedRunningTime="2025-12-16 13:14:13.471596128 +0000 UTC m=+4.846926748" Dec 16 13:14:13.491370 systemd[1]: Created slice kubepods-besteffort-podde4568aa_ce71_4c3d_bede_6c2f88f451ed.slice - libcontainer container kubepods-besteffort-podde4568aa_ce71_4c3d_bede_6c2f88f451ed.slice. Dec 16 13:14:13.547158 kubelet[2772]: I1216 13:14:13.547016 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/de4568aa-ce71-4c3d-bede-6c2f88f451ed-xtables-lock\") pod \"kube-proxy-67w48\" (UID: \"de4568aa-ce71-4c3d-bede-6c2f88f451ed\") " pod="kube-system/kube-proxy-67w48" Dec 16 13:14:13.547708 kubelet[2772]: I1216 13:14:13.547647 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/de4568aa-ce71-4c3d-bede-6c2f88f451ed-lib-modules\") pod \"kube-proxy-67w48\" (UID: \"de4568aa-ce71-4c3d-bede-6c2f88f451ed\") " pod="kube-system/kube-proxy-67w48" Dec 16 13:14:13.548100 kubelet[2772]: I1216 13:14:13.548072 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/de4568aa-ce71-4c3d-bede-6c2f88f451ed-kube-proxy\") 
pod \"kube-proxy-67w48\" (UID: \"de4568aa-ce71-4c3d-bede-6c2f88f451ed\") " pod="kube-system/kube-proxy-67w48" Dec 16 13:14:13.548100 kubelet[2772]: I1216 13:14:13.548164 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bzr9\" (UniqueName: \"kubernetes.io/projected/de4568aa-ce71-4c3d-bede-6c2f88f451ed-kube-api-access-9bzr9\") pod \"kube-proxy-67w48\" (UID: \"de4568aa-ce71-4c3d-bede-6c2f88f451ed\") " pod="kube-system/kube-proxy-67w48" Dec 16 13:14:13.658378 kubelet[2772]: E1216 13:14:13.658306 2772 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Dec 16 13:14:13.658378 kubelet[2772]: E1216 13:14:13.658369 2772 projected.go:196] Error preparing data for projected volume kube-api-access-9bzr9 for pod kube-system/kube-proxy-67w48: configmap "kube-root-ca.crt" not found Dec 16 13:14:13.660031 kubelet[2772]: E1216 13:14:13.659976 2772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/de4568aa-ce71-4c3d-bede-6c2f88f451ed-kube-api-access-9bzr9 podName:de4568aa-ce71-4c3d-bede-6c2f88f451ed nodeName:}" failed. No retries permitted until 2025-12-16 13:14:14.158463071 +0000 UTC m=+5.533793667 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-9bzr9" (UniqueName: "kubernetes.io/projected/de4568aa-ce71-4c3d-bede-6c2f88f451ed-kube-api-access-9bzr9") pod "kube-proxy-67w48" (UID: "de4568aa-ce71-4c3d-bede-6c2f88f451ed") : configmap "kube-root-ca.crt" not found Dec 16 13:14:14.023145 systemd[1]: Created slice kubepods-besteffort-pod8a7a4716_62b7_42e6_ac08_a8eea1bc07fe.slice - libcontainer container kubepods-besteffort-pod8a7a4716_62b7_42e6_ac08_a8eea1bc07fe.slice. 
Dec 16 13:14:14.052718 kubelet[2772]: I1216 13:14:14.052659 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8a7a4716-62b7-42e6-ac08-a8eea1bc07fe-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-gk9fh\" (UID: \"8a7a4716-62b7-42e6-ac08-a8eea1bc07fe\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-gk9fh" Dec 16 13:14:14.053718 kubelet[2772]: I1216 13:14:14.053178 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqhdj\" (UniqueName: \"kubernetes.io/projected/8a7a4716-62b7-42e6-ac08-a8eea1bc07fe-kube-api-access-cqhdj\") pod \"tigera-operator-65cdcdfd6d-gk9fh\" (UID: \"8a7a4716-62b7-42e6-ac08-a8eea1bc07fe\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-gk9fh" Dec 16 13:14:14.332394 containerd[1515]: time="2025-12-16T13:14:14.332081681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-gk9fh,Uid:8a7a4716-62b7-42e6-ac08-a8eea1bc07fe,Namespace:tigera-operator,Attempt:0,}" Dec 16 13:14:14.361938 containerd[1515]: time="2025-12-16T13:14:14.361498183Z" level=info msg="connecting to shim ab41cbd143724e99f61fe50b9e0287befb49ae2a3758b12475d7a05f3882d8a5" address="unix:///run/containerd/s/4914224c3d887efa9283f54726c5313bd398a469b36ab4b2a3f9015c21c00c69" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:14:14.404196 systemd[1]: Started cri-containerd-ab41cbd143724e99f61fe50b9e0287befb49ae2a3758b12475d7a05f3882d8a5.scope - libcontainer container ab41cbd143724e99f61fe50b9e0287befb49ae2a3758b12475d7a05f3882d8a5. 
Dec 16 13:14:14.405740 containerd[1515]: time="2025-12-16T13:14:14.405661115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-67w48,Uid:de4568aa-ce71-4c3d-bede-6c2f88f451ed,Namespace:kube-system,Attempt:0,}" Dec 16 13:14:14.442193 containerd[1515]: time="2025-12-16T13:14:14.442133079Z" level=info msg="connecting to shim 01fbe56fce2bcee575cef7332a5389a43157a53c4a1ff168876f345b1a413ff1" address="unix:///run/containerd/s/e222d8d77fb7eb8e11c01d53cf2dd88d6c640fb6a2a07eab6ab9b00bec781fe7" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:14:14.481328 systemd[1]: Started cri-containerd-01fbe56fce2bcee575cef7332a5389a43157a53c4a1ff168876f345b1a413ff1.scope - libcontainer container 01fbe56fce2bcee575cef7332a5389a43157a53c4a1ff168876f345b1a413ff1. Dec 16 13:14:14.537227 containerd[1515]: time="2025-12-16T13:14:14.536599431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-gk9fh,Uid:8a7a4716-62b7-42e6-ac08-a8eea1bc07fe,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"ab41cbd143724e99f61fe50b9e0287befb49ae2a3758b12475d7a05f3882d8a5\"" Dec 16 13:14:14.543047 containerd[1515]: time="2025-12-16T13:14:14.542079149Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 16 13:14:14.546943 containerd[1515]: time="2025-12-16T13:14:14.546855988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-67w48,Uid:de4568aa-ce71-4c3d-bede-6c2f88f451ed,Namespace:kube-system,Attempt:0,} returns sandbox id \"01fbe56fce2bcee575cef7332a5389a43157a53c4a1ff168876f345b1a413ff1\"" Dec 16 13:14:14.556406 containerd[1515]: time="2025-12-16T13:14:14.556359768Z" level=info msg="CreateContainer within sandbox \"01fbe56fce2bcee575cef7332a5389a43157a53c4a1ff168876f345b1a413ff1\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 16 13:14:14.567675 containerd[1515]: time="2025-12-16T13:14:14.567632880Z" level=info msg="Container 
1178b9c90938f4ac793e08f6d8bffa208390b6e6fd89d37060f037fdced38ce1: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:14:14.576530 containerd[1515]: time="2025-12-16T13:14:14.576461967Z" level=info msg="CreateContainer within sandbox \"01fbe56fce2bcee575cef7332a5389a43157a53c4a1ff168876f345b1a413ff1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1178b9c90938f4ac793e08f6d8bffa208390b6e6fd89d37060f037fdced38ce1\"" Dec 16 13:14:14.577928 containerd[1515]: time="2025-12-16T13:14:14.577717692Z" level=info msg="StartContainer for \"1178b9c90938f4ac793e08f6d8bffa208390b6e6fd89d37060f037fdced38ce1\"" Dec 16 13:14:14.584527 containerd[1515]: time="2025-12-16T13:14:14.584385678Z" level=info msg="connecting to shim 1178b9c90938f4ac793e08f6d8bffa208390b6e6fd89d37060f037fdced38ce1" address="unix:///run/containerd/s/e222d8d77fb7eb8e11c01d53cf2dd88d6c640fb6a2a07eab6ab9b00bec781fe7" protocol=ttrpc version=3 Dec 16 13:14:14.622225 systemd[1]: Started cri-containerd-1178b9c90938f4ac793e08f6d8bffa208390b6e6fd89d37060f037fdced38ce1.scope - libcontainer container 1178b9c90938f4ac793e08f6d8bffa208390b6e6fd89d37060f037fdced38ce1. 
Dec 16 13:14:14.708694 containerd[1515]: time="2025-12-16T13:14:14.708461723Z" level=info msg="StartContainer for \"1178b9c90938f4ac793e08f6d8bffa208390b6e6fd89d37060f037fdced38ce1\" returns successfully" Dec 16 13:14:14.983771 kubelet[2772]: I1216 13:14:14.983447 2772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-67w48" podStartSLOduration=1.9833329910000002 podStartE2EDuration="1.983332991s" podCreationTimestamp="2025-12-16 13:14:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:14:14.983154259 +0000 UTC m=+6.358484876" watchObservedRunningTime="2025-12-16 13:14:14.983332991 +0000 UTC m=+6.358663609" Dec 16 13:14:15.700631 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2286173400.mount: Deactivated successfully. Dec 16 13:14:16.433050 update_engine[1506]: I20251216 13:14:16.432966 1506 update_attempter.cc:509] Updating boot flags... 
Dec 16 13:14:17.088550 containerd[1515]: time="2025-12-16T13:14:17.088481185Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:14:17.091176 containerd[1515]: time="2025-12-16T13:14:17.091113193Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Dec 16 13:14:17.093932 containerd[1515]: time="2025-12-16T13:14:17.092948310Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:14:17.100538 containerd[1515]: time="2025-12-16T13:14:17.100486045Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:14:17.103051 containerd[1515]: time="2025-12-16T13:14:17.101947147Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.558852758s" Dec 16 13:14:17.103051 containerd[1515]: time="2025-12-16T13:14:17.102002994Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Dec 16 13:14:17.110616 containerd[1515]: time="2025-12-16T13:14:17.110548068Z" level=info msg="CreateContainer within sandbox \"ab41cbd143724e99f61fe50b9e0287befb49ae2a3758b12475d7a05f3882d8a5\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 16 13:14:17.122758 containerd[1515]: time="2025-12-16T13:14:17.120255528Z" level=info msg="Container 
335d01f028651cf12607138cf2712e2e88f724bd2d853318ee44a7bb589b95e7: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:14:17.135268 containerd[1515]: time="2025-12-16T13:14:17.135197671Z" level=info msg="CreateContainer within sandbox \"ab41cbd143724e99f61fe50b9e0287befb49ae2a3758b12475d7a05f3882d8a5\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"335d01f028651cf12607138cf2712e2e88f724bd2d853318ee44a7bb589b95e7\"" Dec 16 13:14:17.136228 containerd[1515]: time="2025-12-16T13:14:17.136050968Z" level=info msg="StartContainer for \"335d01f028651cf12607138cf2712e2e88f724bd2d853318ee44a7bb589b95e7\"" Dec 16 13:14:17.138259 containerd[1515]: time="2025-12-16T13:14:17.138200443Z" level=info msg="connecting to shim 335d01f028651cf12607138cf2712e2e88f724bd2d853318ee44a7bb589b95e7" address="unix:///run/containerd/s/4914224c3d887efa9283f54726c5313bd398a469b36ab4b2a3f9015c21c00c69" protocol=ttrpc version=3 Dec 16 13:14:17.171254 systemd[1]: Started cri-containerd-335d01f028651cf12607138cf2712e2e88f724bd2d853318ee44a7bb589b95e7.scope - libcontainer container 335d01f028651cf12607138cf2712e2e88f724bd2d853318ee44a7bb589b95e7. 
Dec 16 13:14:17.216470 containerd[1515]: time="2025-12-16T13:14:17.216401035Z" level=info msg="StartContainer for \"335d01f028651cf12607138cf2712e2e88f724bd2d853318ee44a7bb589b95e7\" returns successfully" Dec 16 13:14:18.010117 kubelet[2772]: I1216 13:14:18.010034 2772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-gk9fh" podStartSLOduration=2.447466468 podStartE2EDuration="5.010007654s" podCreationTimestamp="2025-12-16 13:14:13 +0000 UTC" firstStartedPulling="2025-12-16 13:14:14.541442525 +0000 UTC m=+5.916773117" lastFinishedPulling="2025-12-16 13:14:17.10398371 +0000 UTC m=+8.479314303" observedRunningTime="2025-12-16 13:14:17.994106856 +0000 UTC m=+9.369437472" watchObservedRunningTime="2025-12-16 13:14:18.010007654 +0000 UTC m=+9.385338272" Dec 16 13:14:22.699656 sudo[1831]: pam_unix(sudo:session): session closed for user root Dec 16 13:14:22.745295 sshd[1830]: Connection closed by 139.178.68.195 port 39510 Dec 16 13:14:22.746262 sshd-session[1827]: pam_unix(sshd:session): session closed for user core Dec 16 13:14:22.759303 systemd[1]: sshd@6-10.128.0.75:22-139.178.68.195:39510.service: Deactivated successfully. Dec 16 13:14:22.759973 systemd-logind[1505]: Session 7 logged out. Waiting for processes to exit. Dec 16 13:14:22.767567 systemd[1]: session-7.scope: Deactivated successfully. Dec 16 13:14:22.768971 systemd[1]: session-7.scope: Consumed 8.364s CPU time, 235.3M memory peak. Dec 16 13:14:22.779296 systemd-logind[1505]: Removed session 7. Dec 16 13:14:30.541281 systemd[1]: Created slice kubepods-besteffort-pode1eb3f64_f13f_4bf6_91f8_6242f6dd9661.slice - libcontainer container kubepods-besteffort-pode1eb3f64_f13f_4bf6_91f8_6242f6dd9661.slice. 
Dec 16 13:14:30.563623 kubelet[2772]: I1216 13:14:30.563559 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmdf9\" (UniqueName: \"kubernetes.io/projected/e1eb3f64-f13f-4bf6-91f8-6242f6dd9661-kube-api-access-lmdf9\") pod \"calico-typha-56fff44b46-mwqml\" (UID: \"e1eb3f64-f13f-4bf6-91f8-6242f6dd9661\") " pod="calico-system/calico-typha-56fff44b46-mwqml" Dec 16 13:14:30.564235 kubelet[2772]: I1216 13:14:30.563635 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e1eb3f64-f13f-4bf6-91f8-6242f6dd9661-typha-certs\") pod \"calico-typha-56fff44b46-mwqml\" (UID: \"e1eb3f64-f13f-4bf6-91f8-6242f6dd9661\") " pod="calico-system/calico-typha-56fff44b46-mwqml" Dec 16 13:14:30.564235 kubelet[2772]: I1216 13:14:30.563663 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e1eb3f64-f13f-4bf6-91f8-6242f6dd9661-tigera-ca-bundle\") pod \"calico-typha-56fff44b46-mwqml\" (UID: \"e1eb3f64-f13f-4bf6-91f8-6242f6dd9661\") " pod="calico-system/calico-typha-56fff44b46-mwqml" Dec 16 13:14:30.707523 systemd[1]: Created slice kubepods-besteffort-pod2942ef05_6f5c_4d21_b8b1_80778118c363.slice - libcontainer container kubepods-besteffort-pod2942ef05_6f5c_4d21_b8b1_80778118c363.slice. 
Dec 16 13:14:30.765375 kubelet[2772]: I1216 13:14:30.764308 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jht8k\" (UniqueName: \"kubernetes.io/projected/2942ef05-6f5c-4d21-b8b1-80778118c363-kube-api-access-jht8k\") pod \"calico-node-f5pl7\" (UID: \"2942ef05-6f5c-4d21-b8b1-80778118c363\") " pod="calico-system/calico-node-f5pl7" Dec 16 13:14:30.765375 kubelet[2772]: I1216 13:14:30.764553 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/2942ef05-6f5c-4d21-b8b1-80778118c363-cni-net-dir\") pod \"calico-node-f5pl7\" (UID: \"2942ef05-6f5c-4d21-b8b1-80778118c363\") " pod="calico-system/calico-node-f5pl7" Dec 16 13:14:30.765375 kubelet[2772]: I1216 13:14:30.764587 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/2942ef05-6f5c-4d21-b8b1-80778118c363-flexvol-driver-host\") pod \"calico-node-f5pl7\" (UID: \"2942ef05-6f5c-4d21-b8b1-80778118c363\") " pod="calico-system/calico-node-f5pl7" Dec 16 13:14:30.765375 kubelet[2772]: I1216 13:14:30.764615 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2942ef05-6f5c-4d21-b8b1-80778118c363-var-lib-calico\") pod \"calico-node-f5pl7\" (UID: \"2942ef05-6f5c-4d21-b8b1-80778118c363\") " pod="calico-system/calico-node-f5pl7" Dec 16 13:14:30.765375 kubelet[2772]: I1216 13:14:30.764646 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/2942ef05-6f5c-4d21-b8b1-80778118c363-var-run-calico\") pod \"calico-node-f5pl7\" (UID: \"2942ef05-6f5c-4d21-b8b1-80778118c363\") " pod="calico-system/calico-node-f5pl7" Dec 16 13:14:30.765771 kubelet[2772]: I1216 
13:14:30.764672 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/2942ef05-6f5c-4d21-b8b1-80778118c363-cni-bin-dir\") pod \"calico-node-f5pl7\" (UID: \"2942ef05-6f5c-4d21-b8b1-80778118c363\") " pod="calico-system/calico-node-f5pl7" Dec 16 13:14:30.765771 kubelet[2772]: I1216 13:14:30.764698 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/2942ef05-6f5c-4d21-b8b1-80778118c363-policysync\") pod \"calico-node-f5pl7\" (UID: \"2942ef05-6f5c-4d21-b8b1-80778118c363\") " pod="calico-system/calico-node-f5pl7" Dec 16 13:14:30.765771 kubelet[2772]: I1216 13:14:30.764721 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2942ef05-6f5c-4d21-b8b1-80778118c363-xtables-lock\") pod \"calico-node-f5pl7\" (UID: \"2942ef05-6f5c-4d21-b8b1-80778118c363\") " pod="calico-system/calico-node-f5pl7" Dec 16 13:14:30.765771 kubelet[2772]: I1216 13:14:30.764747 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/2942ef05-6f5c-4d21-b8b1-80778118c363-cni-log-dir\") pod \"calico-node-f5pl7\" (UID: \"2942ef05-6f5c-4d21-b8b1-80778118c363\") " pod="calico-system/calico-node-f5pl7" Dec 16 13:14:30.765771 kubelet[2772]: I1216 13:14:30.764773 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/2942ef05-6f5c-4d21-b8b1-80778118c363-node-certs\") pod \"calico-node-f5pl7\" (UID: \"2942ef05-6f5c-4d21-b8b1-80778118c363\") " pod="calico-system/calico-node-f5pl7" Dec 16 13:14:30.766032 kubelet[2772]: I1216 13:14:30.764796 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2942ef05-6f5c-4d21-b8b1-80778118c363-lib-modules\") pod \"calico-node-f5pl7\" (UID: \"2942ef05-6f5c-4d21-b8b1-80778118c363\") " pod="calico-system/calico-node-f5pl7" Dec 16 13:14:30.766032 kubelet[2772]: I1216 13:14:30.764830 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2942ef05-6f5c-4d21-b8b1-80778118c363-tigera-ca-bundle\") pod \"calico-node-f5pl7\" (UID: \"2942ef05-6f5c-4d21-b8b1-80778118c363\") " pod="calico-system/calico-node-f5pl7" Dec 16 13:14:30.827022 kubelet[2772]: E1216 13:14:30.825316 2772 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z5rm2" podUID="a7531109-4d4b-4a8c-8928-f52ad25f6b55" Dec 16 13:14:30.855526 containerd[1515]: time="2025-12-16T13:14:30.855456183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-56fff44b46-mwqml,Uid:e1eb3f64-f13f-4bf6-91f8-6242f6dd9661,Namespace:calico-system,Attempt:0,}" Dec 16 13:14:30.866035 kubelet[2772]: I1216 13:14:30.865969 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a7531109-4d4b-4a8c-8928-f52ad25f6b55-kubelet-dir\") pod \"csi-node-driver-z5rm2\" (UID: \"a7531109-4d4b-4a8c-8928-f52ad25f6b55\") " pod="calico-system/csi-node-driver-z5rm2" Dec 16 13:14:30.867306 kubelet[2772]: I1216 13:14:30.867215 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a7531109-4d4b-4a8c-8928-f52ad25f6b55-registration-dir\") pod \"csi-node-driver-z5rm2\" (UID: 
\"a7531109-4d4b-4a8c-8928-f52ad25f6b55\") " pod="calico-system/csi-node-driver-z5rm2" Dec 16 13:14:30.867306 kubelet[2772]: I1216 13:14:30.867301 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdl6m\" (UniqueName: \"kubernetes.io/projected/a7531109-4d4b-4a8c-8928-f52ad25f6b55-kube-api-access-cdl6m\") pod \"csi-node-driver-z5rm2\" (UID: \"a7531109-4d4b-4a8c-8928-f52ad25f6b55\") " pod="calico-system/csi-node-driver-z5rm2" Dec 16 13:14:30.867498 kubelet[2772]: I1216 13:14:30.867411 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a7531109-4d4b-4a8c-8928-f52ad25f6b55-socket-dir\") pod \"csi-node-driver-z5rm2\" (UID: \"a7531109-4d4b-4a8c-8928-f52ad25f6b55\") " pod="calico-system/csi-node-driver-z5rm2" Dec 16 13:14:30.867498 kubelet[2772]: I1216 13:14:30.867438 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a7531109-4d4b-4a8c-8928-f52ad25f6b55-varrun\") pod \"csi-node-driver-z5rm2\" (UID: \"a7531109-4d4b-4a8c-8928-f52ad25f6b55\") " pod="calico-system/csi-node-driver-z5rm2" Dec 16 13:14:30.871194 kubelet[2772]: E1216 13:14:30.870070 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:30.871194 kubelet[2772]: W1216 13:14:30.870097 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:30.871194 kubelet[2772]: E1216 13:14:30.870124 2772 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:14:30.931932 kubelet[2772]: E1216 13:14:30.931578 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:30.931932 kubelet[2772]: W1216 13:14:30.931609 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:30.931932 kubelet[2772]: E1216 13:14:30.931656 2772 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:14:30.938521 containerd[1515]: time="2025-12-16T13:14:30.938208203Z" level=info msg="connecting to shim 7eb027f48e05ec7e2fe6f941f7aa779847f063e41add2d53f8f13d90451fcff4" address="unix:///run/containerd/s/2ba7ceef5668d2ca205fd96ad9a888a32684b341e07a007f04d1fa5b73dfbf9f" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:14:30.950014 kubelet[2772]: E1216 13:14:30.949732 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:30.950014 kubelet[2772]: W1216 13:14:30.949766 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:30.950014 kubelet[2772]: E1216 13:14:30.949798 2772 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:14:31.015212 kubelet[2772]: E1216 13:14:31.013848 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:31.015212 kubelet[2772]: W1216 13:14:31.013875 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:31.015212 kubelet[2772]: E1216 13:14:31.013897 2772 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:14:31.015507 kubelet[2772]: E1216 13:14:31.015489 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:31.016588 kubelet[2772]: W1216 13:14:31.015747 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:31.016588 kubelet[2772]: E1216 13:14:31.015775 2772 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:14:31.017116 kubelet[2772]: E1216 13:14:31.017094 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:31.019538 containerd[1515]: time="2025-12-16T13:14:31.019217816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-f5pl7,Uid:2942ef05-6f5c-4d21-b8b1-80778118c363,Namespace:calico-system,Attempt:0,}" Dec 16 13:14:31.019647 kubelet[2772]: W1216 13:14:31.017258 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:31.019647 kubelet[2772]: E1216 13:14:31.019292 2772 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:14:31.020935 kubelet[2772]: E1216 13:14:31.019882 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:31.020935 kubelet[2772]: W1216 13:14:31.019900 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:31.020935 kubelet[2772]: E1216 13:14:31.019940 2772 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:14:31.021617 kubelet[2772]: E1216 13:14:31.021597 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:31.023997 kubelet[2772]: W1216 13:14:31.021747 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:31.023997 kubelet[2772]: E1216 13:14:31.021773 2772 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:14:31.024891 kubelet[2772]: E1216 13:14:31.024631 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:31.024891 kubelet[2772]: W1216 13:14:31.024653 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:31.024891 kubelet[2772]: E1216 13:14:31.024684 2772 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:14:31.028944 kubelet[2772]: E1216 13:14:31.027013 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:31.028944 kubelet[2772]: W1216 13:14:31.027035 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:31.028944 kubelet[2772]: E1216 13:14:31.027059 2772 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:14:31.062187 systemd[1]: Started cri-containerd-7eb027f48e05ec7e2fe6f941f7aa779847f063e41add2d53f8f13d90451fcff4.scope - libcontainer container 7eb027f48e05ec7e2fe6f941f7aa779847f063e41add2d53f8f13d90451fcff4. Dec 16 13:14:31.077690 kubelet[2772]: E1216 13:14:31.077148 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:31.077690 kubelet[2772]: W1216 13:14:31.077190 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:31.077690 kubelet[2772]: E1216 13:14:31.077221 2772 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:14:31.091163 containerd[1515]: time="2025-12-16T13:14:31.090989485Z" level=info msg="connecting to shim c4838ba2730a2bcf4b1c392b5d98d68e510d72e0ad241d308d59a1d9618abe21" address="unix:///run/containerd/s/e4d5d701ddcd7070ab7298e1335ff60b64ed3a5eedbd9f2ddb3e8249278f9ac2" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:14:31.144940 systemd[1]: Started cri-containerd-c4838ba2730a2bcf4b1c392b5d98d68e510d72e0ad241d308d59a1d9618abe21.scope - libcontainer container c4838ba2730a2bcf4b1c392b5d98d68e510d72e0ad241d308d59a1d9618abe21. Dec 16 13:14:31.221262 containerd[1515]: time="2025-12-16T13:14:31.221107477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-f5pl7,Uid:2942ef05-6f5c-4d21-b8b1-80778118c363,Namespace:calico-system,Attempt:0,} returns sandbox id \"c4838ba2730a2bcf4b1c392b5d98d68e510d72e0ad241d308d59a1d9618abe21\"" Dec 16 13:14:31.226104 containerd[1515]: time="2025-12-16T13:14:31.226030736Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 16 13:14:31.277731 containerd[1515]: time="2025-12-16T13:14:31.277678790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-56fff44b46-mwqml,Uid:e1eb3f64-f13f-4bf6-91f8-6242f6dd9661,Namespace:calico-system,Attempt:0,} returns sandbox id \"7eb027f48e05ec7e2fe6f941f7aa779847f063e41add2d53f8f13d90451fcff4\"" Dec 16 13:14:32.138245 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3842005548.mount: Deactivated successfully. 
Dec 16 13:14:32.284732 containerd[1515]: time="2025-12-16T13:14:32.284643241Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:14:32.286492 containerd[1515]: time="2025-12-16T13:14:32.286398511Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=5941492" Dec 16 13:14:32.289060 containerd[1515]: time="2025-12-16T13:14:32.289014503Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:14:32.293403 containerd[1515]: time="2025-12-16T13:14:32.292392741Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:14:32.293403 containerd[1515]: time="2025-12-16T13:14:32.293331840Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.067217911s" Dec 16 13:14:32.293403 containerd[1515]: time="2025-12-16T13:14:32.293374710Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Dec 16 13:14:32.295403 containerd[1515]: time="2025-12-16T13:14:32.295361396Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 16 13:14:32.301210 containerd[1515]: time="2025-12-16T13:14:32.301170815Z" level=info msg="CreateContainer within 
sandbox \"c4838ba2730a2bcf4b1c392b5d98d68e510d72e0ad241d308d59a1d9618abe21\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 16 13:14:32.316930 containerd[1515]: time="2025-12-16T13:14:32.312283241Z" level=info msg="Container 74565d2cb7318f71ee01ef767e7e4243c99fbf32d95466094a34954564c920f6: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:14:32.329400 containerd[1515]: time="2025-12-16T13:14:32.329326935Z" level=info msg="CreateContainer within sandbox \"c4838ba2730a2bcf4b1c392b5d98d68e510d72e0ad241d308d59a1d9618abe21\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"74565d2cb7318f71ee01ef767e7e4243c99fbf32d95466094a34954564c920f6\"" Dec 16 13:14:32.331037 containerd[1515]: time="2025-12-16T13:14:32.330992365Z" level=info msg="StartContainer for \"74565d2cb7318f71ee01ef767e7e4243c99fbf32d95466094a34954564c920f6\"" Dec 16 13:14:32.333343 containerd[1515]: time="2025-12-16T13:14:32.333299058Z" level=info msg="connecting to shim 74565d2cb7318f71ee01ef767e7e4243c99fbf32d95466094a34954564c920f6" address="unix:///run/containerd/s/e4d5d701ddcd7070ab7298e1335ff60b64ed3a5eedbd9f2ddb3e8249278f9ac2" protocol=ttrpc version=3 Dec 16 13:14:32.377442 systemd[1]: Started cri-containerd-74565d2cb7318f71ee01ef767e7e4243c99fbf32d95466094a34954564c920f6.scope - libcontainer container 74565d2cb7318f71ee01ef767e7e4243c99fbf32d95466094a34954564c920f6. Dec 16 13:14:32.495458 containerd[1515]: time="2025-12-16T13:14:32.495107184Z" level=info msg="StartContainer for \"74565d2cb7318f71ee01ef767e7e4243c99fbf32d95466094a34954564c920f6\" returns successfully" Dec 16 13:14:32.517421 systemd[1]: cri-containerd-74565d2cb7318f71ee01ef767e7e4243c99fbf32d95466094a34954564c920f6.scope: Deactivated successfully. 
Dec 16 13:14:32.525872 containerd[1515]: time="2025-12-16T13:14:32.525799149Z" level=info msg="received container exit event container_id:\"74565d2cb7318f71ee01ef767e7e4243c99fbf32d95466094a34954564c920f6\" id:\"74565d2cb7318f71ee01ef767e7e4243c99fbf32d95466094a34954564c920f6\" pid:3357 exited_at:{seconds:1765890872 nanos:525375804}" Dec 16 13:14:32.889970 kubelet[2772]: E1216 13:14:32.889235 2772 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z5rm2" podUID="a7531109-4d4b-4a8c-8928-f52ad25f6b55" Dec 16 13:14:34.646850 containerd[1515]: time="2025-12-16T13:14:34.646785142Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:14:34.648087 containerd[1515]: time="2025-12-16T13:14:34.647900076Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33739890" Dec 16 13:14:34.649997 containerd[1515]: time="2025-12-16T13:14:34.649495589Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:14:34.652715 containerd[1515]: time="2025-12-16T13:14:34.652671340Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:14:34.653969 containerd[1515]: time="2025-12-16T13:14:34.653889503Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest 
\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.358479067s" Dec 16 13:14:34.654154 containerd[1515]: time="2025-12-16T13:14:34.654124534Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Dec 16 13:14:34.656322 containerd[1515]: time="2025-12-16T13:14:34.656286378Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 16 13:14:34.680202 containerd[1515]: time="2025-12-16T13:14:34.680149973Z" level=info msg="CreateContainer within sandbox \"7eb027f48e05ec7e2fe6f941f7aa779847f063e41add2d53f8f13d90451fcff4\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 16 13:14:34.693177 containerd[1515]: time="2025-12-16T13:14:34.693119852Z" level=info msg="Container 1c7959a7f7beb79128e5e352a3763e410f5b5d75c3f711d54408dbe852337acc: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:14:34.709936 containerd[1515]: time="2025-12-16T13:14:34.709086201Z" level=info msg="CreateContainer within sandbox \"7eb027f48e05ec7e2fe6f941f7aa779847f063e41add2d53f8f13d90451fcff4\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"1c7959a7f7beb79128e5e352a3763e410f5b5d75c3f711d54408dbe852337acc\"" Dec 16 13:14:34.713804 containerd[1515]: time="2025-12-16T13:14:34.712637550Z" level=info msg="StartContainer for \"1c7959a7f7beb79128e5e352a3763e410f5b5d75c3f711d54408dbe852337acc\"" Dec 16 13:14:34.719276 containerd[1515]: time="2025-12-16T13:14:34.719180767Z" level=info msg="connecting to shim 1c7959a7f7beb79128e5e352a3763e410f5b5d75c3f711d54408dbe852337acc" address="unix:///run/containerd/s/2ba7ceef5668d2ca205fd96ad9a888a32684b341e07a007f04d1fa5b73dfbf9f" protocol=ttrpc version=3 Dec 16 13:14:34.765251 systemd[1]: Started cri-containerd-1c7959a7f7beb79128e5e352a3763e410f5b5d75c3f711d54408dbe852337acc.scope - libcontainer container 
1c7959a7f7beb79128e5e352a3763e410f5b5d75c3f711d54408dbe852337acc. Dec 16 13:14:34.850792 containerd[1515]: time="2025-12-16T13:14:34.850745473Z" level=info msg="StartContainer for \"1c7959a7f7beb79128e5e352a3763e410f5b5d75c3f711d54408dbe852337acc\" returns successfully" Dec 16 13:14:34.889257 kubelet[2772]: E1216 13:14:34.889175 2772 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z5rm2" podUID="a7531109-4d4b-4a8c-8928-f52ad25f6b55" Dec 16 13:14:36.087784 kubelet[2772]: I1216 13:14:36.087657 2772 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 13:14:36.896149 kubelet[2772]: E1216 13:14:36.894258 2772 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z5rm2" podUID="a7531109-4d4b-4a8c-8928-f52ad25f6b55" Dec 16 13:14:37.840512 containerd[1515]: time="2025-12-16T13:14:37.840439735Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:14:37.841679 containerd[1515]: time="2025-12-16T13:14:37.841640938Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Dec 16 13:14:37.843951 containerd[1515]: time="2025-12-16T13:14:37.842774816Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:14:37.845755 containerd[1515]: time="2025-12-16T13:14:37.845687015Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:14:37.846865 containerd[1515]: time="2025-12-16T13:14:37.846828414Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.190496278s" Dec 16 13:14:37.847060 containerd[1515]: time="2025-12-16T13:14:37.847029870Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Dec 16 13:14:37.853165 containerd[1515]: time="2025-12-16T13:14:37.853118612Z" level=info msg="CreateContainer within sandbox \"c4838ba2730a2bcf4b1c392b5d98d68e510d72e0ad241d308d59a1d9618abe21\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 16 13:14:37.863954 containerd[1515]: time="2025-12-16T13:14:37.862653367Z" level=info msg="Container e7ad2ce7e36657734b106f020302d92c780250ca6ed61665daeb86fb8a2dd58c: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:14:37.880242 containerd[1515]: time="2025-12-16T13:14:37.880179203Z" level=info msg="CreateContainer within sandbox \"c4838ba2730a2bcf4b1c392b5d98d68e510d72e0ad241d308d59a1d9618abe21\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e7ad2ce7e36657734b106f020302d92c780250ca6ed61665daeb86fb8a2dd58c\"" Dec 16 13:14:37.881963 containerd[1515]: time="2025-12-16T13:14:37.881088893Z" level=info msg="StartContainer for \"e7ad2ce7e36657734b106f020302d92c780250ca6ed61665daeb86fb8a2dd58c\"" Dec 16 13:14:37.883487 containerd[1515]: time="2025-12-16T13:14:37.883442341Z" level=info msg="connecting to shim 
e7ad2ce7e36657734b106f020302d92c780250ca6ed61665daeb86fb8a2dd58c" address="unix:///run/containerd/s/e4d5d701ddcd7070ab7298e1335ff60b64ed3a5eedbd9f2ddb3e8249278f9ac2" protocol=ttrpc version=3 Dec 16 13:14:37.919134 systemd[1]: Started cri-containerd-e7ad2ce7e36657734b106f020302d92c780250ca6ed61665daeb86fb8a2dd58c.scope - libcontainer container e7ad2ce7e36657734b106f020302d92c780250ca6ed61665daeb86fb8a2dd58c. Dec 16 13:14:38.025667 containerd[1515]: time="2025-12-16T13:14:38.025617521Z" level=info msg="StartContainer for \"e7ad2ce7e36657734b106f020302d92c780250ca6ed61665daeb86fb8a2dd58c\" returns successfully" Dec 16 13:14:38.137425 kubelet[2772]: I1216 13:14:38.137259 2772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-56fff44b46-mwqml" podStartSLOduration=4.761669131 podStartE2EDuration="8.137202356s" podCreationTimestamp="2025-12-16 13:14:30 +0000 UTC" firstStartedPulling="2025-12-16 13:14:31.279734343 +0000 UTC m=+22.655064933" lastFinishedPulling="2025-12-16 13:14:34.655267544 +0000 UTC m=+26.030598158" observedRunningTime="2025-12-16 13:14:35.116061038 +0000 UTC m=+26.491391656" watchObservedRunningTime="2025-12-16 13:14:38.137202356 +0000 UTC m=+29.512532974" Dec 16 13:14:38.892611 kubelet[2772]: E1216 13:14:38.892532 2772 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z5rm2" podUID="a7531109-4d4b-4a8c-8928-f52ad25f6b55" Dec 16 13:14:39.099863 containerd[1515]: time="2025-12-16T13:14:39.099596404Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 13:14:39.102317 systemd[1]: 
cri-containerd-e7ad2ce7e36657734b106f020302d92c780250ca6ed61665daeb86fb8a2dd58c.scope: Deactivated successfully. Dec 16 13:14:39.102746 systemd[1]: cri-containerd-e7ad2ce7e36657734b106f020302d92c780250ca6ed61665daeb86fb8a2dd58c.scope: Consumed 715ms CPU time, 195M memory peak, 171.3M written to disk. Dec 16 13:14:39.111487 containerd[1515]: time="2025-12-16T13:14:39.111417310Z" level=info msg="received container exit event container_id:\"e7ad2ce7e36657734b106f020302d92c780250ca6ed61665daeb86fb8a2dd58c\" id:\"e7ad2ce7e36657734b106f020302d92c780250ca6ed61665daeb86fb8a2dd58c\" pid:3459 exited_at:{seconds:1765890879 nanos:109122950}" Dec 16 13:14:39.145819 kubelet[2772]: I1216 13:14:39.145503 2772 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Dec 16 13:14:39.186658 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e7ad2ce7e36657734b106f020302d92c780250ca6ed61665daeb86fb8a2dd58c-rootfs.mount: Deactivated successfully. Dec 16 13:14:39.264677 systemd[1]: Created slice kubepods-burstable-pod511ccaed_67b8_4172_ab54_38134ffd645e.slice - libcontainer container kubepods-burstable-pod511ccaed_67b8_4172_ab54_38134ffd645e.slice. 
Dec 16 13:14:39.377157 kubelet[2772]: I1216 13:14:39.377092 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nfj4\" (UniqueName: \"kubernetes.io/projected/511ccaed-67b8-4172-ab54-38134ffd645e-kube-api-access-9nfj4\") pod \"coredns-66bc5c9577-vb5hx\" (UID: \"511ccaed-67b8-4172-ab54-38134ffd645e\") " pod="kube-system/coredns-66bc5c9577-vb5hx" Dec 16 13:14:39.377157 kubelet[2772]: I1216 13:14:39.377157 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/511ccaed-67b8-4172-ab54-38134ffd645e-config-volume\") pod \"coredns-66bc5c9577-vb5hx\" (UID: \"511ccaed-67b8-4172-ab54-38134ffd645e\") " pod="kube-system/coredns-66bc5c9577-vb5hx" Dec 16 13:14:39.453146 systemd[1]: Created slice kubepods-burstable-podd899c118_add8_4703_b1b2_47b81276ce81.slice - libcontainer container kubepods-burstable-podd899c118_add8_4703_b1b2_47b81276ce81.slice. Dec 16 13:14:39.559380 systemd[1]: Created slice kubepods-besteffort-poda987f064_9fb2_45ff_bff7_ad75d97d3211.slice - libcontainer container kubepods-besteffort-poda987f064_9fb2_45ff_bff7_ad75d97d3211.slice. 
Dec 16 13:14:39.579333 kubelet[2772]: I1216 13:14:39.579267 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z482s\" (UniqueName: \"kubernetes.io/projected/d899c118-add8-4703-b1b2-47b81276ce81-kube-api-access-z482s\") pod \"coredns-66bc5c9577-jpx9n\" (UID: \"d899c118-add8-4703-b1b2-47b81276ce81\") " pod="kube-system/coredns-66bc5c9577-jpx9n" Dec 16 13:14:39.579333 kubelet[2772]: I1216 13:14:39.579332 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d899c118-add8-4703-b1b2-47b81276ce81-config-volume\") pod \"coredns-66bc5c9577-jpx9n\" (UID: \"d899c118-add8-4703-b1b2-47b81276ce81\") " pod="kube-system/coredns-66bc5c9577-jpx9n" Dec 16 13:14:39.680465 kubelet[2772]: I1216 13:14:39.680255 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a987f064-9fb2-45ff-bff7-ad75d97d3211-tigera-ca-bundle\") pod \"calico-kube-controllers-5c497476dd-q67hl\" (UID: \"a987f064-9fb2-45ff-bff7-ad75d97d3211\") " pod="calico-system/calico-kube-controllers-5c497476dd-q67hl" Dec 16 13:14:39.766624 kubelet[2772]: I1216 13:14:39.680645 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpxzw\" (UniqueName: \"kubernetes.io/projected/a987f064-9fb2-45ff-bff7-ad75d97d3211-kube-api-access-hpxzw\") pod \"calico-kube-controllers-5c497476dd-q67hl\" (UID: \"a987f064-9fb2-45ff-bff7-ad75d97d3211\") " pod="calico-system/calico-kube-controllers-5c497476dd-q67hl" Dec 16 13:14:39.786510 containerd[1515]: time="2025-12-16T13:14:39.786183176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-vb5hx,Uid:511ccaed-67b8-4172-ab54-38134ffd645e,Namespace:kube-system,Attempt:0,}" Dec 16 13:14:39.835009 systemd[1]: Created slice 
kubepods-besteffort-podf96341ec_7be6_4b46_b044_72e177b23759.slice - libcontainer container kubepods-besteffort-podf96341ec_7be6_4b46_b044_72e177b23759.slice. Dec 16 13:14:39.905057 kubelet[2772]: I1216 13:14:39.882749 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f96341ec-7be6-4b46-b044-72e177b23759-calico-apiserver-certs\") pod \"calico-apiserver-864c65cd8b-nvfvd\" (UID: \"f96341ec-7be6-4b46-b044-72e177b23759\") " pod="calico-apiserver/calico-apiserver-864c65cd8b-nvfvd" Dec 16 13:14:39.905057 kubelet[2772]: I1216 13:14:39.882817 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nh8l\" (UniqueName: \"kubernetes.io/projected/f96341ec-7be6-4b46-b044-72e177b23759-kube-api-access-4nh8l\") pod \"calico-apiserver-864c65cd8b-nvfvd\" (UID: \"f96341ec-7be6-4b46-b044-72e177b23759\") " pod="calico-apiserver/calico-apiserver-864c65cd8b-nvfvd" Dec 16 13:14:39.956887 systemd[1]: Created slice kubepods-besteffort-pod9ac13d5a_ae98_4968_bf5e_a2b621311d79.slice - libcontainer container kubepods-besteffort-pod9ac13d5a_ae98_4968_bf5e_a2b621311d79.slice. 
Dec 16 13:14:40.084395 kubelet[2772]: I1216 13:14:40.083864 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfr5d\" (UniqueName: \"kubernetes.io/projected/9ac13d5a-ae98-4968-bf5e-a2b621311d79-kube-api-access-mfr5d\") pod \"whisker-77dd9f964-7whrl\" (UID: \"9ac13d5a-ae98-4968-bf5e-a2b621311d79\") " pod="calico-system/whisker-77dd9f964-7whrl" Dec 16 13:14:40.084395 kubelet[2772]: I1216 13:14:40.083958 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9ac13d5a-ae98-4968-bf5e-a2b621311d79-whisker-backend-key-pair\") pod \"whisker-77dd9f964-7whrl\" (UID: \"9ac13d5a-ae98-4968-bf5e-a2b621311d79\") " pod="calico-system/whisker-77dd9f964-7whrl" Dec 16 13:14:40.084395 kubelet[2772]: I1216 13:14:40.083986 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ac13d5a-ae98-4968-bf5e-a2b621311d79-whisker-ca-bundle\") pod \"whisker-77dd9f964-7whrl\" (UID: \"9ac13d5a-ae98-4968-bf5e-a2b621311d79\") " pod="calico-system/whisker-77dd9f964-7whrl" Dec 16 13:14:40.087417 containerd[1515]: time="2025-12-16T13:14:40.087030010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-jpx9n,Uid:d899c118-add8-4703-b1b2-47b81276ce81,Namespace:kube-system,Attempt:0,}" Dec 16 13:14:40.110157 systemd[1]: Created slice kubepods-besteffort-podb169b293_70bb_47ce_9436_682d8a8e825b.slice - libcontainer container kubepods-besteffort-podb169b293_70bb_47ce_9436_682d8a8e825b.slice. Dec 16 13:14:40.145253 systemd[1]: Created slice kubepods-besteffort-pod92bcad1c_8640_450a_974e_84155204fa1a.slice - libcontainer container kubepods-besteffort-pod92bcad1c_8640_450a_974e_84155204fa1a.slice. 
Dec 16 13:14:40.169084 containerd[1515]: time="2025-12-16T13:14:40.168534645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c497476dd-q67hl,Uid:a987f064-9fb2-45ff-bff7-ad75d97d3211,Namespace:calico-system,Attempt:0,}" Dec 16 13:14:40.179823 systemd[1]: Created slice kubepods-besteffort-podc5258db0_e1f3_4670_a2ed_9cdfef209601.slice - libcontainer container kubepods-besteffort-podc5258db0_e1f3_4670_a2ed_9cdfef209601.slice. Dec 16 13:14:40.205288 kubelet[2772]: I1216 13:14:40.202169 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7dhr\" (UniqueName: \"kubernetes.io/projected/b169b293-70bb-47ce-9436-682d8a8e825b-kube-api-access-t7dhr\") pod \"calico-apiserver-864c65cd8b-5wmt7\" (UID: \"b169b293-70bb-47ce-9436-682d8a8e825b\") " pod="calico-apiserver/calico-apiserver-864c65cd8b-5wmt7" Dec 16 13:14:40.207759 kubelet[2772]: I1216 13:14:40.207491 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/c5258db0-e1f3-4670-a2ed-9cdfef209601-goldmane-key-pair\") pod \"goldmane-7c778bb748-sjcz4\" (UID: \"c5258db0-e1f3-4670-a2ed-9cdfef209601\") " pod="calico-system/goldmane-7c778bb748-sjcz4" Dec 16 13:14:40.213618 kubelet[2772]: I1216 13:14:40.212411 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/92bcad1c-8640-450a-974e-84155204fa1a-calico-apiserver-certs\") pod \"calico-apiserver-569f6b4df6-2lfjk\" (UID: \"92bcad1c-8640-450a-974e-84155204fa1a\") " pod="calico-apiserver/calico-apiserver-569f6b4df6-2lfjk" Dec 16 13:14:40.213618 kubelet[2772]: I1216 13:14:40.212463 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rtx5\" (UniqueName: 
\"kubernetes.io/projected/92bcad1c-8640-450a-974e-84155204fa1a-kube-api-access-9rtx5\") pod \"calico-apiserver-569f6b4df6-2lfjk\" (UID: \"92bcad1c-8640-450a-974e-84155204fa1a\") " pod="calico-apiserver/calico-apiserver-569f6b4df6-2lfjk" Dec 16 13:14:40.213618 kubelet[2772]: I1216 13:14:40.212498 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5258db0-e1f3-4670-a2ed-9cdfef209601-config\") pod \"goldmane-7c778bb748-sjcz4\" (UID: \"c5258db0-e1f3-4670-a2ed-9cdfef209601\") " pod="calico-system/goldmane-7c778bb748-sjcz4" Dec 16 13:14:40.213618 kubelet[2772]: I1216 13:14:40.212526 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5258db0-e1f3-4670-a2ed-9cdfef209601-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-sjcz4\" (UID: \"c5258db0-e1f3-4670-a2ed-9cdfef209601\") " pod="calico-system/goldmane-7c778bb748-sjcz4" Dec 16 13:14:40.213618 kubelet[2772]: I1216 13:14:40.212571 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b169b293-70bb-47ce-9436-682d8a8e825b-calico-apiserver-certs\") pod \"calico-apiserver-864c65cd8b-5wmt7\" (UID: \"b169b293-70bb-47ce-9436-682d8a8e825b\") " pod="calico-apiserver/calico-apiserver-864c65cd8b-5wmt7" Dec 16 13:14:40.214431 kubelet[2772]: I1216 13:14:40.212604 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcdrs\" (UniqueName: \"kubernetes.io/projected/c5258db0-e1f3-4670-a2ed-9cdfef209601-kube-api-access-qcdrs\") pod \"goldmane-7c778bb748-sjcz4\" (UID: \"c5258db0-e1f3-4670-a2ed-9cdfef209601\") " pod="calico-system/goldmane-7c778bb748-sjcz4" Dec 16 13:14:40.221516 containerd[1515]: time="2025-12-16T13:14:40.221464688Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-864c65cd8b-nvfvd,Uid:f96341ec-7be6-4b46-b044-72e177b23759,Namespace:calico-apiserver,Attempt:0,}" Dec 16 13:14:40.279657 containerd[1515]: time="2025-12-16T13:14:40.279574126Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 16 13:14:40.454262 containerd[1515]: time="2025-12-16T13:14:40.454204270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-864c65cd8b-5wmt7,Uid:b169b293-70bb-47ce-9436-682d8a8e825b,Namespace:calico-apiserver,Attempt:0,}" Dec 16 13:14:40.474092 containerd[1515]: time="2025-12-16T13:14:40.473613331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-569f6b4df6-2lfjk,Uid:92bcad1c-8640-450a-974e-84155204fa1a,Namespace:calico-apiserver,Attempt:0,}" Dec 16 13:14:40.534088 containerd[1515]: time="2025-12-16T13:14:40.534024832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-sjcz4,Uid:c5258db0-e1f3-4670-a2ed-9cdfef209601,Namespace:calico-system,Attempt:0,}" Dec 16 13:14:40.534731 containerd[1515]: time="2025-12-16T13:14:40.534663178Z" level=error msg="Failed to destroy network for sandbox \"fa36da2b7f133e42d8938c4fa1e217e96c42c5b968f219880fd1aef2c3344467\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:14:40.539593 containerd[1515]: time="2025-12-16T13:14:40.539525512Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c497476dd-q67hl,Uid:a987f064-9fb2-45ff-bff7-ad75d97d3211,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa36da2b7f133e42d8938c4fa1e217e96c42c5b968f219880fd1aef2c3344467\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Dec 16 13:14:40.540177 kubelet[2772]: E1216 13:14:40.540114 2772 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa36da2b7f133e42d8938c4fa1e217e96c42c5b968f219880fd1aef2c3344467\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:14:40.541928 kubelet[2772]: E1216 13:14:40.541227 2772 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa36da2b7f133e42d8938c4fa1e217e96c42c5b968f219880fd1aef2c3344467\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5c497476dd-q67hl" Dec 16 13:14:40.541928 kubelet[2772]: E1216 13:14:40.541277 2772 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa36da2b7f133e42d8938c4fa1e217e96c42c5b968f219880fd1aef2c3344467\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5c497476dd-q67hl" Dec 16 13:14:40.541928 kubelet[2772]: E1216 13:14:40.541367 2772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5c497476dd-q67hl_calico-system(a987f064-9fb2-45ff-bff7-ad75d97d3211)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5c497476dd-q67hl_calico-system(a987f064-9fb2-45ff-bff7-ad75d97d3211)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"fa36da2b7f133e42d8938c4fa1e217e96c42c5b968f219880fd1aef2c3344467\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5c497476dd-q67hl" podUID="a987f064-9fb2-45ff-bff7-ad75d97d3211" Dec 16 13:14:40.568252 containerd[1515]: time="2025-12-16T13:14:40.568187336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-77dd9f964-7whrl,Uid:9ac13d5a-ae98-4968-bf5e-a2b621311d79,Namespace:calico-system,Attempt:0,}" Dec 16 13:14:40.599140 containerd[1515]: time="2025-12-16T13:14:40.599078116Z" level=error msg="Failed to destroy network for sandbox \"3d11e399d3fcbdb962dfc8bd730e88241fd148d291c82b7a0755f2caa1f2892e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:14:40.601140 containerd[1515]: time="2025-12-16T13:14:40.601073901Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-vb5hx,Uid:511ccaed-67b8-4172-ab54-38134ffd645e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d11e399d3fcbdb962dfc8bd730e88241fd148d291c82b7a0755f2caa1f2892e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:14:40.602004 kubelet[2772]: E1216 13:14:40.601345 2772 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d11e399d3fcbdb962dfc8bd730e88241fd148d291c82b7a0755f2caa1f2892e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Dec 16 13:14:40.602004 kubelet[2772]: E1216 13:14:40.601430 2772 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d11e399d3fcbdb962dfc8bd730e88241fd148d291c82b7a0755f2caa1f2892e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-vb5hx" Dec 16 13:14:40.602004 kubelet[2772]: E1216 13:14:40.601460 2772 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d11e399d3fcbdb962dfc8bd730e88241fd148d291c82b7a0755f2caa1f2892e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-vb5hx" Dec 16 13:14:40.602227 kubelet[2772]: E1216 13:14:40.601534 2772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-vb5hx_kube-system(511ccaed-67b8-4172-ab54-38134ffd645e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-vb5hx_kube-system(511ccaed-67b8-4172-ab54-38134ffd645e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3d11e399d3fcbdb962dfc8bd730e88241fd148d291c82b7a0755f2caa1f2892e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-vb5hx" podUID="511ccaed-67b8-4172-ab54-38134ffd645e" Dec 16 13:14:40.620141 containerd[1515]: time="2025-12-16T13:14:40.620050269Z" level=error msg="Failed to destroy network for sandbox 
\"ded15fd2d3edc0e6d7f9046822cd4d202cf062d82b1aa554b6e0c664126ced76\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:14:40.626518 containerd[1515]: time="2025-12-16T13:14:40.624639494Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-jpx9n,Uid:d899c118-add8-4703-b1b2-47b81276ce81,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ded15fd2d3edc0e6d7f9046822cd4d202cf062d82b1aa554b6e0c664126ced76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:14:40.627363 kubelet[2772]: E1216 13:14:40.626859 2772 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ded15fd2d3edc0e6d7f9046822cd4d202cf062d82b1aa554b6e0c664126ced76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:14:40.627363 kubelet[2772]: E1216 13:14:40.627011 2772 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ded15fd2d3edc0e6d7f9046822cd4d202cf062d82b1aa554b6e0c664126ced76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-jpx9n" Dec 16 13:14:40.627363 kubelet[2772]: E1216 13:14:40.627047 2772 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"ded15fd2d3edc0e6d7f9046822cd4d202cf062d82b1aa554b6e0c664126ced76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-jpx9n" Dec 16 13:14:40.627599 kubelet[2772]: E1216 13:14:40.627320 2772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-jpx9n_kube-system(d899c118-add8-4703-b1b2-47b81276ce81)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-jpx9n_kube-system(d899c118-add8-4703-b1b2-47b81276ce81)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ded15fd2d3edc0e6d7f9046822cd4d202cf062d82b1aa554b6e0c664126ced76\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-jpx9n" podUID="d899c118-add8-4703-b1b2-47b81276ce81" Dec 16 13:14:40.680694 containerd[1515]: time="2025-12-16T13:14:40.680603444Z" level=error msg="Failed to destroy network for sandbox \"748deb7035b88a0e4c1bee62be3edf88b296e01cf861c48ba827537948b0a252\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:14:40.682902 containerd[1515]: time="2025-12-16T13:14:40.682842769Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-864c65cd8b-nvfvd,Uid:f96341ec-7be6-4b46-b044-72e177b23759,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"748deb7035b88a0e4c1bee62be3edf88b296e01cf861c48ba827537948b0a252\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:14:40.683987 kubelet[2772]: E1216 13:14:40.683799 2772 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"748deb7035b88a0e4c1bee62be3edf88b296e01cf861c48ba827537948b0a252\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:14:40.685461 kubelet[2772]: E1216 13:14:40.684160 2772 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"748deb7035b88a0e4c1bee62be3edf88b296e01cf861c48ba827537948b0a252\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-864c65cd8b-nvfvd" Dec 16 13:14:40.685461 kubelet[2772]: E1216 13:14:40.684205 2772 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"748deb7035b88a0e4c1bee62be3edf88b296e01cf861c48ba827537948b0a252\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-864c65cd8b-nvfvd" Dec 16 13:14:40.685461 kubelet[2772]: E1216 13:14:40.684292 2772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-864c65cd8b-nvfvd_calico-apiserver(f96341ec-7be6-4b46-b044-72e177b23759)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-864c65cd8b-nvfvd_calico-apiserver(f96341ec-7be6-4b46-b044-72e177b23759)\\\": rpc error: code = Unknown desc = failed to setup network for 
sandbox \\\"748deb7035b88a0e4c1bee62be3edf88b296e01cf861c48ba827537948b0a252\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-864c65cd8b-nvfvd" podUID="f96341ec-7be6-4b46-b044-72e177b23759" Dec 16 13:14:40.763598 containerd[1515]: time="2025-12-16T13:14:40.763082007Z" level=error msg="Failed to destroy network for sandbox \"60244844b8ff9ccf817ec41a62a5c17bc6d9c085b3524202b9abd15597f5320a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:14:40.766942 containerd[1515]: time="2025-12-16T13:14:40.766152055Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-569f6b4df6-2lfjk,Uid:92bcad1c-8640-450a-974e-84155204fa1a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"60244844b8ff9ccf817ec41a62a5c17bc6d9c085b3524202b9abd15597f5320a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:14:40.767761 kubelet[2772]: E1216 13:14:40.767705 2772 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60244844b8ff9ccf817ec41a62a5c17bc6d9c085b3524202b9abd15597f5320a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:14:40.768583 kubelet[2772]: E1216 13:14:40.768541 2772 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"60244844b8ff9ccf817ec41a62a5c17bc6d9c085b3524202b9abd15597f5320a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-569f6b4df6-2lfjk" Dec 16 13:14:40.769017 kubelet[2772]: E1216 13:14:40.768773 2772 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60244844b8ff9ccf817ec41a62a5c17bc6d9c085b3524202b9abd15597f5320a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-569f6b4df6-2lfjk" Dec 16 13:14:40.770003 kubelet[2772]: E1216 13:14:40.769894 2772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-569f6b4df6-2lfjk_calico-apiserver(92bcad1c-8640-450a-974e-84155204fa1a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-569f6b4df6-2lfjk_calico-apiserver(92bcad1c-8640-450a-974e-84155204fa1a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"60244844b8ff9ccf817ec41a62a5c17bc6d9c085b3524202b9abd15597f5320a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-569f6b4df6-2lfjk" podUID="92bcad1c-8640-450a-974e-84155204fa1a" Dec 16 13:14:40.775283 containerd[1515]: time="2025-12-16T13:14:40.775214819Z" level=error msg="Failed to destroy network for sandbox \"83c215fccfcacc93e9837bb9dc1a5405d508bb7b87eefe74c5c0993c2f9b3df0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Dec 16 13:14:40.780864 containerd[1515]: time="2025-12-16T13:14:40.780787390Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-864c65cd8b-5wmt7,Uid:b169b293-70bb-47ce-9436-682d8a8e825b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"83c215fccfcacc93e9837bb9dc1a5405d508bb7b87eefe74c5c0993c2f9b3df0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:14:40.782309 kubelet[2772]: E1216 13:14:40.782245 2772 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83c215fccfcacc93e9837bb9dc1a5405d508bb7b87eefe74c5c0993c2f9b3df0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:14:40.783814 kubelet[2772]: E1216 13:14:40.782570 2772 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83c215fccfcacc93e9837bb9dc1a5405d508bb7b87eefe74c5c0993c2f9b3df0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-864c65cd8b-5wmt7" Dec 16 13:14:40.783814 kubelet[2772]: E1216 13:14:40.782613 2772 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83c215fccfcacc93e9837bb9dc1a5405d508bb7b87eefe74c5c0993c2f9b3df0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-864c65cd8b-5wmt7" Dec 16 13:14:40.783814 kubelet[2772]: E1216 13:14:40.782714 2772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-864c65cd8b-5wmt7_calico-apiserver(b169b293-70bb-47ce-9436-682d8a8e825b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-864c65cd8b-5wmt7_calico-apiserver(b169b293-70bb-47ce-9436-682d8a8e825b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"83c215fccfcacc93e9837bb9dc1a5405d508bb7b87eefe74c5c0993c2f9b3df0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-864c65cd8b-5wmt7" podUID="b169b293-70bb-47ce-9436-682d8a8e825b" Dec 16 13:14:40.788927 containerd[1515]: time="2025-12-16T13:14:40.788860223Z" level=error msg="Failed to destroy network for sandbox \"4953ca881be461b7c515adaa20bc99e6c3a076a9f503c76ef4e2277a54fa76e1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:14:40.792219 containerd[1515]: time="2025-12-16T13:14:40.792165929Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-sjcz4,Uid:c5258db0-e1f3-4670-a2ed-9cdfef209601,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4953ca881be461b7c515adaa20bc99e6c3a076a9f503c76ef4e2277a54fa76e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:14:40.792722 kubelet[2772]: E1216 13:14:40.792682 2772 log.go:32] "RunPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4953ca881be461b7c515adaa20bc99e6c3a076a9f503c76ef4e2277a54fa76e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:14:40.793095 kubelet[2772]: E1216 13:14:40.793008 2772 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4953ca881be461b7c515adaa20bc99e6c3a076a9f503c76ef4e2277a54fa76e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-sjcz4" Dec 16 13:14:40.793200 kubelet[2772]: E1216 13:14:40.793134 2772 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4953ca881be461b7c515adaa20bc99e6c3a076a9f503c76ef4e2277a54fa76e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-sjcz4" Dec 16 13:14:40.793263 kubelet[2772]: E1216 13:14:40.793222 2772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-sjcz4_calico-system(c5258db0-e1f3-4670-a2ed-9cdfef209601)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-sjcz4_calico-system(c5258db0-e1f3-4670-a2ed-9cdfef209601)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4953ca881be461b7c515adaa20bc99e6c3a076a9f503c76ef4e2277a54fa76e1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-sjcz4" podUID="c5258db0-e1f3-4670-a2ed-9cdfef209601" Dec 16 13:14:40.811399 containerd[1515]: time="2025-12-16T13:14:40.811338128Z" level=error msg="Failed to destroy network for sandbox \"f0da8e054450a9c5ddbc1cbab00fc754f6f594c4a34b666c5bf4c1d8cddfbde1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:14:40.812877 containerd[1515]: time="2025-12-16T13:14:40.812812009Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-77dd9f964-7whrl,Uid:9ac13d5a-ae98-4968-bf5e-a2b621311d79,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0da8e054450a9c5ddbc1cbab00fc754f6f594c4a34b666c5bf4c1d8cddfbde1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:14:40.813481 kubelet[2772]: E1216 13:14:40.813201 2772 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0da8e054450a9c5ddbc1cbab00fc754f6f594c4a34b666c5bf4c1d8cddfbde1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:14:40.813481 kubelet[2772]: E1216 13:14:40.813260 2772 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0da8e054450a9c5ddbc1cbab00fc754f6f594c4a34b666c5bf4c1d8cddfbde1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/whisker-77dd9f964-7whrl" Dec 16 13:14:40.813481 kubelet[2772]: E1216 13:14:40.813289 2772 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0da8e054450a9c5ddbc1cbab00fc754f6f594c4a34b666c5bf4c1d8cddfbde1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-77dd9f964-7whrl" Dec 16 13:14:40.813990 kubelet[2772]: E1216 13:14:40.813373 2772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-77dd9f964-7whrl_calico-system(9ac13d5a-ae98-4968-bf5e-a2b621311d79)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-77dd9f964-7whrl_calico-system(9ac13d5a-ae98-4968-bf5e-a2b621311d79)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f0da8e054450a9c5ddbc1cbab00fc754f6f594c4a34b666c5bf4c1d8cddfbde1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-77dd9f964-7whrl" podUID="9ac13d5a-ae98-4968-bf5e-a2b621311d79" Dec 16 13:14:40.899361 systemd[1]: Created slice kubepods-besteffort-poda7531109_4d4b_4a8c_8928_f52ad25f6b55.slice - libcontainer container kubepods-besteffort-poda7531109_4d4b_4a8c_8928_f52ad25f6b55.slice. 
Dec 16 13:14:40.906520 containerd[1515]: time="2025-12-16T13:14:40.906470350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z5rm2,Uid:a7531109-4d4b-4a8c-8928-f52ad25f6b55,Namespace:calico-system,Attempt:0,}" Dec 16 13:14:40.985734 containerd[1515]: time="2025-12-16T13:14:40.985621492Z" level=error msg="Failed to destroy network for sandbox \"beab27fb80d778ee6c6aa47d0e7fc87fbc989ea5b08be1094418ad4f16e9fed8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:14:40.987872 containerd[1515]: time="2025-12-16T13:14:40.987793238Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z5rm2,Uid:a7531109-4d4b-4a8c-8928-f52ad25f6b55,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"beab27fb80d778ee6c6aa47d0e7fc87fbc989ea5b08be1094418ad4f16e9fed8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:14:40.989092 kubelet[2772]: E1216 13:14:40.989015 2772 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"beab27fb80d778ee6c6aa47d0e7fc87fbc989ea5b08be1094418ad4f16e9fed8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:14:40.989391 kubelet[2772]: E1216 13:14:40.989102 2772 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"beab27fb80d778ee6c6aa47d0e7fc87fbc989ea5b08be1094418ad4f16e9fed8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z5rm2" Dec 16 13:14:40.989391 kubelet[2772]: E1216 13:14:40.989136 2772 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"beab27fb80d778ee6c6aa47d0e7fc87fbc989ea5b08be1094418ad4f16e9fed8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z5rm2" Dec 16 13:14:40.989391 kubelet[2772]: E1216 13:14:40.989210 2772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-z5rm2_calico-system(a7531109-4d4b-4a8c-8928-f52ad25f6b55)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-z5rm2_calico-system(a7531109-4d4b-4a8c-8928-f52ad25f6b55)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"beab27fb80d778ee6c6aa47d0e7fc87fbc989ea5b08be1094418ad4f16e9fed8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-z5rm2" podUID="a7531109-4d4b-4a8c-8928-f52ad25f6b55" Dec 16 13:14:41.212146 systemd[1]: run-netns-cni\x2dc4a23470\x2d306e\x2dd909\x2da6f5\x2debd117265573.mount: Deactivated successfully. Dec 16 13:14:41.212298 systemd[1]: run-netns-cni\x2d4b9a0cea\x2da1eb\x2defb5\x2da978\x2d500bb3eda30c.mount: Deactivated successfully. Dec 16 13:14:41.212413 systemd[1]: run-netns-cni\x2dc14197e9\x2d8aea\x2d8b84\x2d09ab\x2dd4bde85fdfa5.mount: Deactivated successfully. Dec 16 13:14:41.212515 systemd[1]: run-netns-cni\x2d8aeb875b\x2de9bb\x2d9bcc\x2d0f7d\x2d83b5d902f3cf.mount: Deactivated successfully. 
Dec 16 13:14:47.263195 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1628956150.mount: Deactivated successfully. Dec 16 13:14:47.293154 containerd[1515]: time="2025-12-16T13:14:47.292892422Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:14:47.294560 containerd[1515]: time="2025-12-16T13:14:47.294502467Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Dec 16 13:14:47.297742 containerd[1515]: time="2025-12-16T13:14:47.296819976Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:14:47.299393 containerd[1515]: time="2025-12-16T13:14:47.299344050Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:14:47.300301 containerd[1515]: time="2025-12-16T13:14:47.300260189Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 7.020402459s" Dec 16 13:14:47.300454 containerd[1515]: time="2025-12-16T13:14:47.300429992Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Dec 16 13:14:47.332735 containerd[1515]: time="2025-12-16T13:14:47.332698504Z" level=info msg="CreateContainer within sandbox \"c4838ba2730a2bcf4b1c392b5d98d68e510d72e0ad241d308d59a1d9618abe21\" for container 
&ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 16 13:14:47.345420 containerd[1515]: time="2025-12-16T13:14:47.345367922Z" level=info msg="Container e172bd11ffddc6a523839621b77bdf9835c97af5cbd21a4a80623f300c8601a7: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:14:47.357925 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4223825959.mount: Deactivated successfully. Dec 16 13:14:47.370767 containerd[1515]: time="2025-12-16T13:14:47.370712410Z" level=info msg="CreateContainer within sandbox \"c4838ba2730a2bcf4b1c392b5d98d68e510d72e0ad241d308d59a1d9618abe21\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e172bd11ffddc6a523839621b77bdf9835c97af5cbd21a4a80623f300c8601a7\"" Dec 16 13:14:47.372322 containerd[1515]: time="2025-12-16T13:14:47.372257078Z" level=info msg="StartContainer for \"e172bd11ffddc6a523839621b77bdf9835c97af5cbd21a4a80623f300c8601a7\"" Dec 16 13:14:47.378470 containerd[1515]: time="2025-12-16T13:14:47.378419328Z" level=info msg="connecting to shim e172bd11ffddc6a523839621b77bdf9835c97af5cbd21a4a80623f300c8601a7" address="unix:///run/containerd/s/e4d5d701ddcd7070ab7298e1335ff60b64ed3a5eedbd9f2ddb3e8249278f9ac2" protocol=ttrpc version=3 Dec 16 13:14:47.414145 systemd[1]: Started cri-containerd-e172bd11ffddc6a523839621b77bdf9835c97af5cbd21a4a80623f300c8601a7.scope - libcontainer container e172bd11ffddc6a523839621b77bdf9835c97af5cbd21a4a80623f300c8601a7. Dec 16 13:14:47.542436 containerd[1515]: time="2025-12-16T13:14:47.542299899Z" level=info msg="StartContainer for \"e172bd11ffddc6a523839621b77bdf9835c97af5cbd21a4a80623f300c8601a7\" returns successfully" Dec 16 13:14:47.669619 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 16 13:14:47.669787 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Dec 16 13:14:47.980491 kubelet[2772]: I1216 13:14:47.980260 2772 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfr5d\" (UniqueName: \"kubernetes.io/projected/9ac13d5a-ae98-4968-bf5e-a2b621311d79-kube-api-access-mfr5d\") pod \"9ac13d5a-ae98-4968-bf5e-a2b621311d79\" (UID: \"9ac13d5a-ae98-4968-bf5e-a2b621311d79\") " Dec 16 13:14:47.983605 kubelet[2772]: I1216 13:14:47.980625 2772 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9ac13d5a-ae98-4968-bf5e-a2b621311d79-whisker-backend-key-pair\") pod \"9ac13d5a-ae98-4968-bf5e-a2b621311d79\" (UID: \"9ac13d5a-ae98-4968-bf5e-a2b621311d79\") " Dec 16 13:14:47.983930 kubelet[2772]: I1216 13:14:47.983822 2772 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ac13d5a-ae98-4968-bf5e-a2b621311d79-whisker-ca-bundle\") pod \"9ac13d5a-ae98-4968-bf5e-a2b621311d79\" (UID: \"9ac13d5a-ae98-4968-bf5e-a2b621311d79\") " Dec 16 13:14:47.986056 kubelet[2772]: I1216 13:14:47.985451 2772 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ac13d5a-ae98-4968-bf5e-a2b621311d79-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "9ac13d5a-ae98-4968-bf5e-a2b621311d79" (UID: "9ac13d5a-ae98-4968-bf5e-a2b621311d79"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 16 13:14:47.991708 kubelet[2772]: I1216 13:14:47.991490 2772 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ac13d5a-ae98-4968-bf5e-a2b621311d79-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "9ac13d5a-ae98-4968-bf5e-a2b621311d79" (UID: "9ac13d5a-ae98-4968-bf5e-a2b621311d79"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 16 13:14:47.992474 kubelet[2772]: I1216 13:14:47.992416 2772 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ac13d5a-ae98-4968-bf5e-a2b621311d79-kube-api-access-mfr5d" (OuterVolumeSpecName: "kube-api-access-mfr5d") pod "9ac13d5a-ae98-4968-bf5e-a2b621311d79" (UID: "9ac13d5a-ae98-4968-bf5e-a2b621311d79"). InnerVolumeSpecName "kube-api-access-mfr5d". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 13:14:48.086483 kubelet[2772]: I1216 13:14:48.085796 2772 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ac13d5a-ae98-4968-bf5e-a2b621311d79-whisker-ca-bundle\") on node \"ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal\" DevicePath \"\"" Dec 16 13:14:48.086483 kubelet[2772]: I1216 13:14:48.085875 2772 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mfr5d\" (UniqueName: \"kubernetes.io/projected/9ac13d5a-ae98-4968-bf5e-a2b621311d79-kube-api-access-mfr5d\") on node \"ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal\" DevicePath \"\"" Dec 16 13:14:48.086805 kubelet[2772]: I1216 13:14:48.085893 2772 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9ac13d5a-ae98-4968-bf5e-a2b621311d79-whisker-backend-key-pair\") on node \"ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal\" DevicePath \"\"" Dec 16 13:14:48.262538 systemd[1]: var-lib-kubelet-pods-9ac13d5a\x2dae98\x2d4968\x2dbf5e\x2da2b621311d79-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmfr5d.mount: Deactivated successfully. Dec 16 13:14:48.262934 systemd[1]: var-lib-kubelet-pods-9ac13d5a\x2dae98\x2d4968\x2dbf5e\x2da2b621311d79-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Dec 16 13:14:48.314790 systemd[1]: Removed slice kubepods-besteffort-pod9ac13d5a_ae98_4968_bf5e_a2b621311d79.slice - libcontainer container kubepods-besteffort-pod9ac13d5a_ae98_4968_bf5e_a2b621311d79.slice. Dec 16 13:14:48.335789 kubelet[2772]: I1216 13:14:48.334863 2772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-f5pl7" podStartSLOduration=2.258146073 podStartE2EDuration="18.334838015s" podCreationTimestamp="2025-12-16 13:14:30 +0000 UTC" firstStartedPulling="2025-12-16 13:14:31.224830318 +0000 UTC m=+22.600160940" lastFinishedPulling="2025-12-16 13:14:47.301522284 +0000 UTC m=+38.676852882" observedRunningTime="2025-12-16 13:14:48.333271062 +0000 UTC m=+39.708601718" watchObservedRunningTime="2025-12-16 13:14:48.334838015 +0000 UTC m=+39.710168633" Dec 16 13:14:48.421548 systemd[1]: Created slice kubepods-besteffort-pod4495a83f_d804_4460_ba96_c4bb7a122932.slice - libcontainer container kubepods-besteffort-pod4495a83f_d804_4460_ba96_c4bb7a122932.slice. 
Dec 16 13:14:48.590488 kubelet[2772]: I1216 13:14:48.590220 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4495a83f-d804-4460-ba96-c4bb7a122932-whisker-ca-bundle\") pod \"whisker-69dccbfc8-4ng2l\" (UID: \"4495a83f-d804-4460-ba96-c4bb7a122932\") " pod="calico-system/whisker-69dccbfc8-4ng2l" Dec 16 13:14:48.590488 kubelet[2772]: I1216 13:14:48.590297 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4495a83f-d804-4460-ba96-c4bb7a122932-whisker-backend-key-pair\") pod \"whisker-69dccbfc8-4ng2l\" (UID: \"4495a83f-d804-4460-ba96-c4bb7a122932\") " pod="calico-system/whisker-69dccbfc8-4ng2l" Dec 16 13:14:48.590488 kubelet[2772]: I1216 13:14:48.590326 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sp6x\" (UniqueName: \"kubernetes.io/projected/4495a83f-d804-4460-ba96-c4bb7a122932-kube-api-access-6sp6x\") pod \"whisker-69dccbfc8-4ng2l\" (UID: \"4495a83f-d804-4460-ba96-c4bb7a122932\") " pod="calico-system/whisker-69dccbfc8-4ng2l" Dec 16 13:14:48.732791 containerd[1515]: time="2025-12-16T13:14:48.732734016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-69dccbfc8-4ng2l,Uid:4495a83f-d804-4460-ba96-c4bb7a122932,Namespace:calico-system,Attempt:0,}" Dec 16 13:14:48.887118 systemd-networkd[1412]: caliada50a011f7: Link UP Dec 16 13:14:48.887447 systemd-networkd[1412]: caliada50a011f7: Gained carrier Dec 16 13:14:48.906470 kubelet[2772]: I1216 13:14:48.906426 2772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ac13d5a-ae98-4968-bf5e-a2b621311d79" path="/var/lib/kubelet/pods/9ac13d5a-ae98-4968-bf5e-a2b621311d79/volumes" Dec 16 13:14:48.914559 containerd[1515]: 2025-12-16 13:14:48.770 [INFO][3819] cni-plugin/utils.go 100: File /var/lib/calico/mtu 
does not exist Dec 16 13:14:48.914559 containerd[1515]: 2025-12-16 13:14:48.787 [INFO][3819] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-whisker--69dccbfc8--4ng2l-eth0 whisker-69dccbfc8- calico-system 4495a83f-d804-4460-ba96-c4bb7a122932 894 0 2025-12-16 13:14:48 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:69dccbfc8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal whisker-69dccbfc8-4ng2l eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] caliada50a011f7 [] [] }} ContainerID="eb5689da2759c199c6e1f5e36caf5c39e816d468928eb7720879d1c82fce9fb4" Namespace="calico-system" Pod="whisker-69dccbfc8-4ng2l" WorkloadEndpoint="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-whisker--69dccbfc8--4ng2l-" Dec 16 13:14:48.914559 containerd[1515]: 2025-12-16 13:14:48.787 [INFO][3819] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="eb5689da2759c199c6e1f5e36caf5c39e816d468928eb7720879d1c82fce9fb4" Namespace="calico-system" Pod="whisker-69dccbfc8-4ng2l" WorkloadEndpoint="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-whisker--69dccbfc8--4ng2l-eth0" Dec 16 13:14:48.914559 containerd[1515]: 2025-12-16 13:14:48.826 [INFO][3830] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eb5689da2759c199c6e1f5e36caf5c39e816d468928eb7720879d1c82fce9fb4" HandleID="k8s-pod-network.eb5689da2759c199c6e1f5e36caf5c39e816d468928eb7720879d1c82fce9fb4" Workload="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-whisker--69dccbfc8--4ng2l-eth0" Dec 16 13:14:48.914971 containerd[1515]: 2025-12-16 13:14:48.826 [INFO][3830] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="eb5689da2759c199c6e1f5e36caf5c39e816d468928eb7720879d1c82fce9fb4" HandleID="k8s-pod-network.eb5689da2759c199c6e1f5e36caf5c39e816d468928eb7720879d1c82fce9fb4" Workload="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-whisker--69dccbfc8--4ng2l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5010), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal", "pod":"whisker-69dccbfc8-4ng2l", "timestamp":"2025-12-16 13:14:48.826103568 +0000 UTC"}, Hostname:"ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:14:48.914971 containerd[1515]: 2025-12-16 13:14:48.826 [INFO][3830] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:14:48.914971 containerd[1515]: 2025-12-16 13:14:48.826 [INFO][3830] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 13:14:48.914971 containerd[1515]: 2025-12-16 13:14:48.826 [INFO][3830] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal' Dec 16 13:14:48.914971 containerd[1515]: 2025-12-16 13:14:48.835 [INFO][3830] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.eb5689da2759c199c6e1f5e36caf5c39e816d468928eb7720879d1c82fce9fb4" host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:48.914971 containerd[1515]: 2025-12-16 13:14:48.841 [INFO][3830] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:48.914971 containerd[1515]: 2025-12-16 13:14:48.850 [INFO][3830] ipam/ipam.go 511: Trying affinity for 192.168.105.128/26 host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:48.914971 containerd[1515]: 2025-12-16 13:14:48.852 [INFO][3830] ipam/ipam.go 158: Attempting to load block cidr=192.168.105.128/26 host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:48.915691 containerd[1515]: 2025-12-16 13:14:48.855 [INFO][3830] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.105.128/26 host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:48.915691 containerd[1515]: 2025-12-16 13:14:48.856 [INFO][3830] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.105.128/26 handle="k8s-pod-network.eb5689da2759c199c6e1f5e36caf5c39e816d468928eb7720879d1c82fce9fb4" host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:48.915691 containerd[1515]: 2025-12-16 13:14:48.857 [INFO][3830] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.eb5689da2759c199c6e1f5e36caf5c39e816d468928eb7720879d1c82fce9fb4 Dec 16 13:14:48.915691 containerd[1515]: 2025-12-16 13:14:48.863 [INFO][3830] ipam/ipam.go 1246: Writing block in order to claim IPs 
block=192.168.105.128/26 handle="k8s-pod-network.eb5689da2759c199c6e1f5e36caf5c39e816d468928eb7720879d1c82fce9fb4" host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:48.915691 containerd[1515]: 2025-12-16 13:14:48.871 [INFO][3830] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.105.129/26] block=192.168.105.128/26 handle="k8s-pod-network.eb5689da2759c199c6e1f5e36caf5c39e816d468928eb7720879d1c82fce9fb4" host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:48.915691 containerd[1515]: 2025-12-16 13:14:48.871 [INFO][3830] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.105.129/26] handle="k8s-pod-network.eb5689da2759c199c6e1f5e36caf5c39e816d468928eb7720879d1c82fce9fb4" host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:48.915691 containerd[1515]: 2025-12-16 13:14:48.871 [INFO][3830] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 13:14:48.915691 containerd[1515]: 2025-12-16 13:14:48.871 [INFO][3830] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.105.129/26] IPv6=[] ContainerID="eb5689da2759c199c6e1f5e36caf5c39e816d468928eb7720879d1c82fce9fb4" HandleID="k8s-pod-network.eb5689da2759c199c6e1f5e36caf5c39e816d468928eb7720879d1c82fce9fb4" Workload="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-whisker--69dccbfc8--4ng2l-eth0" Dec 16 13:14:48.916298 containerd[1515]: 2025-12-16 13:14:48.875 [INFO][3819] cni-plugin/k8s.go 418: Populated endpoint ContainerID="eb5689da2759c199c6e1f5e36caf5c39e816d468928eb7720879d1c82fce9fb4" Namespace="calico-system" Pod="whisker-69dccbfc8-4ng2l" WorkloadEndpoint="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-whisker--69dccbfc8--4ng2l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-whisker--69dccbfc8--4ng2l-eth0", GenerateName:"whisker-69dccbfc8-", Namespace:"calico-system", SelfLink:"", UID:"4495a83f-d804-4460-ba96-c4bb7a122932", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 14, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"69dccbfc8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal", ContainerID:"", Pod:"whisker-69dccbfc8-4ng2l", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.105.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliada50a011f7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:14:48.916476 containerd[1515]: 2025-12-16 13:14:48.876 [INFO][3819] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.105.129/32] ContainerID="eb5689da2759c199c6e1f5e36caf5c39e816d468928eb7720879d1c82fce9fb4" Namespace="calico-system" Pod="whisker-69dccbfc8-4ng2l" WorkloadEndpoint="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-whisker--69dccbfc8--4ng2l-eth0" Dec 16 13:14:48.916476 containerd[1515]: 2025-12-16 13:14:48.876 [INFO][3819] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliada50a011f7 
ContainerID="eb5689da2759c199c6e1f5e36caf5c39e816d468928eb7720879d1c82fce9fb4" Namespace="calico-system" Pod="whisker-69dccbfc8-4ng2l" WorkloadEndpoint="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-whisker--69dccbfc8--4ng2l-eth0" Dec 16 13:14:48.916476 containerd[1515]: 2025-12-16 13:14:48.886 [INFO][3819] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eb5689da2759c199c6e1f5e36caf5c39e816d468928eb7720879d1c82fce9fb4" Namespace="calico-system" Pod="whisker-69dccbfc8-4ng2l" WorkloadEndpoint="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-whisker--69dccbfc8--4ng2l-eth0" Dec 16 13:14:48.916712 containerd[1515]: 2025-12-16 13:14:48.888 [INFO][3819] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="eb5689da2759c199c6e1f5e36caf5c39e816d468928eb7720879d1c82fce9fb4" Namespace="calico-system" Pod="whisker-69dccbfc8-4ng2l" WorkloadEndpoint="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-whisker--69dccbfc8--4ng2l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-whisker--69dccbfc8--4ng2l-eth0", GenerateName:"whisker-69dccbfc8-", Namespace:"calico-system", SelfLink:"", UID:"4495a83f-d804-4460-ba96-c4bb7a122932", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 14, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"69dccbfc8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal", ContainerID:"eb5689da2759c199c6e1f5e36caf5c39e816d468928eb7720879d1c82fce9fb4", Pod:"whisker-69dccbfc8-4ng2l", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.105.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliada50a011f7", MAC:"5a:84:29:b6:21:df", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:14:48.916991 containerd[1515]: 2025-12-16 13:14:48.912 [INFO][3819] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="eb5689da2759c199c6e1f5e36caf5c39e816d468928eb7720879d1c82fce9fb4" Namespace="calico-system" Pod="whisker-69dccbfc8-4ng2l" WorkloadEndpoint="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-whisker--69dccbfc8--4ng2l-eth0" Dec 16 13:14:48.947687 containerd[1515]: time="2025-12-16T13:14:48.947610470Z" level=info msg="connecting to shim eb5689da2759c199c6e1f5e36caf5c39e816d468928eb7720879d1c82fce9fb4" address="unix:///run/containerd/s/dddccedf7546110a7dadacaeac6f6a37d9046a1f707bc569bd9f1ee244b6201f" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:14:48.980165 systemd[1]: Started cri-containerd-eb5689da2759c199c6e1f5e36caf5c39e816d468928eb7720879d1c82fce9fb4.scope - libcontainer container eb5689da2759c199c6e1f5e36caf5c39e816d468928eb7720879d1c82fce9fb4. 
Dec 16 13:14:49.069187 containerd[1515]: time="2025-12-16T13:14:49.069049945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-69dccbfc8-4ng2l,Uid:4495a83f-d804-4460-ba96-c4bb7a122932,Namespace:calico-system,Attempt:0,} returns sandbox id \"eb5689da2759c199c6e1f5e36caf5c39e816d468928eb7720879d1c82fce9fb4\"" Dec 16 13:14:49.073324 containerd[1515]: time="2025-12-16T13:14:49.073155862Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 13:14:49.253212 containerd[1515]: time="2025-12-16T13:14:49.253041921Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:14:49.254746 containerd[1515]: time="2025-12-16T13:14:49.254562084Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 16 13:14:49.256246 containerd[1515]: time="2025-12-16T13:14:49.255988720Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 13:14:49.256447 kubelet[2772]: E1216 13:14:49.256391 2772 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:14:49.260006 kubelet[2772]: E1216 13:14:49.256470 2772 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:14:49.260006 kubelet[2772]: E1216 13:14:49.256592 2772 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-69dccbfc8-4ng2l_calico-system(4495a83f-d804-4460-ba96-c4bb7a122932): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 13:14:49.262018 containerd[1515]: time="2025-12-16T13:14:49.258501037Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 13:14:49.425092 containerd[1515]: time="2025-12-16T13:14:49.425016365Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:14:49.426649 containerd[1515]: time="2025-12-16T13:14:49.426495336Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 13:14:49.426649 containerd[1515]: time="2025-12-16T13:14:49.426543879Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 16 13:14:49.427479 kubelet[2772]: E1216 13:14:49.427333 2772 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:14:49.427839 kubelet[2772]: E1216 13:14:49.427693 2772 
kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:14:49.430180 kubelet[2772]: E1216 13:14:49.430144 2772 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-69dccbfc8-4ng2l_calico-system(4495a83f-d804-4460-ba96-c4bb7a122932): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 13:14:49.430482 kubelet[2772]: E1216 13:14:49.430431 2772 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69dccbfc8-4ng2l" podUID="4495a83f-d804-4460-ba96-c4bb7a122932" Dec 16 13:14:50.325964 kubelet[2772]: E1216 13:14:50.325768 2772 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69dccbfc8-4ng2l" podUID="4495a83f-d804-4460-ba96-c4bb7a122932" Dec 16 13:14:50.635119 systemd-networkd[1412]: caliada50a011f7: Gained IPv6LL Dec 16 13:14:50.779890 kubelet[2772]: I1216 13:14:50.779841 2772 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 13:14:51.896025 containerd[1515]: time="2025-12-16T13:14:51.895382005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z5rm2,Uid:a7531109-4d4b-4a8c-8928-f52ad25f6b55,Namespace:calico-system,Attempt:0,}" Dec 16 13:14:52.097221 systemd-networkd[1412]: cali0d689860c77: Link UP Dec 16 13:14:52.098513 systemd-networkd[1412]: cali0d689860c77: Gained carrier Dec 16 13:14:52.158964 containerd[1515]: 2025-12-16 13:14:51.948 [INFO][4029] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 13:14:52.158964 containerd[1515]: 2025-12-16 13:14:51.974 [INFO][4029] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-csi--node--driver--z5rm2-eth0 csi-node-driver- calico-system a7531109-4d4b-4a8c-8928-f52ad25f6b55 711 0 2025-12-16 13:14:30 +0000 UTC 
map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal csi-node-driver-z5rm2 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali0d689860c77 [] [] }} ContainerID="8f2d146d4779419a09836acdf0081adae4ffb012bf891c81fec8d65875b3c212" Namespace="calico-system" Pod="csi-node-driver-z5rm2" WorkloadEndpoint="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-csi--node--driver--z5rm2-" Dec 16 13:14:52.158964 containerd[1515]: 2025-12-16 13:14:51.975 [INFO][4029] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8f2d146d4779419a09836acdf0081adae4ffb012bf891c81fec8d65875b3c212" Namespace="calico-system" Pod="csi-node-driver-z5rm2" WorkloadEndpoint="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-csi--node--driver--z5rm2-eth0" Dec 16 13:14:52.158964 containerd[1515]: 2025-12-16 13:14:52.022 [INFO][4043] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8f2d146d4779419a09836acdf0081adae4ffb012bf891c81fec8d65875b3c212" HandleID="k8s-pod-network.8f2d146d4779419a09836acdf0081adae4ffb012bf891c81fec8d65875b3c212" Workload="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-csi--node--driver--z5rm2-eth0" Dec 16 13:14:52.159352 containerd[1515]: 2025-12-16 13:14:52.023 [INFO][4043] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8f2d146d4779419a09836acdf0081adae4ffb012bf891c81fec8d65875b3c212" HandleID="k8s-pod-network.8f2d146d4779419a09836acdf0081adae4ffb012bf891c81fec8d65875b3c212" Workload="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-csi--node--driver--z5rm2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc00024efe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal", "pod":"csi-node-driver-z5rm2", "timestamp":"2025-12-16 13:14:52.022704798 +0000 UTC"}, Hostname:"ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:14:52.159352 containerd[1515]: 2025-12-16 13:14:52.023 [INFO][4043] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:14:52.159352 containerd[1515]: 2025-12-16 13:14:52.023 [INFO][4043] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 13:14:52.159352 containerd[1515]: 2025-12-16 13:14:52.023 [INFO][4043] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal' Dec 16 13:14:52.159352 containerd[1515]: 2025-12-16 13:14:52.034 [INFO][4043] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8f2d146d4779419a09836acdf0081adae4ffb012bf891c81fec8d65875b3c212" host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:52.159352 containerd[1515]: 2025-12-16 13:14:52.041 [INFO][4043] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:52.159352 containerd[1515]: 2025-12-16 13:14:52.051 [INFO][4043] ipam/ipam.go 511: Trying affinity for 192.168.105.128/26 host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:52.159352 containerd[1515]: 2025-12-16 13:14:52.054 [INFO][4043] ipam/ipam.go 158: Attempting to load block cidr=192.168.105.128/26 host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:52.159747 containerd[1515]: 2025-12-16 13:14:52.058 [INFO][4043] ipam/ipam.go 235: Affinity 
is confirmed and block has been loaded cidr=192.168.105.128/26 host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:52.159747 containerd[1515]: 2025-12-16 13:14:52.058 [INFO][4043] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.105.128/26 handle="k8s-pod-network.8f2d146d4779419a09836acdf0081adae4ffb012bf891c81fec8d65875b3c212" host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:52.159747 containerd[1515]: 2025-12-16 13:14:52.061 [INFO][4043] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8f2d146d4779419a09836acdf0081adae4ffb012bf891c81fec8d65875b3c212 Dec 16 13:14:52.159747 containerd[1515]: 2025-12-16 13:14:52.067 [INFO][4043] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.105.128/26 handle="k8s-pod-network.8f2d146d4779419a09836acdf0081adae4ffb012bf891c81fec8d65875b3c212" host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:52.159747 containerd[1515]: 2025-12-16 13:14:52.084 [INFO][4043] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.105.130/26] block=192.168.105.128/26 handle="k8s-pod-network.8f2d146d4779419a09836acdf0081adae4ffb012bf891c81fec8d65875b3c212" host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:52.159747 containerd[1515]: 2025-12-16 13:14:52.086 [INFO][4043] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.105.130/26] handle="k8s-pod-network.8f2d146d4779419a09836acdf0081adae4ffb012bf891c81fec8d65875b3c212" host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:52.159747 containerd[1515]: 2025-12-16 13:14:52.086 [INFO][4043] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 13:14:52.159747 containerd[1515]: 2025-12-16 13:14:52.086 [INFO][4043] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.105.130/26] IPv6=[] ContainerID="8f2d146d4779419a09836acdf0081adae4ffb012bf891c81fec8d65875b3c212" HandleID="k8s-pod-network.8f2d146d4779419a09836acdf0081adae4ffb012bf891c81fec8d65875b3c212" Workload="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-csi--node--driver--z5rm2-eth0" Dec 16 13:14:52.161221 containerd[1515]: 2025-12-16 13:14:52.090 [INFO][4029] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8f2d146d4779419a09836acdf0081adae4ffb012bf891c81fec8d65875b3c212" Namespace="calico-system" Pod="csi-node-driver-z5rm2" WorkloadEndpoint="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-csi--node--driver--z5rm2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-csi--node--driver--z5rm2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a7531109-4d4b-4a8c-8928-f52ad25f6b55", ResourceVersion:"711", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 14, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal", ContainerID:"", 
Pod:"csi-node-driver-z5rm2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.105.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0d689860c77", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:14:52.161377 containerd[1515]: 2025-12-16 13:14:52.090 [INFO][4029] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.105.130/32] ContainerID="8f2d146d4779419a09836acdf0081adae4ffb012bf891c81fec8d65875b3c212" Namespace="calico-system" Pod="csi-node-driver-z5rm2" WorkloadEndpoint="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-csi--node--driver--z5rm2-eth0" Dec 16 13:14:52.161377 containerd[1515]: 2025-12-16 13:14:52.090 [INFO][4029] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0d689860c77 ContainerID="8f2d146d4779419a09836acdf0081adae4ffb012bf891c81fec8d65875b3c212" Namespace="calico-system" Pod="csi-node-driver-z5rm2" WorkloadEndpoint="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-csi--node--driver--z5rm2-eth0" Dec 16 13:14:52.161377 containerd[1515]: 2025-12-16 13:14:52.098 [INFO][4029] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8f2d146d4779419a09836acdf0081adae4ffb012bf891c81fec8d65875b3c212" Namespace="calico-system" Pod="csi-node-driver-z5rm2" WorkloadEndpoint="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-csi--node--driver--z5rm2-eth0" Dec 16 13:14:52.161520 containerd[1515]: 2025-12-16 13:14:52.099 [INFO][4029] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8f2d146d4779419a09836acdf0081adae4ffb012bf891c81fec8d65875b3c212" Namespace="calico-system" Pod="csi-node-driver-z5rm2" 
WorkloadEndpoint="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-csi--node--driver--z5rm2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-csi--node--driver--z5rm2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a7531109-4d4b-4a8c-8928-f52ad25f6b55", ResourceVersion:"711", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 14, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal", ContainerID:"8f2d146d4779419a09836acdf0081adae4ffb012bf891c81fec8d65875b3c212", Pod:"csi-node-driver-z5rm2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.105.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0d689860c77", MAC:"26:1f:2d:8f:da:e7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:14:52.161728 containerd[1515]: 2025-12-16 13:14:52.152 [INFO][4029] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="8f2d146d4779419a09836acdf0081adae4ffb012bf891c81fec8d65875b3c212" Namespace="calico-system" Pod="csi-node-driver-z5rm2" WorkloadEndpoint="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-csi--node--driver--z5rm2-eth0" Dec 16 13:14:52.216938 containerd[1515]: time="2025-12-16T13:14:52.215118606Z" level=info msg="connecting to shim 8f2d146d4779419a09836acdf0081adae4ffb012bf891c81fec8d65875b3c212" address="unix:///run/containerd/s/efda068562460e558fc6158bf72e1a0d675b810da41fa07df397598b2d3cf0a5" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:14:52.287155 systemd[1]: Started cri-containerd-8f2d146d4779419a09836acdf0081adae4ffb012bf891c81fec8d65875b3c212.scope - libcontainer container 8f2d146d4779419a09836acdf0081adae4ffb012bf891c81fec8d65875b3c212. Dec 16 13:14:52.383155 containerd[1515]: time="2025-12-16T13:14:52.383099474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z5rm2,Uid:a7531109-4d4b-4a8c-8928-f52ad25f6b55,Namespace:calico-system,Attempt:0,} returns sandbox id \"8f2d146d4779419a09836acdf0081adae4ffb012bf891c81fec8d65875b3c212\"" Dec 16 13:14:52.387964 containerd[1515]: time="2025-12-16T13:14:52.387832975Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 13:14:52.575746 containerd[1515]: time="2025-12-16T13:14:52.574314864Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:14:52.575746 containerd[1515]: time="2025-12-16T13:14:52.575695547Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 13:14:52.575963 containerd[1515]: time="2025-12-16T13:14:52.575806207Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 16 
13:14:52.577230 kubelet[2772]: E1216 13:14:52.577109 2772 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:14:52.577230 kubelet[2772]: E1216 13:14:52.577173 2772 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:14:52.578415 kubelet[2772]: E1216 13:14:52.577938 2772 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-z5rm2_calico-system(a7531109-4d4b-4a8c-8928-f52ad25f6b55): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 13:14:52.579547 containerd[1515]: time="2025-12-16T13:14:52.579501495Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 13:14:52.743605 containerd[1515]: time="2025-12-16T13:14:52.743545590Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:14:52.745376 containerd[1515]: time="2025-12-16T13:14:52.745290156Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 13:14:52.745641 containerd[1515]: time="2025-12-16T13:14:52.745444942Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 16 13:14:52.745794 kubelet[2772]: E1216 13:14:52.745711 2772 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:14:52.747346 kubelet[2772]: E1216 13:14:52.745814 2772 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:14:52.747503 kubelet[2772]: E1216 13:14:52.747339 2772 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-z5rm2_calico-system(a7531109-4d4b-4a8c-8928-f52ad25f6b55): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 13:14:52.747503 kubelet[2772]: E1216 13:14:52.747415 2772 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-z5rm2" podUID="a7531109-4d4b-4a8c-8928-f52ad25f6b55" Dec 16 13:14:52.781791 systemd-networkd[1412]: vxlan.calico: Link UP Dec 16 13:14:52.781803 systemd-networkd[1412]: vxlan.calico: Gained carrier Dec 16 13:14:52.892993 containerd[1515]: time="2025-12-16T13:14:52.892935632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-864c65cd8b-5wmt7,Uid:b169b293-70bb-47ce-9436-682d8a8e825b,Namespace:calico-apiserver,Attempt:0,}" Dec 16 13:14:52.896990 containerd[1515]: time="2025-12-16T13:14:52.895617508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-864c65cd8b-nvfvd,Uid:f96341ec-7be6-4b46-b044-72e177b23759,Namespace:calico-apiserver,Attempt:0,}" Dec 16 13:14:53.210110 systemd-networkd[1412]: calibeff0fb078f: Link UP Dec 16 13:14:53.212260 systemd-networkd[1412]: calibeff0fb078f: Gained carrier Dec 16 13:14:53.238948 containerd[1515]: 2025-12-16 13:14:53.043 [INFO][4169] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-calico--apiserver--864c65cd8b--5wmt7-eth0 calico-apiserver-864c65cd8b- calico-apiserver b169b293-70bb-47ce-9436-682d8a8e825b 827 0 2025-12-16 13:14:23 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:864c65cd8b projectcalico.org/namespace:calico-apiserver 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal calico-apiserver-864c65cd8b-5wmt7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibeff0fb078f [] [] }} ContainerID="21d3b5e64ef70efed15e625673f8fb57a0b16c80eb76aa523eefa1ec6861744a" Namespace="calico-apiserver" Pod="calico-apiserver-864c65cd8b-5wmt7" WorkloadEndpoint="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-calico--apiserver--864c65cd8b--5wmt7-" Dec 16 13:14:53.238948 containerd[1515]: 2025-12-16 13:14:53.045 [INFO][4169] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="21d3b5e64ef70efed15e625673f8fb57a0b16c80eb76aa523eefa1ec6861744a" Namespace="calico-apiserver" Pod="calico-apiserver-864c65cd8b-5wmt7" WorkloadEndpoint="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-calico--apiserver--864c65cd8b--5wmt7-eth0" Dec 16 13:14:53.238948 containerd[1515]: 2025-12-16 13:14:53.126 [INFO][4193] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="21d3b5e64ef70efed15e625673f8fb57a0b16c80eb76aa523eefa1ec6861744a" HandleID="k8s-pod-network.21d3b5e64ef70efed15e625673f8fb57a0b16c80eb76aa523eefa1ec6861744a" Workload="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-calico--apiserver--864c65cd8b--5wmt7-eth0" Dec 16 13:14:53.239838 containerd[1515]: 2025-12-16 13:14:53.128 [INFO][4193] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="21d3b5e64ef70efed15e625673f8fb57a0b16c80eb76aa523eefa1ec6861744a" HandleID="k8s-pod-network.21d3b5e64ef70efed15e625673f8fb57a0b16c80eb76aa523eefa1ec6861744a" Workload="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-calico--apiserver--864c65cd8b--5wmt7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f7b0), Attrs:map[string]string{"namespace":"calico-apiserver", 
"node":"ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal", "pod":"calico-apiserver-864c65cd8b-5wmt7", "timestamp":"2025-12-16 13:14:53.126108582 +0000 UTC"}, Hostname:"ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:14:53.239838 containerd[1515]: 2025-12-16 13:14:53.128 [INFO][4193] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:14:53.239838 containerd[1515]: 2025-12-16 13:14:53.128 [INFO][4193] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 13:14:53.239838 containerd[1515]: 2025-12-16 13:14:53.128 [INFO][4193] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal' Dec 16 13:14:53.239838 containerd[1515]: 2025-12-16 13:14:53.149 [INFO][4193] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.21d3b5e64ef70efed15e625673f8fb57a0b16c80eb76aa523eefa1ec6861744a" host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:53.239838 containerd[1515]: 2025-12-16 13:14:53.157 [INFO][4193] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:53.239838 containerd[1515]: 2025-12-16 13:14:53.165 [INFO][4193] ipam/ipam.go 511: Trying affinity for 192.168.105.128/26 host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:53.239838 containerd[1515]: 2025-12-16 13:14:53.168 [INFO][4193] ipam/ipam.go 158: Attempting to load block cidr=192.168.105.128/26 host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:53.240507 containerd[1515]: 2025-12-16 13:14:53.171 [INFO][4193] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.105.128/26 
host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:53.240507 containerd[1515]: 2025-12-16 13:14:53.171 [INFO][4193] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.105.128/26 handle="k8s-pod-network.21d3b5e64ef70efed15e625673f8fb57a0b16c80eb76aa523eefa1ec6861744a" host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:53.240507 containerd[1515]: 2025-12-16 13:14:53.173 [INFO][4193] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.21d3b5e64ef70efed15e625673f8fb57a0b16c80eb76aa523eefa1ec6861744a Dec 16 13:14:53.240507 containerd[1515]: 2025-12-16 13:14:53.182 [INFO][4193] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.105.128/26 handle="k8s-pod-network.21d3b5e64ef70efed15e625673f8fb57a0b16c80eb76aa523eefa1ec6861744a" host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:53.240507 containerd[1515]: 2025-12-16 13:14:53.193 [INFO][4193] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.105.131/26] block=192.168.105.128/26 handle="k8s-pod-network.21d3b5e64ef70efed15e625673f8fb57a0b16c80eb76aa523eefa1ec6861744a" host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:53.240507 containerd[1515]: 2025-12-16 13:14:53.193 [INFO][4193] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.105.131/26] handle="k8s-pod-network.21d3b5e64ef70efed15e625673f8fb57a0b16c80eb76aa523eefa1ec6861744a" host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:53.240507 containerd[1515]: 2025-12-16 13:14:53.193 [INFO][4193] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 13:14:53.240507 containerd[1515]: 2025-12-16 13:14:53.194 [INFO][4193] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.105.131/26] IPv6=[] ContainerID="21d3b5e64ef70efed15e625673f8fb57a0b16c80eb76aa523eefa1ec6861744a" HandleID="k8s-pod-network.21d3b5e64ef70efed15e625673f8fb57a0b16c80eb76aa523eefa1ec6861744a" Workload="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-calico--apiserver--864c65cd8b--5wmt7-eth0" Dec 16 13:14:53.240878 containerd[1515]: 2025-12-16 13:14:53.198 [INFO][4169] cni-plugin/k8s.go 418: Populated endpoint ContainerID="21d3b5e64ef70efed15e625673f8fb57a0b16c80eb76aa523eefa1ec6861744a" Namespace="calico-apiserver" Pod="calico-apiserver-864c65cd8b-5wmt7" WorkloadEndpoint="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-calico--apiserver--864c65cd8b--5wmt7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-calico--apiserver--864c65cd8b--5wmt7-eth0", GenerateName:"calico-apiserver-864c65cd8b-", Namespace:"calico-apiserver", SelfLink:"", UID:"b169b293-70bb-47ce-9436-682d8a8e825b", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 14, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"864c65cd8b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-864c65cd8b-5wmt7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.105.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibeff0fb078f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:14:53.241787 containerd[1515]: 2025-12-16 13:14:53.198 [INFO][4169] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.105.131/32] ContainerID="21d3b5e64ef70efed15e625673f8fb57a0b16c80eb76aa523eefa1ec6861744a" Namespace="calico-apiserver" Pod="calico-apiserver-864c65cd8b-5wmt7" WorkloadEndpoint="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-calico--apiserver--864c65cd8b--5wmt7-eth0" Dec 16 13:14:53.241787 containerd[1515]: 2025-12-16 13:14:53.199 [INFO][4169] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibeff0fb078f ContainerID="21d3b5e64ef70efed15e625673f8fb57a0b16c80eb76aa523eefa1ec6861744a" Namespace="calico-apiserver" Pod="calico-apiserver-864c65cd8b-5wmt7" WorkloadEndpoint="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-calico--apiserver--864c65cd8b--5wmt7-eth0" Dec 16 13:14:53.241787 containerd[1515]: 2025-12-16 13:14:53.215 [INFO][4169] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="21d3b5e64ef70efed15e625673f8fb57a0b16c80eb76aa523eefa1ec6861744a" Namespace="calico-apiserver" Pod="calico-apiserver-864c65cd8b-5wmt7" WorkloadEndpoint="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-calico--apiserver--864c65cd8b--5wmt7-eth0" Dec 16 13:14:53.242106 containerd[1515]: 2025-12-16 13:14:53.215 [INFO][4169] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="21d3b5e64ef70efed15e625673f8fb57a0b16c80eb76aa523eefa1ec6861744a" Namespace="calico-apiserver" Pod="calico-apiserver-864c65cd8b-5wmt7" WorkloadEndpoint="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-calico--apiserver--864c65cd8b--5wmt7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-calico--apiserver--864c65cd8b--5wmt7-eth0", GenerateName:"calico-apiserver-864c65cd8b-", Namespace:"calico-apiserver", SelfLink:"", UID:"b169b293-70bb-47ce-9436-682d8a8e825b", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 14, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"864c65cd8b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal", ContainerID:"21d3b5e64ef70efed15e625673f8fb57a0b16c80eb76aa523eefa1ec6861744a", Pod:"calico-apiserver-864c65cd8b-5wmt7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.105.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibeff0fb078f", MAC:"5a:33:e3:46:aa:27", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:14:53.242106 containerd[1515]: 
2025-12-16 13:14:53.236 [INFO][4169] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="21d3b5e64ef70efed15e625673f8fb57a0b16c80eb76aa523eefa1ec6861744a" Namespace="calico-apiserver" Pod="calico-apiserver-864c65cd8b-5wmt7" WorkloadEndpoint="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-calico--apiserver--864c65cd8b--5wmt7-eth0" Dec 16 13:14:53.298162 containerd[1515]: time="2025-12-16T13:14:53.298041085Z" level=info msg="connecting to shim 21d3b5e64ef70efed15e625673f8fb57a0b16c80eb76aa523eefa1ec6861744a" address="unix:///run/containerd/s/f1cb4f4aeda030ca0d1edf010617cc7b583d583e3ab64abdbd71e49d85f45bd1" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:14:53.359329 kubelet[2772]: E1216 13:14:53.358731 2772 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-z5rm2" podUID="a7531109-4d4b-4a8c-8928-f52ad25f6b55" Dec 16 13:14:53.370233 systemd-networkd[1412]: calif03d4a8171e: Link UP Dec 16 13:14:53.370608 systemd-networkd[1412]: calif03d4a8171e: Gained carrier Dec 16 13:14:53.392358 systemd[1]: Started 
cri-containerd-21d3b5e64ef70efed15e625673f8fb57a0b16c80eb76aa523eefa1ec6861744a.scope - libcontainer container 21d3b5e64ef70efed15e625673f8fb57a0b16c80eb76aa523eefa1ec6861744a. Dec 16 13:14:53.424084 containerd[1515]: 2025-12-16 13:14:53.059 [INFO][4174] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-calico--apiserver--864c65cd8b--nvfvd-eth0 calico-apiserver-864c65cd8b- calico-apiserver f96341ec-7be6-4b46-b044-72e177b23759 823 0 2025-12-16 13:14:23 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:864c65cd8b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal calico-apiserver-864c65cd8b-nvfvd eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif03d4a8171e [] [] }} ContainerID="0f2d112434dae7cc5597f748114ea5ce7158adc115fef7260e754ab9a29710ea" Namespace="calico-apiserver" Pod="calico-apiserver-864c65cd8b-nvfvd" WorkloadEndpoint="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-calico--apiserver--864c65cd8b--nvfvd-" Dec 16 13:14:53.424084 containerd[1515]: 2025-12-16 13:14:53.060 [INFO][4174] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0f2d112434dae7cc5597f748114ea5ce7158adc115fef7260e754ab9a29710ea" Namespace="calico-apiserver" Pod="calico-apiserver-864c65cd8b-nvfvd" WorkloadEndpoint="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-calico--apiserver--864c65cd8b--nvfvd-eth0" Dec 16 13:14:53.424084 containerd[1515]: 2025-12-16 13:14:53.151 [INFO][4198] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0f2d112434dae7cc5597f748114ea5ce7158adc115fef7260e754ab9a29710ea" 
HandleID="k8s-pod-network.0f2d112434dae7cc5597f748114ea5ce7158adc115fef7260e754ab9a29710ea" Workload="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-calico--apiserver--864c65cd8b--nvfvd-eth0" Dec 16 13:14:53.424084 containerd[1515]: 2025-12-16 13:14:53.151 [INFO][4198] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0f2d112434dae7cc5597f748114ea5ce7158adc115fef7260e754ab9a29710ea" HandleID="k8s-pod-network.0f2d112434dae7cc5597f748114ea5ce7158adc115fef7260e754ab9a29710ea" Workload="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-calico--apiserver--864c65cd8b--nvfvd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024fac0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal", "pod":"calico-apiserver-864c65cd8b-nvfvd", "timestamp":"2025-12-16 13:14:53.151222387 +0000 UTC"}, Hostname:"ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:14:53.424084 containerd[1515]: 2025-12-16 13:14:53.152 [INFO][4198] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:14:53.424084 containerd[1515]: 2025-12-16 13:14:53.193 [INFO][4198] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 13:14:53.424084 containerd[1515]: 2025-12-16 13:14:53.194 [INFO][4198] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal' Dec 16 13:14:53.424084 containerd[1515]: 2025-12-16 13:14:53.253 [INFO][4198] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0f2d112434dae7cc5597f748114ea5ce7158adc115fef7260e754ab9a29710ea" host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:53.424084 containerd[1515]: 2025-12-16 13:14:53.266 [INFO][4198] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:53.424084 containerd[1515]: 2025-12-16 13:14:53.286 [INFO][4198] ipam/ipam.go 511: Trying affinity for 192.168.105.128/26 host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:53.424084 containerd[1515]: 2025-12-16 13:14:53.290 [INFO][4198] ipam/ipam.go 158: Attempting to load block cidr=192.168.105.128/26 host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:53.424084 containerd[1515]: 2025-12-16 13:14:53.299 [INFO][4198] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.105.128/26 host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:53.424084 containerd[1515]: 2025-12-16 13:14:53.299 [INFO][4198] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.105.128/26 handle="k8s-pod-network.0f2d112434dae7cc5597f748114ea5ce7158adc115fef7260e754ab9a29710ea" host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:53.424084 containerd[1515]: 2025-12-16 13:14:53.302 [INFO][4198] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0f2d112434dae7cc5597f748114ea5ce7158adc115fef7260e754ab9a29710ea Dec 16 13:14:53.424084 containerd[1515]: 2025-12-16 13:14:53.313 [INFO][4198] ipam/ipam.go 1246: Writing block in order to claim IPs 
block=192.168.105.128/26 handle="k8s-pod-network.0f2d112434dae7cc5597f748114ea5ce7158adc115fef7260e754ab9a29710ea" host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:53.424084 containerd[1515]: 2025-12-16 13:14:53.341 [INFO][4198] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.105.132/26] block=192.168.105.128/26 handle="k8s-pod-network.0f2d112434dae7cc5597f748114ea5ce7158adc115fef7260e754ab9a29710ea" host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:53.424084 containerd[1515]: 2025-12-16 13:14:53.341 [INFO][4198] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.105.132/26] handle="k8s-pod-network.0f2d112434dae7cc5597f748114ea5ce7158adc115fef7260e754ab9a29710ea" host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:53.424084 containerd[1515]: 2025-12-16 13:14:53.343 [INFO][4198] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 13:14:53.424084 containerd[1515]: 2025-12-16 13:14:53.344 [INFO][4198] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.105.132/26] IPv6=[] ContainerID="0f2d112434dae7cc5597f748114ea5ce7158adc115fef7260e754ab9a29710ea" HandleID="k8s-pod-network.0f2d112434dae7cc5597f748114ea5ce7158adc115fef7260e754ab9a29710ea" Workload="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-calico--apiserver--864c65cd8b--nvfvd-eth0" Dec 16 13:14:53.427687 containerd[1515]: 2025-12-16 13:14:53.350 [INFO][4174] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0f2d112434dae7cc5597f748114ea5ce7158adc115fef7260e754ab9a29710ea" Namespace="calico-apiserver" Pod="calico-apiserver-864c65cd8b-nvfvd" WorkloadEndpoint="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-calico--apiserver--864c65cd8b--nvfvd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-calico--apiserver--864c65cd8b--nvfvd-eth0", GenerateName:"calico-apiserver-864c65cd8b-", Namespace:"calico-apiserver", SelfLink:"", UID:"f96341ec-7be6-4b46-b044-72e177b23759", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 14, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"864c65cd8b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-864c65cd8b-nvfvd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.105.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif03d4a8171e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:14:53.427687 containerd[1515]: 2025-12-16 13:14:53.350 [INFO][4174] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.105.132/32] ContainerID="0f2d112434dae7cc5597f748114ea5ce7158adc115fef7260e754ab9a29710ea" Namespace="calico-apiserver" Pod="calico-apiserver-864c65cd8b-nvfvd" WorkloadEndpoint="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-calico--apiserver--864c65cd8b--nvfvd-eth0" Dec 16 13:14:53.427687 containerd[1515]: 2025-12-16 13:14:53.350 [INFO][4174] 
cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif03d4a8171e ContainerID="0f2d112434dae7cc5597f748114ea5ce7158adc115fef7260e754ab9a29710ea" Namespace="calico-apiserver" Pod="calico-apiserver-864c65cd8b-nvfvd" WorkloadEndpoint="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-calico--apiserver--864c65cd8b--nvfvd-eth0" Dec 16 13:14:53.427687 containerd[1515]: 2025-12-16 13:14:53.369 [INFO][4174] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0f2d112434dae7cc5597f748114ea5ce7158adc115fef7260e754ab9a29710ea" Namespace="calico-apiserver" Pod="calico-apiserver-864c65cd8b-nvfvd" WorkloadEndpoint="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-calico--apiserver--864c65cd8b--nvfvd-eth0" Dec 16 13:14:53.427687 containerd[1515]: 2025-12-16 13:14:53.372 [INFO][4174] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0f2d112434dae7cc5597f748114ea5ce7158adc115fef7260e754ab9a29710ea" Namespace="calico-apiserver" Pod="calico-apiserver-864c65cd8b-nvfvd" WorkloadEndpoint="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-calico--apiserver--864c65cd8b--nvfvd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-calico--apiserver--864c65cd8b--nvfvd-eth0", GenerateName:"calico-apiserver-864c65cd8b-", Namespace:"calico-apiserver", SelfLink:"", UID:"f96341ec-7be6-4b46-b044-72e177b23759", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 14, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"864c65cd8b", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal", ContainerID:"0f2d112434dae7cc5597f748114ea5ce7158adc115fef7260e754ab9a29710ea", Pod:"calico-apiserver-864c65cd8b-nvfvd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.105.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif03d4a8171e", MAC:"12:5c:ad:b2:13:4f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:14:53.427687 containerd[1515]: 2025-12-16 13:14:53.411 [INFO][4174] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0f2d112434dae7cc5597f748114ea5ce7158adc115fef7260e754ab9a29710ea" Namespace="calico-apiserver" Pod="calico-apiserver-864c65cd8b-nvfvd" WorkloadEndpoint="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-calico--apiserver--864c65cd8b--nvfvd-eth0" Dec 16 13:14:53.458083 systemd-networkd[1412]: cali0d689860c77: Gained IPv6LL Dec 16 13:14:53.482191 containerd[1515]: time="2025-12-16T13:14:53.481612351Z" level=info msg="connecting to shim 0f2d112434dae7cc5597f748114ea5ce7158adc115fef7260e754ab9a29710ea" address="unix:///run/containerd/s/92637e608b29ac2aca6a82ba6d9e8348bf88427c213f688d2901ac1111593d71" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:14:53.553386 systemd[1]: Started cri-containerd-0f2d112434dae7cc5597f748114ea5ce7158adc115fef7260e754ab9a29710ea.scope - libcontainer container 0f2d112434dae7cc5597f748114ea5ce7158adc115fef7260e754ab9a29710ea. 
Dec 16 13:14:53.603900 containerd[1515]: time="2025-12-16T13:14:53.603838963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-864c65cd8b-5wmt7,Uid:b169b293-70bb-47ce-9436-682d8a8e825b,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"21d3b5e64ef70efed15e625673f8fb57a0b16c80eb76aa523eefa1ec6861744a\"" Dec 16 13:14:53.607655 containerd[1515]: time="2025-12-16T13:14:53.607538908Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:14:53.707854 containerd[1515]: time="2025-12-16T13:14:53.707429484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-864c65cd8b-nvfvd,Uid:f96341ec-7be6-4b46-b044-72e177b23759,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"0f2d112434dae7cc5597f748114ea5ce7158adc115fef7260e754ab9a29710ea\"" Dec 16 13:14:53.774444 containerd[1515]: time="2025-12-16T13:14:53.773756500Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:14:53.776096 containerd[1515]: time="2025-12-16T13:14:53.776030274Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:14:53.776338 containerd[1515]: time="2025-12-16T13:14:53.776047450Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 13:14:53.776490 kubelet[2772]: E1216 13:14:53.776427 2772 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 
13:14:53.778185 kubelet[2772]: E1216 13:14:53.776508 2772 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:14:53.778185 kubelet[2772]: E1216 13:14:53.776778 2772 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-864c65cd8b-5wmt7_calico-apiserver(b169b293-70bb-47ce-9436-682d8a8e825b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:14:53.778185 kubelet[2772]: E1216 13:14:53.776833 2772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-864c65cd8b-5wmt7" podUID="b169b293-70bb-47ce-9436-682d8a8e825b" Dec 16 13:14:53.778520 containerd[1515]: time="2025-12-16T13:14:53.777221110Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:14:53.943923 containerd[1515]: time="2025-12-16T13:14:53.943827084Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:14:53.945414 containerd[1515]: time="2025-12-16T13:14:53.945356405Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:14:53.945549 containerd[1515]: time="2025-12-16T13:14:53.945472455Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 13:14:53.945847 kubelet[2772]: E1216 13:14:53.945796 2772 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:14:53.945987 kubelet[2772]: E1216 13:14:53.945863 2772 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:14:53.946047 kubelet[2772]: E1216 13:14:53.946009 2772 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-864c65cd8b-nvfvd_calico-apiserver(f96341ec-7be6-4b46-b044-72e177b23759): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:14:53.946119 kubelet[2772]: E1216 13:14:53.946067 2772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-864c65cd8b-nvfvd" podUID="f96341ec-7be6-4b46-b044-72e177b23759" Dec 16 13:14:53.963192 systemd-networkd[1412]: vxlan.calico: Gained IPv6LL Dec 16 13:14:54.360738 kubelet[2772]: E1216 13:14:54.360658 2772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-864c65cd8b-nvfvd" podUID="f96341ec-7be6-4b46-b044-72e177b23759" Dec 16 13:14:54.369073 kubelet[2772]: E1216 13:14:54.369019 2772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-864c65cd8b-5wmt7" podUID="b169b293-70bb-47ce-9436-682d8a8e825b" Dec 16 13:14:54.369816 kubelet[2772]: E1216 13:14:54.369648 2772 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-z5rm2" podUID="a7531109-4d4b-4a8c-8928-f52ad25f6b55" Dec 16 13:14:54.892662 containerd[1515]: time="2025-12-16T13:14:54.892162926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-vb5hx,Uid:511ccaed-67b8-4172-ab54-38134ffd645e,Namespace:kube-system,Attempt:0,}" Dec 16 13:14:54.894664 containerd[1515]: time="2025-12-16T13:14:54.894575408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-jpx9n,Uid:d899c118-add8-4703-b1b2-47b81276ce81,Namespace:kube-system,Attempt:0,}" Dec 16 13:14:54.924083 systemd-networkd[1412]: calibeff0fb078f: Gained IPv6LL Dec 16 13:14:55.120064 systemd-networkd[1412]: calicc228c22fe3: Link UP Dec 16 13:14:55.125513 systemd-networkd[1412]: calicc228c22fe3: Gained carrier Dec 16 13:14:55.159860 containerd[1515]: 2025-12-16 13:14:54.979 [INFO][4355] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--vb5hx-eth0 coredns-66bc5c9577- kube-system 511ccaed-67b8-4172-ab54-38134ffd645e 819 0 2025-12-16 13:14:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] 
[]} {k8s ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal coredns-66bc5c9577-vb5hx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calicc228c22fe3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="f44d5b75c5d9d6dd61bb896572fe8f215429d8797591bdaf116e581cf66c04c3" Namespace="kube-system" Pod="coredns-66bc5c9577-vb5hx" WorkloadEndpoint="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--vb5hx-" Dec 16 13:14:55.159860 containerd[1515]: 2025-12-16 13:14:54.979 [INFO][4355] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f44d5b75c5d9d6dd61bb896572fe8f215429d8797591bdaf116e581cf66c04c3" Namespace="kube-system" Pod="coredns-66bc5c9577-vb5hx" WorkloadEndpoint="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--vb5hx-eth0" Dec 16 13:14:55.159860 containerd[1515]: 2025-12-16 13:14:55.038 [INFO][4381] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f44d5b75c5d9d6dd61bb896572fe8f215429d8797591bdaf116e581cf66c04c3" HandleID="k8s-pod-network.f44d5b75c5d9d6dd61bb896572fe8f215429d8797591bdaf116e581cf66c04c3" Workload="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--vb5hx-eth0" Dec 16 13:14:55.159860 containerd[1515]: 2025-12-16 13:14:55.040 [INFO][4381] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f44d5b75c5d9d6dd61bb896572fe8f215429d8797591bdaf116e581cf66c04c3" HandleID="k8s-pod-network.f44d5b75c5d9d6dd61bb896572fe8f215429d8797591bdaf116e581cf66c04c3" Workload="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--vb5hx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5ea0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal", "pod":"coredns-66bc5c9577-vb5hx", 
"timestamp":"2025-12-16 13:14:55.038783813 +0000 UTC"}, Hostname:"ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:14:55.159860 containerd[1515]: 2025-12-16 13:14:55.040 [INFO][4381] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:14:55.159860 containerd[1515]: 2025-12-16 13:14:55.040 [INFO][4381] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 13:14:55.159860 containerd[1515]: 2025-12-16 13:14:55.040 [INFO][4381] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal' Dec 16 13:14:55.159860 containerd[1515]: 2025-12-16 13:14:55.056 [INFO][4381] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f44d5b75c5d9d6dd61bb896572fe8f215429d8797591bdaf116e581cf66c04c3" host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:55.159860 containerd[1515]: 2025-12-16 13:14:55.065 [INFO][4381] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:55.159860 containerd[1515]: 2025-12-16 13:14:55.070 [INFO][4381] ipam/ipam.go 511: Trying affinity for 192.168.105.128/26 host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:55.159860 containerd[1515]: 2025-12-16 13:14:55.073 [INFO][4381] ipam/ipam.go 158: Attempting to load block cidr=192.168.105.128/26 host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:55.159860 containerd[1515]: 2025-12-16 13:14:55.077 [INFO][4381] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.105.128/26 host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:55.159860 containerd[1515]: 2025-12-16 
13:14:55.077 [INFO][4381] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.105.128/26 handle="k8s-pod-network.f44d5b75c5d9d6dd61bb896572fe8f215429d8797591bdaf116e581cf66c04c3" host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:55.159860 containerd[1515]: 2025-12-16 13:14:55.080 [INFO][4381] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f44d5b75c5d9d6dd61bb896572fe8f215429d8797591bdaf116e581cf66c04c3 Dec 16 13:14:55.159860 containerd[1515]: 2025-12-16 13:14:55.088 [INFO][4381] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.105.128/26 handle="k8s-pod-network.f44d5b75c5d9d6dd61bb896572fe8f215429d8797591bdaf116e581cf66c04c3" host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:55.159860 containerd[1515]: 2025-12-16 13:14:55.098 [INFO][4381] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.105.133/26] block=192.168.105.128/26 handle="k8s-pod-network.f44d5b75c5d9d6dd61bb896572fe8f215429d8797591bdaf116e581cf66c04c3" host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:55.159860 containerd[1515]: 2025-12-16 13:14:55.098 [INFO][4381] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.105.133/26] handle="k8s-pod-network.f44d5b75c5d9d6dd61bb896572fe8f215429d8797591bdaf116e581cf66c04c3" host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:55.159860 containerd[1515]: 2025-12-16 13:14:55.098 [INFO][4381] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 13:14:55.159860 containerd[1515]: 2025-12-16 13:14:55.098 [INFO][4381] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.105.133/26] IPv6=[] ContainerID="f44d5b75c5d9d6dd61bb896572fe8f215429d8797591bdaf116e581cf66c04c3" HandleID="k8s-pod-network.f44d5b75c5d9d6dd61bb896572fe8f215429d8797591bdaf116e581cf66c04c3" Workload="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--vb5hx-eth0" Dec 16 13:14:55.164174 containerd[1515]: 2025-12-16 13:14:55.103 [INFO][4355] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f44d5b75c5d9d6dd61bb896572fe8f215429d8797591bdaf116e581cf66c04c3" Namespace="kube-system" Pod="coredns-66bc5c9577-vb5hx" WorkloadEndpoint="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--vb5hx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--vb5hx-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"511ccaed-67b8-4172-ab54-38134ffd645e", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 14, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-66bc5c9577-vb5hx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.105.133/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicc228c22fe3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:14:55.164174 containerd[1515]: 2025-12-16 13:14:55.104 [INFO][4355] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.105.133/32] ContainerID="f44d5b75c5d9d6dd61bb896572fe8f215429d8797591bdaf116e581cf66c04c3" Namespace="kube-system" Pod="coredns-66bc5c9577-vb5hx" WorkloadEndpoint="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--vb5hx-eth0" Dec 16 13:14:55.164174 containerd[1515]: 2025-12-16 13:14:55.104 [INFO][4355] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicc228c22fe3 ContainerID="f44d5b75c5d9d6dd61bb896572fe8f215429d8797591bdaf116e581cf66c04c3" Namespace="kube-system" Pod="coredns-66bc5c9577-vb5hx" WorkloadEndpoint="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--vb5hx-eth0" Dec 16 13:14:55.164174 containerd[1515]: 2025-12-16 13:14:55.126 [INFO][4355] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="f44d5b75c5d9d6dd61bb896572fe8f215429d8797591bdaf116e581cf66c04c3" Namespace="kube-system" Pod="coredns-66bc5c9577-vb5hx" WorkloadEndpoint="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--vb5hx-eth0" Dec 16 13:14:55.164469 containerd[1515]: 2025-12-16 13:14:55.130 [INFO][4355] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f44d5b75c5d9d6dd61bb896572fe8f215429d8797591bdaf116e581cf66c04c3" Namespace="kube-system" Pod="coredns-66bc5c9577-vb5hx" WorkloadEndpoint="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--vb5hx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--vb5hx-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"511ccaed-67b8-4172-ab54-38134ffd645e", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 14, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal", ContainerID:"f44d5b75c5d9d6dd61bb896572fe8f215429d8797591bdaf116e581cf66c04c3", Pod:"coredns-66bc5c9577-vb5hx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.105.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", 
"ksa.kube-system.coredns"}, InterfaceName:"calicc228c22fe3", MAC:"b6:b8:7a:a5:ad:25", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:14:55.164469 containerd[1515]: 2025-12-16 13:14:55.153 [INFO][4355] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f44d5b75c5d9d6dd61bb896572fe8f215429d8797591bdaf116e581cf66c04c3" Namespace="kube-system" Pod="coredns-66bc5c9577-vb5hx" WorkloadEndpoint="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--vb5hx-eth0" Dec 16 13:14:55.227323 containerd[1515]: time="2025-12-16T13:14:55.226397427Z" level=info msg="connecting to shim f44d5b75c5d9d6dd61bb896572fe8f215429d8797591bdaf116e581cf66c04c3" address="unix:///run/containerd/s/ac2b47a78fb4cb1bc10547e602b1f156b2d042f6c5cd757cb3dd88a73e20c6b0" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:14:55.252536 kubelet[2772]: I1216 13:14:55.252475 2772 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 13:14:55.283153 systemd-networkd[1412]: cali04eefb8633a: Link UP Dec 16 13:14:55.283560 systemd-networkd[1412]: cali04eefb8633a: Gained carrier Dec 16 13:14:55.327762 containerd[1515]: 2025-12-16 13:14:54.984 
[INFO][4359] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--jpx9n-eth0 coredns-66bc5c9577- kube-system d899c118-add8-4703-b1b2-47b81276ce81 820 0 2025-12-16 13:14:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal coredns-66bc5c9577-jpx9n eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali04eefb8633a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="18b684fa4bc9291d1811df0d8c758bee985cefacd63966246a63d8a3317208a8" Namespace="kube-system" Pod="coredns-66bc5c9577-jpx9n" WorkloadEndpoint="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--jpx9n-" Dec 16 13:14:55.327762 containerd[1515]: 2025-12-16 13:14:54.990 [INFO][4359] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="18b684fa4bc9291d1811df0d8c758bee985cefacd63966246a63d8a3317208a8" Namespace="kube-system" Pod="coredns-66bc5c9577-jpx9n" WorkloadEndpoint="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--jpx9n-eth0" Dec 16 13:14:55.327762 containerd[1515]: 2025-12-16 13:14:55.059 [INFO][4387] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="18b684fa4bc9291d1811df0d8c758bee985cefacd63966246a63d8a3317208a8" HandleID="k8s-pod-network.18b684fa4bc9291d1811df0d8c758bee985cefacd63966246a63d8a3317208a8" Workload="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--jpx9n-eth0" Dec 16 13:14:55.327762 containerd[1515]: 2025-12-16 13:14:55.060 [INFO][4387] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="18b684fa4bc9291d1811df0d8c758bee985cefacd63966246a63d8a3317208a8" HandleID="k8s-pod-network.18b684fa4bc9291d1811df0d8c758bee985cefacd63966246a63d8a3317208a8" Workload="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--jpx9n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024efe0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal", "pod":"coredns-66bc5c9577-jpx9n", "timestamp":"2025-12-16 13:14:55.059498923 +0000 UTC"}, Hostname:"ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:14:55.327762 containerd[1515]: 2025-12-16 13:14:55.060 [INFO][4387] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:14:55.327762 containerd[1515]: 2025-12-16 13:14:55.098 [INFO][4387] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 13:14:55.327762 containerd[1515]: 2025-12-16 13:14:55.098 [INFO][4387] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal' Dec 16 13:14:55.327762 containerd[1515]: 2025-12-16 13:14:55.161 [INFO][4387] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.18b684fa4bc9291d1811df0d8c758bee985cefacd63966246a63d8a3317208a8" host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:55.327762 containerd[1515]: 2025-12-16 13:14:55.176 [INFO][4387] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:55.327762 containerd[1515]: 2025-12-16 13:14:55.190 [INFO][4387] ipam/ipam.go 511: Trying affinity for 192.168.105.128/26 host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:55.327762 containerd[1515]: 2025-12-16 13:14:55.198 [INFO][4387] ipam/ipam.go 158: Attempting to load block cidr=192.168.105.128/26 host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:55.327762 containerd[1515]: 2025-12-16 13:14:55.211 [INFO][4387] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.105.128/26 host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:55.327762 containerd[1515]: 2025-12-16 13:14:55.212 [INFO][4387] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.105.128/26 handle="k8s-pod-network.18b684fa4bc9291d1811df0d8c758bee985cefacd63966246a63d8a3317208a8" host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:55.327762 containerd[1515]: 2025-12-16 13:14:55.218 [INFO][4387] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.18b684fa4bc9291d1811df0d8c758bee985cefacd63966246a63d8a3317208a8 Dec 16 13:14:55.327762 containerd[1515]: 2025-12-16 13:14:55.228 [INFO][4387] ipam/ipam.go 1246: Writing block in order to claim IPs 
block=192.168.105.128/26 handle="k8s-pod-network.18b684fa4bc9291d1811df0d8c758bee985cefacd63966246a63d8a3317208a8" host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:55.327762 containerd[1515]: 2025-12-16 13:14:55.243 [INFO][4387] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.105.134/26] block=192.168.105.128/26 handle="k8s-pod-network.18b684fa4bc9291d1811df0d8c758bee985cefacd63966246a63d8a3317208a8" host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:55.327762 containerd[1515]: 2025-12-16 13:14:55.244 [INFO][4387] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.105.134/26] handle="k8s-pod-network.18b684fa4bc9291d1811df0d8c758bee985cefacd63966246a63d8a3317208a8" host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:55.327762 containerd[1515]: 2025-12-16 13:14:55.245 [INFO][4387] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 13:14:55.327762 containerd[1515]: 2025-12-16 13:14:55.245 [INFO][4387] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.105.134/26] IPv6=[] ContainerID="18b684fa4bc9291d1811df0d8c758bee985cefacd63966246a63d8a3317208a8" HandleID="k8s-pod-network.18b684fa4bc9291d1811df0d8c758bee985cefacd63966246a63d8a3317208a8" Workload="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--jpx9n-eth0" Dec 16 13:14:55.329234 containerd[1515]: 2025-12-16 13:14:55.261 [INFO][4359] cni-plugin/k8s.go 418: Populated endpoint ContainerID="18b684fa4bc9291d1811df0d8c758bee985cefacd63966246a63d8a3317208a8" Namespace="kube-system" Pod="coredns-66bc5c9577-jpx9n" WorkloadEndpoint="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--jpx9n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--jpx9n-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"d899c118-add8-4703-b1b2-47b81276ce81", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 14, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-66bc5c9577-jpx9n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.105.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali04eefb8633a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:14:55.329234 containerd[1515]: 2025-12-16 13:14:55.261 [INFO][4359] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.105.134/32] ContainerID="18b684fa4bc9291d1811df0d8c758bee985cefacd63966246a63d8a3317208a8" Namespace="kube-system" Pod="coredns-66bc5c9577-jpx9n" WorkloadEndpoint="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--jpx9n-eth0" Dec 16 13:14:55.329234 containerd[1515]: 2025-12-16 13:14:55.261 [INFO][4359] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali04eefb8633a ContainerID="18b684fa4bc9291d1811df0d8c758bee985cefacd63966246a63d8a3317208a8" Namespace="kube-system" Pod="coredns-66bc5c9577-jpx9n" WorkloadEndpoint="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--jpx9n-eth0" Dec 16 13:14:55.329234 containerd[1515]: 2025-12-16 13:14:55.287 [INFO][4359] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="18b684fa4bc9291d1811df0d8c758bee985cefacd63966246a63d8a3317208a8" Namespace="kube-system" Pod="coredns-66bc5c9577-jpx9n" WorkloadEndpoint="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--jpx9n-eth0" Dec 16 13:14:55.329520 containerd[1515]: 2025-12-16 13:14:55.289 [INFO][4359] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="18b684fa4bc9291d1811df0d8c758bee985cefacd63966246a63d8a3317208a8" Namespace="kube-system" Pod="coredns-66bc5c9577-jpx9n" WorkloadEndpoint="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--jpx9n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--jpx9n-eth0", GenerateName:"coredns-66bc5c9577-", 
Namespace:"kube-system", SelfLink:"", UID:"d899c118-add8-4703-b1b2-47b81276ce81", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 14, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal", ContainerID:"18b684fa4bc9291d1811df0d8c758bee985cefacd63966246a63d8a3317208a8", Pod:"coredns-66bc5c9577-jpx9n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.105.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali04eefb8633a", MAC:"46:6c:fa:e8:7f:9d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:14:55.329520 
containerd[1515]: 2025-12-16 13:14:55.315 [INFO][4359] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="18b684fa4bc9291d1811df0d8c758bee985cefacd63966246a63d8a3317208a8" Namespace="kube-system" Pod="coredns-66bc5c9577-jpx9n" WorkloadEndpoint="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--jpx9n-eth0" Dec 16 13:14:55.365056 systemd[1]: Started cri-containerd-f44d5b75c5d9d6dd61bb896572fe8f215429d8797591bdaf116e581cf66c04c3.scope - libcontainer container f44d5b75c5d9d6dd61bb896572fe8f215429d8797591bdaf116e581cf66c04c3. Dec 16 13:14:55.372428 systemd-networkd[1412]: calif03d4a8171e: Gained IPv6LL Dec 16 13:14:55.392794 kubelet[2772]: E1216 13:14:55.392507 2772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-864c65cd8b-nvfvd" podUID="f96341ec-7be6-4b46-b044-72e177b23759" Dec 16 13:14:55.394558 kubelet[2772]: E1216 13:14:55.393674 2772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-864c65cd8b-5wmt7" podUID="b169b293-70bb-47ce-9436-682d8a8e825b" Dec 16 13:14:55.436953 containerd[1515]: 
time="2025-12-16T13:14:55.436769409Z" level=info msg="connecting to shim 18b684fa4bc9291d1811df0d8c758bee985cefacd63966246a63d8a3317208a8" address="unix:///run/containerd/s/99cd5a563425eb21020f017770235d0834df0a9e0bc67e50c0571d28cf1e7f58" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:14:55.507166 systemd[1]: Started cri-containerd-18b684fa4bc9291d1811df0d8c758bee985cefacd63966246a63d8a3317208a8.scope - libcontainer container 18b684fa4bc9291d1811df0d8c758bee985cefacd63966246a63d8a3317208a8. Dec 16 13:14:55.580608 containerd[1515]: time="2025-12-16T13:14:55.580504925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-vb5hx,Uid:511ccaed-67b8-4172-ab54-38134ffd645e,Namespace:kube-system,Attempt:0,} returns sandbox id \"f44d5b75c5d9d6dd61bb896572fe8f215429d8797591bdaf116e581cf66c04c3\"" Dec 16 13:14:55.591283 containerd[1515]: time="2025-12-16T13:14:55.591224401Z" level=info msg="CreateContainer within sandbox \"f44d5b75c5d9d6dd61bb896572fe8f215429d8797591bdaf116e581cf66c04c3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 13:14:55.609866 containerd[1515]: time="2025-12-16T13:14:55.608638656Z" level=info msg="Container c7ca115c13f743664d02b980592652138c68e634cd1ec1063fe88d0d6d9f10be: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:14:55.625117 containerd[1515]: time="2025-12-16T13:14:55.624957783Z" level=info msg="CreateContainer within sandbox \"f44d5b75c5d9d6dd61bb896572fe8f215429d8797591bdaf116e581cf66c04c3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c7ca115c13f743664d02b980592652138c68e634cd1ec1063fe88d0d6d9f10be\"" Dec 16 13:14:55.628773 containerd[1515]: time="2025-12-16T13:14:55.628722542Z" level=info msg="StartContainer for \"c7ca115c13f743664d02b980592652138c68e634cd1ec1063fe88d0d6d9f10be\"" Dec 16 13:14:55.635466 containerd[1515]: time="2025-12-16T13:14:55.634505834Z" level=info msg="connecting to shim c7ca115c13f743664d02b980592652138c68e634cd1ec1063fe88d0d6d9f10be" 
address="unix:///run/containerd/s/ac2b47a78fb4cb1bc10547e602b1f156b2d042f6c5cd757cb3dd88a73e20c6b0" protocol=ttrpc version=3 Dec 16 13:14:55.681222 systemd[1]: Started cri-containerd-c7ca115c13f743664d02b980592652138c68e634cd1ec1063fe88d0d6d9f10be.scope - libcontainer container c7ca115c13f743664d02b980592652138c68e634cd1ec1063fe88d0d6d9f10be. Dec 16 13:14:55.694389 containerd[1515]: time="2025-12-16T13:14:55.693582417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-jpx9n,Uid:d899c118-add8-4703-b1b2-47b81276ce81,Namespace:kube-system,Attempt:0,} returns sandbox id \"18b684fa4bc9291d1811df0d8c758bee985cefacd63966246a63d8a3317208a8\"" Dec 16 13:14:55.706869 containerd[1515]: time="2025-12-16T13:14:55.706154224Z" level=info msg="CreateContainer within sandbox \"18b684fa4bc9291d1811df0d8c758bee985cefacd63966246a63d8a3317208a8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 13:14:55.726447 containerd[1515]: time="2025-12-16T13:14:55.726303051Z" level=info msg="Container dafa85e5489b7168f3724a8a879e128968b800b12c10050c012d126ea647abc8: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:14:55.743889 containerd[1515]: time="2025-12-16T13:14:55.743827175Z" level=info msg="CreateContainer within sandbox \"18b684fa4bc9291d1811df0d8c758bee985cefacd63966246a63d8a3317208a8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dafa85e5489b7168f3724a8a879e128968b800b12c10050c012d126ea647abc8\"" Dec 16 13:14:55.747235 containerd[1515]: time="2025-12-16T13:14:55.747152156Z" level=info msg="StartContainer for \"dafa85e5489b7168f3724a8a879e128968b800b12c10050c012d126ea647abc8\"" Dec 16 13:14:55.751091 containerd[1515]: time="2025-12-16T13:14:55.750868554Z" level=info msg="connecting to shim dafa85e5489b7168f3724a8a879e128968b800b12c10050c012d126ea647abc8" address="unix:///run/containerd/s/99cd5a563425eb21020f017770235d0834df0a9e0bc67e50c0571d28cf1e7f58" protocol=ttrpc version=3 Dec 16 13:14:55.776125 
containerd[1515]: time="2025-12-16T13:14:55.776063555Z" level=info msg="StartContainer for \"c7ca115c13f743664d02b980592652138c68e634cd1ec1063fe88d0d6d9f10be\" returns successfully" Dec 16 13:14:55.813473 systemd[1]: Started cri-containerd-dafa85e5489b7168f3724a8a879e128968b800b12c10050c012d126ea647abc8.scope - libcontainer container dafa85e5489b7168f3724a8a879e128968b800b12c10050c012d126ea647abc8. Dec 16 13:14:55.898676 containerd[1515]: time="2025-12-16T13:14:55.897224908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-sjcz4,Uid:c5258db0-e1f3-4670-a2ed-9cdfef209601,Namespace:calico-system,Attempt:0,}" Dec 16 13:14:55.901101 containerd[1515]: time="2025-12-16T13:14:55.901036695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-569f6b4df6-2lfjk,Uid:92bcad1c-8640-450a-974e-84155204fa1a,Namespace:calico-apiserver,Attempt:0,}" Dec 16 13:14:55.902678 containerd[1515]: time="2025-12-16T13:14:55.902575434Z" level=info msg="StartContainer for \"dafa85e5489b7168f3724a8a879e128968b800b12c10050c012d126ea647abc8\" returns successfully" Dec 16 13:14:55.905784 containerd[1515]: time="2025-12-16T13:14:55.905747917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c497476dd-q67hl,Uid:a987f064-9fb2-45ff-bff7-ad75d97d3211,Namespace:calico-system,Attempt:0,}" Dec 16 13:14:56.320074 systemd-networkd[1412]: calie5973ec9868: Link UP Dec 16 13:14:56.320445 systemd-networkd[1412]: calie5973ec9868: Gained carrier Dec 16 13:14:56.331247 systemd-networkd[1412]: cali04eefb8633a: Gained IPv6LL Dec 16 13:14:56.357147 containerd[1515]: 2025-12-16 13:14:56.100 [INFO][4600] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-calico--kube--controllers--5c497476dd--q67hl-eth0 calico-kube-controllers-5c497476dd- calico-system a987f064-9fb2-45ff-bff7-ad75d97d3211 821 0 2025-12-16 
13:14:31 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5c497476dd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal calico-kube-controllers-5c497476dd-q67hl eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie5973ec9868 [] [] }} ContainerID="f243c64b45803223cb88df6ea800132f683dc787a2de9888b0f815cbea623179" Namespace="calico-system" Pod="calico-kube-controllers-5c497476dd-q67hl" WorkloadEndpoint="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-calico--kube--controllers--5c497476dd--q67hl-" Dec 16 13:14:56.357147 containerd[1515]: 2025-12-16 13:14:56.101 [INFO][4600] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f243c64b45803223cb88df6ea800132f683dc787a2de9888b0f815cbea623179" Namespace="calico-system" Pod="calico-kube-controllers-5c497476dd-q67hl" WorkloadEndpoint="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-calico--kube--controllers--5c497476dd--q67hl-eth0" Dec 16 13:14:56.357147 containerd[1515]: 2025-12-16 13:14:56.224 [INFO][4637] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f243c64b45803223cb88df6ea800132f683dc787a2de9888b0f815cbea623179" HandleID="k8s-pod-network.f243c64b45803223cb88df6ea800132f683dc787a2de9888b0f815cbea623179" Workload="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-calico--kube--controllers--5c497476dd--q67hl-eth0" Dec 16 13:14:56.357147 containerd[1515]: 2025-12-16 13:14:56.225 [INFO][4637] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f243c64b45803223cb88df6ea800132f683dc787a2de9888b0f815cbea623179" HandleID="k8s-pod-network.f243c64b45803223cb88df6ea800132f683dc787a2de9888b0f815cbea623179" 
Workload="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-calico--kube--controllers--5c497476dd--q67hl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003520d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal", "pod":"calico-kube-controllers-5c497476dd-q67hl", "timestamp":"2025-12-16 13:14:56.224023948 +0000 UTC"}, Hostname:"ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:14:56.357147 containerd[1515]: 2025-12-16 13:14:56.226 [INFO][4637] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:14:56.357147 containerd[1515]: 2025-12-16 13:14:56.226 [INFO][4637] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 13:14:56.357147 containerd[1515]: 2025-12-16 13:14:56.226 [INFO][4637] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal' Dec 16 13:14:56.357147 containerd[1515]: 2025-12-16 13:14:56.248 [INFO][4637] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f243c64b45803223cb88df6ea800132f683dc787a2de9888b0f815cbea623179" host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:56.357147 containerd[1515]: 2025-12-16 13:14:56.255 [INFO][4637] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:56.357147 containerd[1515]: 2025-12-16 13:14:56.262 [INFO][4637] ipam/ipam.go 511: Trying affinity for 192.168.105.128/26 host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:56.357147 containerd[1515]: 2025-12-16 13:14:56.266 [INFO][4637] ipam/ipam.go 158: Attempting to load block 
cidr=192.168.105.128/26 host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:56.357147 containerd[1515]: 2025-12-16 13:14:56.271 [INFO][4637] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.105.128/26 host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:56.357147 containerd[1515]: 2025-12-16 13:14:56.271 [INFO][4637] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.105.128/26 handle="k8s-pod-network.f243c64b45803223cb88df6ea800132f683dc787a2de9888b0f815cbea623179" host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:56.357147 containerd[1515]: 2025-12-16 13:14:56.280 [INFO][4637] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f243c64b45803223cb88df6ea800132f683dc787a2de9888b0f815cbea623179 Dec 16 13:14:56.357147 containerd[1515]: 2025-12-16 13:14:56.290 [INFO][4637] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.105.128/26 handle="k8s-pod-network.f243c64b45803223cb88df6ea800132f683dc787a2de9888b0f815cbea623179" host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:56.357147 containerd[1515]: 2025-12-16 13:14:56.301 [INFO][4637] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.105.135/26] block=192.168.105.128/26 handle="k8s-pod-network.f243c64b45803223cb88df6ea800132f683dc787a2de9888b0f815cbea623179" host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:56.357147 containerd[1515]: 2025-12-16 13:14:56.302 [INFO][4637] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.105.135/26] handle="k8s-pod-network.f243c64b45803223cb88df6ea800132f683dc787a2de9888b0f815cbea623179" host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:56.357147 containerd[1515]: 2025-12-16 13:14:56.302 [INFO][4637] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 13:14:56.357147 containerd[1515]: 2025-12-16 13:14:56.302 [INFO][4637] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.105.135/26] IPv6=[] ContainerID="f243c64b45803223cb88df6ea800132f683dc787a2de9888b0f815cbea623179" HandleID="k8s-pod-network.f243c64b45803223cb88df6ea800132f683dc787a2de9888b0f815cbea623179" Workload="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-calico--kube--controllers--5c497476dd--q67hl-eth0" Dec 16 13:14:56.361616 containerd[1515]: 2025-12-16 13:14:56.308 [INFO][4600] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f243c64b45803223cb88df6ea800132f683dc787a2de9888b0f815cbea623179" Namespace="calico-system" Pod="calico-kube-controllers-5c497476dd-q67hl" WorkloadEndpoint="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-calico--kube--controllers--5c497476dd--q67hl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-calico--kube--controllers--5c497476dd--q67hl-eth0", GenerateName:"calico-kube-controllers-5c497476dd-", Namespace:"calico-system", SelfLink:"", UID:"a987f064-9fb2-45ff-bff7-ad75d97d3211", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 14, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5c497476dd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-kube-controllers-5c497476dd-q67hl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.105.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie5973ec9868", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:14:56.361616 containerd[1515]: 2025-12-16 13:14:56.309 [INFO][4600] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.105.135/32] ContainerID="f243c64b45803223cb88df6ea800132f683dc787a2de9888b0f815cbea623179" Namespace="calico-system" Pod="calico-kube-controllers-5c497476dd-q67hl" WorkloadEndpoint="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-calico--kube--controllers--5c497476dd--q67hl-eth0" Dec 16 13:14:56.361616 containerd[1515]: 2025-12-16 13:14:56.309 [INFO][4600] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie5973ec9868 ContainerID="f243c64b45803223cb88df6ea800132f683dc787a2de9888b0f815cbea623179" Namespace="calico-system" Pod="calico-kube-controllers-5c497476dd-q67hl" WorkloadEndpoint="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-calico--kube--controllers--5c497476dd--q67hl-eth0" Dec 16 13:14:56.361616 containerd[1515]: 2025-12-16 13:14:56.318 [INFO][4600] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f243c64b45803223cb88df6ea800132f683dc787a2de9888b0f815cbea623179" Namespace="calico-system" Pod="calico-kube-controllers-5c497476dd-q67hl" WorkloadEndpoint="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-calico--kube--controllers--5c497476dd--q67hl-eth0" Dec 16 13:14:56.361616 containerd[1515]: 2025-12-16 13:14:56.319 [INFO][4600] cni-plugin/k8s.go 446: Added Mac, interface name, and active container 
ID to endpoint ContainerID="f243c64b45803223cb88df6ea800132f683dc787a2de9888b0f815cbea623179" Namespace="calico-system" Pod="calico-kube-controllers-5c497476dd-q67hl" WorkloadEndpoint="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-calico--kube--controllers--5c497476dd--q67hl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-calico--kube--controllers--5c497476dd--q67hl-eth0", GenerateName:"calico-kube-controllers-5c497476dd-", Namespace:"calico-system", SelfLink:"", UID:"a987f064-9fb2-45ff-bff7-ad75d97d3211", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 14, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5c497476dd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal", ContainerID:"f243c64b45803223cb88df6ea800132f683dc787a2de9888b0f815cbea623179", Pod:"calico-kube-controllers-5c497476dd-q67hl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.105.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie5973ec9868", MAC:"2a:0e:fb:c3:4b:77", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:14:56.361616 containerd[1515]: 2025-12-16 13:14:56.353 [INFO][4600] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f243c64b45803223cb88df6ea800132f683dc787a2de9888b0f815cbea623179" Namespace="calico-system" Pod="calico-kube-controllers-5c497476dd-q67hl" WorkloadEndpoint="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-calico--kube--controllers--5c497476dd--q67hl-eth0" Dec 16 13:14:56.443278 containerd[1515]: time="2025-12-16T13:14:56.443159858Z" level=info msg="connecting to shim f243c64b45803223cb88df6ea800132f683dc787a2de9888b0f815cbea623179" address="unix:///run/containerd/s/e1043f5bc7010700b2b10dfdee04ab22dac74a7e1a259e1538cb3df5a43a91ad" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:14:56.459227 systemd-networkd[1412]: calicc228c22fe3: Gained IPv6LL Dec 16 13:14:56.515759 systemd-networkd[1412]: calif13af2ff506: Link UP Dec 16 13:14:56.525663 kubelet[2772]: I1216 13:14:56.525263 2772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-jpx9n" podStartSLOduration=43.52523529 podStartE2EDuration="43.52523529s" podCreationTimestamp="2025-12-16 13:14:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:14:56.457095809 +0000 UTC m=+47.832426428" watchObservedRunningTime="2025-12-16 13:14:56.52523529 +0000 UTC m=+47.900565902" Dec 16 13:14:56.530078 systemd-networkd[1412]: calif13af2ff506: Gained carrier Dec 16 13:14:56.548457 systemd[1]: Started cri-containerd-f243c64b45803223cb88df6ea800132f683dc787a2de9888b0f815cbea623179.scope - libcontainer container f243c64b45803223cb88df6ea800132f683dc787a2de9888b0f815cbea623179. 
Dec 16 13:14:56.587508 kubelet[2772]: I1216 13:14:56.586722 2772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-vb5hx" podStartSLOduration=43.586547345 podStartE2EDuration="43.586547345s" podCreationTimestamp="2025-12-16 13:14:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:14:56.523795718 +0000 UTC m=+47.899126334" watchObservedRunningTime="2025-12-16 13:14:56.586547345 +0000 UTC m=+47.961877962" Dec 16 13:14:56.595855 containerd[1515]: 2025-12-16 13:14:56.109 [INFO][4606] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-calico--apiserver--569f6b4df6--2lfjk-eth0 calico-apiserver-569f6b4df6- calico-apiserver 92bcad1c-8640-450a-974e-84155204fa1a 829 0 2025-12-16 13:14:24 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:569f6b4df6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal calico-apiserver-569f6b4df6-2lfjk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif13af2ff506 [] [] }} ContainerID="1b562db7fdc656733a521a6ccc208a172664afeee1cfeab577e0c8d3a5e3f23e" Namespace="calico-apiserver" Pod="calico-apiserver-569f6b4df6-2lfjk" WorkloadEndpoint="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-calico--apiserver--569f6b4df6--2lfjk-" Dec 16 13:14:56.595855 containerd[1515]: 2025-12-16 13:14:56.110 [INFO][4606] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1b562db7fdc656733a521a6ccc208a172664afeee1cfeab577e0c8d3a5e3f23e" Namespace="calico-apiserver" 
Pod="calico-apiserver-569f6b4df6-2lfjk" WorkloadEndpoint="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-calico--apiserver--569f6b4df6--2lfjk-eth0" Dec 16 13:14:56.595855 containerd[1515]: 2025-12-16 13:14:56.238 [INFO][4644] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1b562db7fdc656733a521a6ccc208a172664afeee1cfeab577e0c8d3a5e3f23e" HandleID="k8s-pod-network.1b562db7fdc656733a521a6ccc208a172664afeee1cfeab577e0c8d3a5e3f23e" Workload="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-calico--apiserver--569f6b4df6--2lfjk-eth0" Dec 16 13:14:56.595855 containerd[1515]: 2025-12-16 13:14:56.241 [INFO][4644] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1b562db7fdc656733a521a6ccc208a172664afeee1cfeab577e0c8d3a5e3f23e" HandleID="k8s-pod-network.1b562db7fdc656733a521a6ccc208a172664afeee1cfeab577e0c8d3a5e3f23e" Workload="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-calico--apiserver--569f6b4df6--2lfjk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e6e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal", "pod":"calico-apiserver-569f6b4df6-2lfjk", "timestamp":"2025-12-16 13:14:56.238087392 +0000 UTC"}, Hostname:"ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:14:56.595855 containerd[1515]: 2025-12-16 13:14:56.241 [INFO][4644] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:14:56.595855 containerd[1515]: 2025-12-16 13:14:56.302 [INFO][4644] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 13:14:56.595855 containerd[1515]: 2025-12-16 13:14:56.303 [INFO][4644] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal' Dec 16 13:14:56.595855 containerd[1515]: 2025-12-16 13:14:56.350 [INFO][4644] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1b562db7fdc656733a521a6ccc208a172664afeee1cfeab577e0c8d3a5e3f23e" host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:56.595855 containerd[1515]: 2025-12-16 13:14:56.366 [INFO][4644] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:56.595855 containerd[1515]: 2025-12-16 13:14:56.379 [INFO][4644] ipam/ipam.go 511: Trying affinity for 192.168.105.128/26 host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:56.595855 containerd[1515]: 2025-12-16 13:14:56.384 [INFO][4644] ipam/ipam.go 158: Attempting to load block cidr=192.168.105.128/26 host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:56.595855 containerd[1515]: 2025-12-16 13:14:56.391 [INFO][4644] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.105.128/26 host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:56.595855 containerd[1515]: 2025-12-16 13:14:56.392 [INFO][4644] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.105.128/26 handle="k8s-pod-network.1b562db7fdc656733a521a6ccc208a172664afeee1cfeab577e0c8d3a5e3f23e" host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:56.595855 containerd[1515]: 2025-12-16 13:14:56.401 [INFO][4644] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1b562db7fdc656733a521a6ccc208a172664afeee1cfeab577e0c8d3a5e3f23e Dec 16 13:14:56.595855 containerd[1515]: 2025-12-16 13:14:56.430 [INFO][4644] ipam/ipam.go 1246: Writing block in order to claim IPs 
block=192.168.105.128/26 handle="k8s-pod-network.1b562db7fdc656733a521a6ccc208a172664afeee1cfeab577e0c8d3a5e3f23e" host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:56.595855 containerd[1515]: 2025-12-16 13:14:56.467 [INFO][4644] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.105.136/26] block=192.168.105.128/26 handle="k8s-pod-network.1b562db7fdc656733a521a6ccc208a172664afeee1cfeab577e0c8d3a5e3f23e" host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:56.595855 containerd[1515]: 2025-12-16 13:14:56.469 [INFO][4644] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.105.136/26] handle="k8s-pod-network.1b562db7fdc656733a521a6ccc208a172664afeee1cfeab577e0c8d3a5e3f23e" host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:56.595855 containerd[1515]: 2025-12-16 13:14:56.469 [INFO][4644] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 13:14:56.595855 containerd[1515]: 2025-12-16 13:14:56.472 [INFO][4644] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.105.136/26] IPv6=[] ContainerID="1b562db7fdc656733a521a6ccc208a172664afeee1cfeab577e0c8d3a5e3f23e" HandleID="k8s-pod-network.1b562db7fdc656733a521a6ccc208a172664afeee1cfeab577e0c8d3a5e3f23e" Workload="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-calico--apiserver--569f6b4df6--2lfjk-eth0" Dec 16 13:14:56.598240 containerd[1515]: 2025-12-16 13:14:56.488 [INFO][4606] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1b562db7fdc656733a521a6ccc208a172664afeee1cfeab577e0c8d3a5e3f23e" Namespace="calico-apiserver" Pod="calico-apiserver-569f6b4df6-2lfjk" WorkloadEndpoint="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-calico--apiserver--569f6b4df6--2lfjk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-calico--apiserver--569f6b4df6--2lfjk-eth0", GenerateName:"calico-apiserver-569f6b4df6-", Namespace:"calico-apiserver", SelfLink:"", UID:"92bcad1c-8640-450a-974e-84155204fa1a", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 14, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"569f6b4df6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-569f6b4df6-2lfjk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.105.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif13af2ff506", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:14:56.598240 containerd[1515]: 2025-12-16 13:14:56.491 [INFO][4606] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.105.136/32] ContainerID="1b562db7fdc656733a521a6ccc208a172664afeee1cfeab577e0c8d3a5e3f23e" Namespace="calico-apiserver" Pod="calico-apiserver-569f6b4df6-2lfjk" WorkloadEndpoint="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-calico--apiserver--569f6b4df6--2lfjk-eth0" Dec 16 13:14:56.598240 containerd[1515]: 2025-12-16 13:14:56.491 [INFO][4606] 
cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif13af2ff506 ContainerID="1b562db7fdc656733a521a6ccc208a172664afeee1cfeab577e0c8d3a5e3f23e" Namespace="calico-apiserver" Pod="calico-apiserver-569f6b4df6-2lfjk" WorkloadEndpoint="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-calico--apiserver--569f6b4df6--2lfjk-eth0" Dec 16 13:14:56.598240 containerd[1515]: 2025-12-16 13:14:56.534 [INFO][4606] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1b562db7fdc656733a521a6ccc208a172664afeee1cfeab577e0c8d3a5e3f23e" Namespace="calico-apiserver" Pod="calico-apiserver-569f6b4df6-2lfjk" WorkloadEndpoint="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-calico--apiserver--569f6b4df6--2lfjk-eth0" Dec 16 13:14:56.598240 containerd[1515]: 2025-12-16 13:14:56.537 [INFO][4606] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1b562db7fdc656733a521a6ccc208a172664afeee1cfeab577e0c8d3a5e3f23e" Namespace="calico-apiserver" Pod="calico-apiserver-569f6b4df6-2lfjk" WorkloadEndpoint="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-calico--apiserver--569f6b4df6--2lfjk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-calico--apiserver--569f6b4df6--2lfjk-eth0", GenerateName:"calico-apiserver-569f6b4df6-", Namespace:"calico-apiserver", SelfLink:"", UID:"92bcad1c-8640-450a-974e-84155204fa1a", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 14, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"569f6b4df6", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal", ContainerID:"1b562db7fdc656733a521a6ccc208a172664afeee1cfeab577e0c8d3a5e3f23e", Pod:"calico-apiserver-569f6b4df6-2lfjk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.105.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif13af2ff506", MAC:"56:5a:a9:c3:0e:a2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:14:56.598240 containerd[1515]: 2025-12-16 13:14:56.583 [INFO][4606] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1b562db7fdc656733a521a6ccc208a172664afeee1cfeab577e0c8d3a5e3f23e" Namespace="calico-apiserver" Pod="calico-apiserver-569f6b4df6-2lfjk" WorkloadEndpoint="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-calico--apiserver--569f6b4df6--2lfjk-eth0" Dec 16 13:14:56.693052 containerd[1515]: time="2025-12-16T13:14:56.692989655Z" level=info msg="connecting to shim 1b562db7fdc656733a521a6ccc208a172664afeee1cfeab577e0c8d3a5e3f23e" address="unix:///run/containerd/s/b10e22a2d08e80a84df1c0b5986a0859fc51d609298957eefbb265d19141d5c0" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:14:56.772381 systemd[1]: Started cri-containerd-1b562db7fdc656733a521a6ccc208a172664afeee1cfeab577e0c8d3a5e3f23e.scope - libcontainer container 1b562db7fdc656733a521a6ccc208a172664afeee1cfeab577e0c8d3a5e3f23e. 
Dec 16 13:14:56.787359 systemd-networkd[1412]: cali3f092656282: Link UP Dec 16 13:14:56.791191 systemd-networkd[1412]: cali3f092656282: Gained carrier Dec 16 13:14:56.828235 containerd[1515]: 2025-12-16 13:14:56.168 [INFO][4595] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-goldmane--7c778bb748--sjcz4-eth0 goldmane-7c778bb748- calico-system c5258db0-e1f3-4670-a2ed-9cdfef209601 828 0 2025-12-16 13:14:28 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal goldmane-7c778bb748-sjcz4 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali3f092656282 [] [] }} ContainerID="2ccc7686bbd18324da272094dd6616327ff128ba057c7ae7cc4498f7fdcf0d24" Namespace="calico-system" Pod="goldmane-7c778bb748-sjcz4" WorkloadEndpoint="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-goldmane--7c778bb748--sjcz4-" Dec 16 13:14:56.828235 containerd[1515]: 2025-12-16 13:14:56.169 [INFO][4595] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2ccc7686bbd18324da272094dd6616327ff128ba057c7ae7cc4498f7fdcf0d24" Namespace="calico-system" Pod="goldmane-7c778bb748-sjcz4" WorkloadEndpoint="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-goldmane--7c778bb748--sjcz4-eth0" Dec 16 13:14:56.828235 containerd[1515]: 2025-12-16 13:14:56.243 [INFO][4653] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2ccc7686bbd18324da272094dd6616327ff128ba057c7ae7cc4498f7fdcf0d24" HandleID="k8s-pod-network.2ccc7686bbd18324da272094dd6616327ff128ba057c7ae7cc4498f7fdcf0d24" 
Workload="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-goldmane--7c778bb748--sjcz4-eth0" Dec 16 13:14:56.828235 containerd[1515]: 2025-12-16 13:14:56.244 [INFO][4653] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2ccc7686bbd18324da272094dd6616327ff128ba057c7ae7cc4498f7fdcf0d24" HandleID="k8s-pod-network.2ccc7686bbd18324da272094dd6616327ff128ba057c7ae7cc4498f7fdcf0d24" Workload="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-goldmane--7c778bb748--sjcz4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024fd20), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal", "pod":"goldmane-7c778bb748-sjcz4", "timestamp":"2025-12-16 13:14:56.243131825 +0000 UTC"}, Hostname:"ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:14:56.828235 containerd[1515]: 2025-12-16 13:14:56.244 [INFO][4653] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:14:56.828235 containerd[1515]: 2025-12-16 13:14:56.469 [INFO][4653] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 13:14:56.828235 containerd[1515]: 2025-12-16 13:14:56.473 [INFO][4653] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal' Dec 16 13:14:56.828235 containerd[1515]: 2025-12-16 13:14:56.588 [INFO][4653] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2ccc7686bbd18324da272094dd6616327ff128ba057c7ae7cc4498f7fdcf0d24" host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:56.828235 containerd[1515]: 2025-12-16 13:14:56.603 [INFO][4653] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:56.828235 containerd[1515]: 2025-12-16 13:14:56.631 [INFO][4653] ipam/ipam.go 511: Trying affinity for 192.168.105.128/26 host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:56.828235 containerd[1515]: 2025-12-16 13:14:56.661 [INFO][4653] ipam/ipam.go 158: Attempting to load block cidr=192.168.105.128/26 host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:56.828235 containerd[1515]: 2025-12-16 13:14:56.706 [INFO][4653] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.105.128/26 host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:56.828235 containerd[1515]: 2025-12-16 13:14:56.708 [INFO][4653] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.105.128/26 handle="k8s-pod-network.2ccc7686bbd18324da272094dd6616327ff128ba057c7ae7cc4498f7fdcf0d24" host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:56.828235 containerd[1515]: 2025-12-16 13:14:56.718 [INFO][4653] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2ccc7686bbd18324da272094dd6616327ff128ba057c7ae7cc4498f7fdcf0d24 Dec 16 13:14:56.828235 containerd[1515]: 2025-12-16 13:14:56.729 [INFO][4653] ipam/ipam.go 1246: Writing block in order to claim IPs 
block=192.168.105.128/26 handle="k8s-pod-network.2ccc7686bbd18324da272094dd6616327ff128ba057c7ae7cc4498f7fdcf0d24" host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:56.828235 containerd[1515]: 2025-12-16 13:14:56.745 [INFO][4653] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.105.137/26] block=192.168.105.128/26 handle="k8s-pod-network.2ccc7686bbd18324da272094dd6616327ff128ba057c7ae7cc4498f7fdcf0d24" host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:56.828235 containerd[1515]: 2025-12-16 13:14:56.745 [INFO][4653] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.105.137/26] handle="k8s-pod-network.2ccc7686bbd18324da272094dd6616327ff128ba057c7ae7cc4498f7fdcf0d24" host="ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal" Dec 16 13:14:56.828235 containerd[1515]: 2025-12-16 13:14:56.745 [INFO][4653] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 13:14:56.828235 containerd[1515]: 2025-12-16 13:14:56.746 [INFO][4653] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.105.137/26] IPv6=[] ContainerID="2ccc7686bbd18324da272094dd6616327ff128ba057c7ae7cc4498f7fdcf0d24" HandleID="k8s-pod-network.2ccc7686bbd18324da272094dd6616327ff128ba057c7ae7cc4498f7fdcf0d24" Workload="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-goldmane--7c778bb748--sjcz4-eth0" Dec 16 13:14:56.832497 containerd[1515]: 2025-12-16 13:14:56.752 [INFO][4595] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2ccc7686bbd18324da272094dd6616327ff128ba057c7ae7cc4498f7fdcf0d24" Namespace="calico-system" Pod="goldmane-7c778bb748-sjcz4" WorkloadEndpoint="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-goldmane--7c778bb748--sjcz4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-goldmane--7c778bb748--sjcz4-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"c5258db0-e1f3-4670-a2ed-9cdfef209601", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 14, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal", ContainerID:"", Pod:"goldmane-7c778bb748-sjcz4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.105.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3f092656282", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:14:56.832497 containerd[1515]: 2025-12-16 13:14:56.756 [INFO][4595] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.105.137/32] ContainerID="2ccc7686bbd18324da272094dd6616327ff128ba057c7ae7cc4498f7fdcf0d24" Namespace="calico-system" Pod="goldmane-7c778bb748-sjcz4" WorkloadEndpoint="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-goldmane--7c778bb748--sjcz4-eth0" Dec 16 13:14:56.832497 containerd[1515]: 2025-12-16 13:14:56.757 [INFO][4595] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3f092656282 
ContainerID="2ccc7686bbd18324da272094dd6616327ff128ba057c7ae7cc4498f7fdcf0d24" Namespace="calico-system" Pod="goldmane-7c778bb748-sjcz4" WorkloadEndpoint="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-goldmane--7c778bb748--sjcz4-eth0" Dec 16 13:14:56.832497 containerd[1515]: 2025-12-16 13:14:56.792 [INFO][4595] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2ccc7686bbd18324da272094dd6616327ff128ba057c7ae7cc4498f7fdcf0d24" Namespace="calico-system" Pod="goldmane-7c778bb748-sjcz4" WorkloadEndpoint="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-goldmane--7c778bb748--sjcz4-eth0" Dec 16 13:14:56.832497 containerd[1515]: 2025-12-16 13:14:56.798 [INFO][4595] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2ccc7686bbd18324da272094dd6616327ff128ba057c7ae7cc4498f7fdcf0d24" Namespace="calico-system" Pod="goldmane-7c778bb748-sjcz4" WorkloadEndpoint="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-goldmane--7c778bb748--sjcz4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-goldmane--7c778bb748--sjcz4-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"c5258db0-e1f3-4670-a2ed-9cdfef209601", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 14, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-c2f2d7bfbbd4b91afa1b.c.flatcar-212911.internal", ContainerID:"2ccc7686bbd18324da272094dd6616327ff128ba057c7ae7cc4498f7fdcf0d24", Pod:"goldmane-7c778bb748-sjcz4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.105.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3f092656282", MAC:"4e:26:0a:ab:ca:24", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:14:56.832497 containerd[1515]: 2025-12-16 13:14:56.821 [INFO][4595] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2ccc7686bbd18324da272094dd6616327ff128ba057c7ae7cc4498f7fdcf0d24" Namespace="calico-system" Pod="goldmane-7c778bb748-sjcz4" WorkloadEndpoint="ci--4459--2--2--c2f2d7bfbbd4b91afa1b.c.flatcar--212911.internal-k8s-goldmane--7c778bb748--sjcz4-eth0" Dec 16 13:14:56.897654 containerd[1515]: time="2025-12-16T13:14:56.895405030Z" level=info msg="connecting to shim 2ccc7686bbd18324da272094dd6616327ff128ba057c7ae7cc4498f7fdcf0d24" address="unix:///run/containerd/s/d93891697729a402c3ff3e5d8e63de85a8927dc1072146da03fd335f117cc168" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:14:56.989163 systemd[1]: Started cri-containerd-2ccc7686bbd18324da272094dd6616327ff128ba057c7ae7cc4498f7fdcf0d24.scope - libcontainer container 2ccc7686bbd18324da272094dd6616327ff128ba057c7ae7cc4498f7fdcf0d24. 
Dec 16 13:14:57.106832 containerd[1515]: time="2025-12-16T13:14:57.106770634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-sjcz4,Uid:c5258db0-e1f3-4670-a2ed-9cdfef209601,Namespace:calico-system,Attempt:0,} returns sandbox id \"2ccc7686bbd18324da272094dd6616327ff128ba057c7ae7cc4498f7fdcf0d24\"" Dec 16 13:14:57.111899 containerd[1515]: time="2025-12-16T13:14:57.111844641Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 13:14:57.177711 containerd[1515]: time="2025-12-16T13:14:57.177531618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c497476dd-q67hl,Uid:a987f064-9fb2-45ff-bff7-ad75d97d3211,Namespace:calico-system,Attempt:0,} returns sandbox id \"f243c64b45803223cb88df6ea800132f683dc787a2de9888b0f815cbea623179\"" Dec 16 13:14:57.218477 containerd[1515]: time="2025-12-16T13:14:57.218352750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-569f6b4df6-2lfjk,Uid:92bcad1c-8640-450a-974e-84155204fa1a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"1b562db7fdc656733a521a6ccc208a172664afeee1cfeab577e0c8d3a5e3f23e\"" Dec 16 13:14:57.296423 containerd[1515]: time="2025-12-16T13:14:57.296354096Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:14:57.297977 containerd[1515]: time="2025-12-16T13:14:57.297866422Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 13:14:57.298610 containerd[1515]: time="2025-12-16T13:14:57.297966336Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 16 13:14:57.299534 kubelet[2772]: E1216 13:14:57.299348 2772 log.go:32] "PullImage from 
image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:14:57.299764 kubelet[2772]: E1216 13:14:57.299510 2772 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:14:57.300706 kubelet[2772]: E1216 13:14:57.300570 2772 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-sjcz4_calico-system(c5258db0-e1f3-4670-a2ed-9cdfef209601): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 13:14:57.301087 containerd[1515]: time="2025-12-16T13:14:57.300675578Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 13:14:57.301434 kubelet[2772]: E1216 13:14:57.301222 2772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-sjcz4" podUID="c5258db0-e1f3-4670-a2ed-9cdfef209601" Dec 16 13:14:57.432007 kubelet[2772]: E1216 13:14:57.430760 2772 pod_workers.go:1324] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-sjcz4" podUID="c5258db0-e1f3-4670-a2ed-9cdfef209601" Dec 16 13:14:57.488550 containerd[1515]: time="2025-12-16T13:14:57.488482916Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:14:57.490350 containerd[1515]: time="2025-12-16T13:14:57.490256642Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 13:14:57.490727 containerd[1515]: time="2025-12-16T13:14:57.490285124Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 16 13:14:57.491982 kubelet[2772]: E1216 13:14:57.490623 2772 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:14:57.491982 kubelet[2772]: E1216 13:14:57.490680 2772 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:14:57.491982 kubelet[2772]: E1216 13:14:57.490977 2772 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-5c497476dd-q67hl_calico-system(a987f064-9fb2-45ff-bff7-ad75d97d3211): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 13:14:57.491982 kubelet[2772]: E1216 13:14:57.491500 2772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5c497476dd-q67hl" podUID="a987f064-9fb2-45ff-bff7-ad75d97d3211" Dec 16 13:14:57.493616 containerd[1515]: time="2025-12-16T13:14:57.491886202Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:14:57.666999 containerd[1515]: time="2025-12-16T13:14:57.666896544Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:14:57.668764 containerd[1515]: time="2025-12-16T13:14:57.668526918Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:14:57.668943 containerd[1515]: time="2025-12-16T13:14:57.668590189Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 13:14:57.669342 kubelet[2772]: E1216 13:14:57.669224 2772 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:14:57.669342 kubelet[2772]: E1216 13:14:57.669288 2772 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:14:57.670932 kubelet[2772]: E1216 13:14:57.669411 2772 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-569f6b4df6-2lfjk_calico-apiserver(92bcad1c-8640-450a-974e-84155204fa1a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:14:57.670932 kubelet[2772]: E1216 13:14:57.669471 2772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-569f6b4df6-2lfjk" podUID="92bcad1c-8640-450a-974e-84155204fa1a" Dec 16 13:14:57.995155 systemd-networkd[1412]: cali3f092656282: Gained IPv6LL Dec 16 13:14:58.379129 systemd-networkd[1412]: calie5973ec9868: Gained IPv6LL Dec 16 13:14:58.440993 kubelet[2772]: E1216 13:14:58.440526 2772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5c497476dd-q67hl" podUID="a987f064-9fb2-45ff-bff7-ad75d97d3211" Dec 16 13:14:58.441946 kubelet[2772]: E1216 13:14:58.441255 2772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-569f6b4df6-2lfjk" podUID="92bcad1c-8640-450a-974e-84155204fa1a" Dec 16 13:14:58.441946 kubelet[2772]: E1216 13:14:58.441448 2772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-sjcz4" podUID="c5258db0-e1f3-4670-a2ed-9cdfef209601" Dec 16 13:14:58.444166 systemd-networkd[1412]: calif13af2ff506: Gained IPv6LL Dec 16 13:15:01.291975 ntpd[1646]: Listen normally on 6 vxlan.calico 192.168.105.128:123 Dec 16 13:15:01.292111 ntpd[1646]: Listen normally on 7 caliada50a011f7 [fe80::ecee:eeff:feee:eeee%4]:123 Dec 16 13:15:01.292163 ntpd[1646]: Listen normally on 8 cali0d689860c77 [fe80::ecee:eeff:feee:eeee%5]:123 Dec 16 13:15:01.292213 ntpd[1646]: Listen normally on 9 vxlan.calico [fe80::64e7:34ff:fee3:238a%6]:123 Dec 16 13:15:01.292254 ntpd[1646]: Listen normally on 10 calibeff0fb078f [fe80::ecee:eeff:feee:eeee%9]:123 Dec 16 13:15:01.292296 ntpd[1646]: Listen normally on 11 calif03d4a8171e [fe80::ecee:eeff:feee:eeee%10]:123 Dec 16 13:15:01.292337 ntpd[1646]: Listen normally on 12 calicc228c22fe3 [fe80::ecee:eeff:feee:eeee%11]:123 Dec 16 13:15:01.292385 ntpd[1646]: Listen normally on 13 cali04eefb8633a [fe80::ecee:eeff:feee:eeee%12]:123 Dec 16 13:15:01.292425 ntpd[1646]: Listen normally on 14 calie5973ec9868 [fe80::ecee:eeff:feee:eeee%13]:123 Dec 16 13:15:01.292467 ntpd[1646]: Listen normally on 15 calif13af2ff506 [fe80::ecee:eeff:feee:eeee%14]:123 Dec 16 13:15:01.292505 ntpd[1646]: Listen normally on 16 cali3f092656282 [fe80::ecee:eeff:feee:eeee%15]:123
Dec 16 13:15:04.894759 containerd[1515]: time="2025-12-16T13:15:04.894132117Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 13:15:05.060586 containerd[1515]: time="2025-12-16T13:15:05.060343277Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:15:05.062295 containerd[1515]: time="2025-12-16T13:15:05.062225052Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 13:15:05.062646 containerd[1515]: time="2025-12-16T13:15:05.062264967Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 16 13:15:05.063119 kubelet[2772]: E1216 13:15:05.062984 2772 log.go:32] "PullImage from image service failed"
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:15:05.064346 kubelet[2772]: E1216 13:15:05.063666 2772 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:15:05.064346 kubelet[2772]: E1216 13:15:05.063884 2772 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-69dccbfc8-4ng2l_calico-system(4495a83f-d804-4460-ba96-c4bb7a122932): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 13:15:05.067559 containerd[1515]: time="2025-12-16T13:15:05.067127787Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 13:15:05.245209 containerd[1515]: time="2025-12-16T13:15:05.244617197Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:15:05.246321 containerd[1515]: time="2025-12-16T13:15:05.246101637Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 13:15:05.246321 containerd[1515]: 
time="2025-12-16T13:15:05.246225986Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 16 13:15:05.248140 kubelet[2772]: E1216 13:15:05.248081 2772 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:15:05.248256 kubelet[2772]: E1216 13:15:05.248151 2772 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:15:05.248330 kubelet[2772]: E1216 13:15:05.248264 2772 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-69dccbfc8-4ng2l_calico-system(4495a83f-d804-4460-ba96-c4bb7a122932): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 13:15:05.248381 kubelet[2772]: E1216 13:15:05.248333 2772 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to 
\"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69dccbfc8-4ng2l" podUID="4495a83f-d804-4460-ba96-c4bb7a122932" Dec 16 13:15:07.892402 containerd[1515]: time="2025-12-16T13:15:07.892323000Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 13:15:08.056304 containerd[1515]: time="2025-12-16T13:15:08.056240468Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:15:08.058027 containerd[1515]: time="2025-12-16T13:15:08.057965057Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 13:15:08.058152 containerd[1515]: time="2025-12-16T13:15:08.058092683Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 16 13:15:08.059936 kubelet[2772]: E1216 13:15:08.059156 2772 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:15:08.059936 kubelet[2772]: E1216 13:15:08.059224 2772 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:15:08.059936 kubelet[2772]: E1216 13:15:08.059321 2772 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-z5rm2_calico-system(a7531109-4d4b-4a8c-8928-f52ad25f6b55): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 13:15:08.062664 containerd[1515]: time="2025-12-16T13:15:08.062608866Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 13:15:08.224212 containerd[1515]: time="2025-12-16T13:15:08.224047653Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:15:08.225965 containerd[1515]: time="2025-12-16T13:15:08.225835818Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 13:15:08.226094 containerd[1515]: time="2025-12-16T13:15:08.226029402Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 16 13:15:08.226335 kubelet[2772]: E1216 13:15:08.226276 2772 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:15:08.226436 
kubelet[2772]: E1216 13:15:08.226364 2772 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:15:08.226937 kubelet[2772]: E1216 13:15:08.226483 2772 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-z5rm2_calico-system(a7531109-4d4b-4a8c-8928-f52ad25f6b55): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 13:15:08.226937 kubelet[2772]: E1216 13:15:08.226557 2772 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-z5rm2" podUID="a7531109-4d4b-4a8c-8928-f52ad25f6b55" Dec 16 13:15:09.892987 containerd[1515]: time="2025-12-16T13:15:09.892436912Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:15:10.057471 containerd[1515]: time="2025-12-16T13:15:10.056620825Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:15:10.058357 containerd[1515]: time="2025-12-16T13:15:10.058306267Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:15:10.058599 containerd[1515]: time="2025-12-16T13:15:10.058512818Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 13:15:10.059928 kubelet[2772]: E1216 13:15:10.059102 2772 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:15:10.059928 kubelet[2772]: E1216 13:15:10.059172 2772 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:15:10.059928 kubelet[2772]: E1216 13:15:10.059394 2772 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-864c65cd8b-nvfvd_calico-apiserver(f96341ec-7be6-4b46-b044-72e177b23759): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:15:10.059928 kubelet[2772]: E1216 13:15:10.059451 2772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-864c65cd8b-nvfvd" podUID="f96341ec-7be6-4b46-b044-72e177b23759" Dec 16 13:15:10.062571 containerd[1515]: time="2025-12-16T13:15:10.061459217Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:15:10.404589 containerd[1515]: time="2025-12-16T13:15:10.404527813Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:15:10.406231 containerd[1515]: time="2025-12-16T13:15:10.406070765Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:15:10.406231 containerd[1515]: time="2025-12-16T13:15:10.406134207Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 13:15:10.406770 kubelet[2772]: E1216 13:15:10.406650 2772 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 
13:15:10.407002 kubelet[2772]: E1216 13:15:10.406744 2772 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:15:10.407258 kubelet[2772]: E1216 13:15:10.407141 2772 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-864c65cd8b-5wmt7_calico-apiserver(b169b293-70bb-47ce-9436-682d8a8e825b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:15:10.407258 kubelet[2772]: E1216 13:15:10.407217 2772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-864c65cd8b-5wmt7" podUID="b169b293-70bb-47ce-9436-682d8a8e825b" Dec 16 13:15:11.248184 systemd[1]: Started sshd@7-10.128.0.75:22-139.178.68.195:58572.service - OpenSSH per-connection server daemon (139.178.68.195:58572). Dec 16 13:15:11.567743 sshd[4882]: Accepted publickey for core from 139.178.68.195 port 58572 ssh2: RSA SHA256:v7tIGVPgiyL/ANNw+AyFi/zSKm4wKHk/c+elSzrxSj8 Dec 16 13:15:11.569925 sshd-session[4882]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:15:11.578975 systemd-logind[1505]: New session 8 of user core. 
Dec 16 13:15:11.587364 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 16 13:15:11.896187 containerd[1515]: time="2025-12-16T13:15:11.893712576Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 13:15:11.964753 sshd[4885]: Connection closed by 139.178.68.195 port 58572 Dec 16 13:15:11.965270 sshd-session[4882]: pam_unix(sshd:session): session closed for user core Dec 16 13:15:11.976402 systemd[1]: sshd@7-10.128.0.75:22-139.178.68.195:58572.service: Deactivated successfully. Dec 16 13:15:11.976732 systemd-logind[1505]: Session 8 logged out. Waiting for processes to exit. Dec 16 13:15:11.984721 systemd[1]: session-8.scope: Deactivated successfully. Dec 16 13:15:11.994525 systemd-logind[1505]: Removed session 8. Dec 16 13:15:12.063949 containerd[1515]: time="2025-12-16T13:15:12.063638397Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:15:12.065417 containerd[1515]: time="2025-12-16T13:15:12.065354989Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 13:15:12.065997 containerd[1515]: time="2025-12-16T13:15:12.065585072Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 16 13:15:12.066229 kubelet[2772]: E1216 13:15:12.066176 2772 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 
13:15:12.067342 kubelet[2772]: E1216 13:15:12.066382 2772 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:15:12.067342 kubelet[2772]: E1216 13:15:12.066628 2772 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-5c497476dd-q67hl_calico-system(a987f064-9fb2-45ff-bff7-ad75d97d3211): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 13:15:12.067342 kubelet[2772]: E1216 13:15:12.066689 2772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5c497476dd-q67hl" podUID="a987f064-9fb2-45ff-bff7-ad75d97d3211" Dec 16 13:15:12.068305 containerd[1515]: time="2025-12-16T13:15:12.068161159Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:15:12.257026 containerd[1515]: time="2025-12-16T13:15:12.255779939Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:15:12.258744 containerd[1515]: time="2025-12-16T13:15:12.258597303Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:15:12.258744 containerd[1515]: time="2025-12-16T13:15:12.258627975Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 13:15:12.259952 kubelet[2772]: E1216 13:15:12.259242 2772 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:15:12.259952 kubelet[2772]: E1216 13:15:12.259332 2772 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:15:12.261013 kubelet[2772]: E1216 13:15:12.260961 2772 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-569f6b4df6-2lfjk_calico-apiserver(92bcad1c-8640-450a-974e-84155204fa1a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:15:12.261110 kubelet[2772]: E1216 13:15:12.261050 2772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-569f6b4df6-2lfjk" podUID="92bcad1c-8640-450a-974e-84155204fa1a" Dec 16 13:15:13.892665 containerd[1515]: time="2025-12-16T13:15:13.891162323Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 13:15:14.069334 containerd[1515]: time="2025-12-16T13:15:14.069269131Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:15:14.070927 containerd[1515]: time="2025-12-16T13:15:14.070817818Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 13:15:14.071068 containerd[1515]: time="2025-12-16T13:15:14.070830950Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 16 13:15:14.071499 kubelet[2772]: E1216 13:15:14.071408 2772 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:15:14.072587 kubelet[2772]: E1216 13:15:14.071513 2772 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:15:14.072587 kubelet[2772]: E1216 13:15:14.071721 2772 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-sjcz4_calico-system(c5258db0-e1f3-4670-a2ed-9cdfef209601): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 13:15:14.072587 kubelet[2772]: E1216 13:15:14.071772 2772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-sjcz4" podUID="c5258db0-e1f3-4670-a2ed-9cdfef209601" Dec 16 13:15:16.893750 kubelet[2772]: E1216 13:15:16.893525 2772 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69dccbfc8-4ng2l" podUID="4495a83f-d804-4460-ba96-c4bb7a122932" Dec 16 13:15:17.023281 systemd[1]: Started sshd@8-10.128.0.75:22-139.178.68.195:58586.service - OpenSSH per-connection server daemon (139.178.68.195:58586). Dec 16 13:15:17.359938 sshd[4905]: Accepted publickey for core from 139.178.68.195 port 58586 ssh2: RSA SHA256:v7tIGVPgiyL/ANNw+AyFi/zSKm4wKHk/c+elSzrxSj8 Dec 16 13:15:17.362956 sshd-session[4905]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:15:17.373655 systemd-logind[1505]: New session 9 of user core. Dec 16 13:15:17.382969 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 16 13:15:17.722299 sshd[4909]: Connection closed by 139.178.68.195 port 58586 Dec 16 13:15:17.723281 sshd-session[4905]: pam_unix(sshd:session): session closed for user core Dec 16 13:15:17.732973 systemd-logind[1505]: Session 9 logged out. Waiting for processes to exit. Dec 16 13:15:17.734708 systemd[1]: sshd@8-10.128.0.75:22-139.178.68.195:58586.service: Deactivated successfully. Dec 16 13:15:17.740813 systemd[1]: session-9.scope: Deactivated successfully. Dec 16 13:15:17.746928 systemd-logind[1505]: Removed session 9. 
Dec 16 13:15:19.896953 kubelet[2772]: E1216 13:15:19.896823 2772 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-z5rm2" podUID="a7531109-4d4b-4a8c-8928-f52ad25f6b55" Dec 16 13:15:22.782393 systemd[1]: Started sshd@9-10.128.0.75:22-139.178.68.195:50624.service - OpenSSH per-connection server daemon (139.178.68.195:50624). 
Dec 16 13:15:22.895281 kubelet[2772]: E1216 13:15:22.894433 2772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5c497476dd-q67hl" podUID="a987f064-9fb2-45ff-bff7-ad75d97d3211" Dec 16 13:15:22.898416 kubelet[2772]: E1216 13:15:22.898225 2772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-864c65cd8b-nvfvd" podUID="f96341ec-7be6-4b46-b044-72e177b23759" Dec 16 13:15:23.108322 sshd[4924]: Accepted publickey for core from 139.178.68.195 port 50624 ssh2: RSA SHA256:v7tIGVPgiyL/ANNw+AyFi/zSKm4wKHk/c+elSzrxSj8 Dec 16 13:15:23.110816 sshd-session[4924]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:15:23.120985 systemd-logind[1505]: New session 10 of user core. Dec 16 13:15:23.127144 systemd[1]: Started session-10.scope - Session 10 of User core. 
Dec 16 13:15:23.454474 sshd[4927]: Connection closed by 139.178.68.195 port 50624 Dec 16 13:15:23.456485 sshd-session[4924]: pam_unix(sshd:session): session closed for user core Dec 16 13:15:23.466067 systemd[1]: sshd@9-10.128.0.75:22-139.178.68.195:50624.service: Deactivated successfully. Dec 16 13:15:23.470478 systemd[1]: session-10.scope: Deactivated successfully. Dec 16 13:15:23.472232 systemd-logind[1505]: Session 10 logged out. Waiting for processes to exit. Dec 16 13:15:23.474898 systemd-logind[1505]: Removed session 10. Dec 16 13:15:23.518447 systemd[1]: Started sshd@10-10.128.0.75:22-139.178.68.195:50640.service - OpenSSH per-connection server daemon (139.178.68.195:50640). Dec 16 13:15:23.861175 sshd[4940]: Accepted publickey for core from 139.178.68.195 port 50640 ssh2: RSA SHA256:v7tIGVPgiyL/ANNw+AyFi/zSKm4wKHk/c+elSzrxSj8 Dec 16 13:15:23.862807 sshd-session[4940]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:15:23.870587 systemd-logind[1505]: New session 11 of user core. Dec 16 13:15:23.880271 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 16 13:15:24.272517 sshd[4943]: Connection closed by 139.178.68.195 port 50640 Dec 16 13:15:24.274243 sshd-session[4940]: pam_unix(sshd:session): session closed for user core Dec 16 13:15:24.285000 systemd[1]: sshd@10-10.128.0.75:22-139.178.68.195:50640.service: Deactivated successfully. Dec 16 13:15:24.290844 systemd-logind[1505]: Session 11 logged out. Waiting for processes to exit. Dec 16 13:15:24.292297 systemd[1]: session-11.scope: Deactivated successfully. Dec 16 13:15:24.300130 systemd-logind[1505]: Removed session 11. Dec 16 13:15:24.338785 systemd[1]: Started sshd@11-10.128.0.75:22-139.178.68.195:50652.service - OpenSSH per-connection server daemon (139.178.68.195:50652). 
Dec 16 13:15:24.684565 sshd[4953]: Accepted publickey for core from 139.178.68.195 port 50652 ssh2: RSA SHA256:v7tIGVPgiyL/ANNw+AyFi/zSKm4wKHk/c+elSzrxSj8 Dec 16 13:15:24.688115 sshd-session[4953]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:15:24.704281 systemd-logind[1505]: New session 12 of user core. Dec 16 13:15:24.709410 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 16 13:15:24.905375 kubelet[2772]: E1216 13:15:24.904146 2772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-864c65cd8b-5wmt7" podUID="b169b293-70bb-47ce-9436-682d8a8e825b" Dec 16 13:15:25.068158 sshd[4956]: Connection closed by 139.178.68.195 port 50652 Dec 16 13:15:25.069242 sshd-session[4953]: pam_unix(sshd:session): session closed for user core Dec 16 13:15:25.080858 systemd[1]: sshd@11-10.128.0.75:22-139.178.68.195:50652.service: Deactivated successfully. Dec 16 13:15:25.085619 systemd[1]: session-12.scope: Deactivated successfully. Dec 16 13:15:25.089583 systemd-logind[1505]: Session 12 logged out. Waiting for processes to exit. Dec 16 13:15:25.095426 systemd-logind[1505]: Removed session 12. 
Dec 16 13:15:25.890608 kubelet[2772]: E1216 13:15:25.890430 2772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-sjcz4" podUID="c5258db0-e1f3-4670-a2ed-9cdfef209601" Dec 16 13:15:26.891496 kubelet[2772]: E1216 13:15:26.891406 2772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-569f6b4df6-2lfjk" podUID="92bcad1c-8640-450a-974e-84155204fa1a" Dec 16 13:15:27.893693 containerd[1515]: time="2025-12-16T13:15:27.893257055Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 13:15:28.078940 containerd[1515]: time="2025-12-16T13:15:28.078756535Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:15:28.080736 containerd[1515]: time="2025-12-16T13:15:28.080554193Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 13:15:28.081185 containerd[1515]: 
time="2025-12-16T13:15:28.080717028Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 16 13:15:28.082501 kubelet[2772]: E1216 13:15:28.082432 2772 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:15:28.083126 kubelet[2772]: E1216 13:15:28.082545 2772 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:15:28.083216 kubelet[2772]: E1216 13:15:28.083076 2772 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-69dccbfc8-4ng2l_calico-system(4495a83f-d804-4460-ba96-c4bb7a122932): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 13:15:28.085099 containerd[1515]: time="2025-12-16T13:15:28.084923697Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 13:15:28.245375 containerd[1515]: time="2025-12-16T13:15:28.245176774Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:15:28.246989 containerd[1515]: time="2025-12-16T13:15:28.246928231Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 13:15:28.247155 containerd[1515]: time="2025-12-16T13:15:28.246932467Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 16 13:15:28.247436 kubelet[2772]: E1216 13:15:28.247372 2772 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:15:28.247529 kubelet[2772]: E1216 13:15:28.247451 2772 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:15:28.248679 kubelet[2772]: E1216 13:15:28.247568 2772 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-69dccbfc8-4ng2l_calico-system(4495a83f-d804-4460-ba96-c4bb7a122932): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 13:15:28.248679 kubelet[2772]: E1216 13:15:28.247684 2772 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound 
desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69dccbfc8-4ng2l" podUID="4495a83f-d804-4460-ba96-c4bb7a122932" Dec 16 13:15:30.122876 systemd[1]: Started sshd@12-10.128.0.75:22-139.178.68.195:50666.service - OpenSSH per-connection server daemon (139.178.68.195:50666). Dec 16 13:15:30.458050 sshd[4999]: Accepted publickey for core from 139.178.68.195 port 50666 ssh2: RSA SHA256:v7tIGVPgiyL/ANNw+AyFi/zSKm4wKHk/c+elSzrxSj8 Dec 16 13:15:30.460752 sshd-session[4999]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:15:30.473305 systemd-logind[1505]: New session 13 of user core. Dec 16 13:15:30.478136 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 16 13:15:30.815639 sshd[5002]: Connection closed by 139.178.68.195 port 50666 Dec 16 13:15:30.817403 sshd-session[4999]: pam_unix(sshd:session): session closed for user core Dec 16 13:15:30.832154 systemd[1]: sshd@12-10.128.0.75:22-139.178.68.195:50666.service: Deactivated successfully. Dec 16 13:15:30.838160 systemd[1]: session-13.scope: Deactivated successfully. Dec 16 13:15:30.840966 systemd-logind[1505]: Session 13 logged out. Waiting for processes to exit. Dec 16 13:15:30.844481 systemd-logind[1505]: Removed session 13. 
Dec 16 13:15:32.899289 containerd[1515]: time="2025-12-16T13:15:32.899238955Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 13:15:33.078583 containerd[1515]: time="2025-12-16T13:15:33.078341503Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:15:33.079924 containerd[1515]: time="2025-12-16T13:15:33.079801998Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 13:15:33.080134 containerd[1515]: time="2025-12-16T13:15:33.080079616Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 16 13:15:33.080653 kubelet[2772]: E1216 13:15:33.080529 2772 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:15:33.080653 kubelet[2772]: E1216 13:15:33.080616 2772 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:15:33.081933 kubelet[2772]: E1216 13:15:33.081877 2772 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-z5rm2_calico-system(a7531109-4d4b-4a8c-8928-f52ad25f6b55): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 13:15:33.084684 containerd[1515]: time="2025-12-16T13:15:33.084333909Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 13:15:33.252032 containerd[1515]: time="2025-12-16T13:15:33.251160497Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:15:33.253751 containerd[1515]: time="2025-12-16T13:15:33.253585257Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 13:15:33.253751 containerd[1515]: time="2025-12-16T13:15:33.253709062Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 16 13:15:33.254536 kubelet[2772]: E1216 13:15:33.254233 2772 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:15:33.254536 kubelet[2772]: E1216 13:15:33.254295 2772 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" 
Dec 16 13:15:33.255175 kubelet[2772]: E1216 13:15:33.254768 2772 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-z5rm2_calico-system(a7531109-4d4b-4a8c-8928-f52ad25f6b55): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 13:15:33.255503 kubelet[2772]: E1216 13:15:33.255450 2772 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-z5rm2" podUID="a7531109-4d4b-4a8c-8928-f52ad25f6b55" Dec 16 13:15:34.892504 containerd[1515]: time="2025-12-16T13:15:34.892141409Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:15:35.085060 containerd[1515]: time="2025-12-16T13:15:35.084969492Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:15:35.086780 containerd[1515]: time="2025-12-16T13:15:35.086688304Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:15:35.087165 containerd[1515]: time="2025-12-16T13:15:35.086743109Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 13:15:35.089199 kubelet[2772]: E1216 13:15:35.087431 2772 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:15:35.089199 kubelet[2772]: E1216 13:15:35.087490 2772 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:15:35.089199 kubelet[2772]: E1216 13:15:35.087597 2772 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-864c65cd8b-nvfvd_calico-apiserver(f96341ec-7be6-4b46-b044-72e177b23759): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:15:35.089199 kubelet[2772]: E1216 13:15:35.087663 2772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-864c65cd8b-nvfvd" podUID="f96341ec-7be6-4b46-b044-72e177b23759" Dec 16 13:15:35.871351 systemd[1]: Started sshd@13-10.128.0.75:22-139.178.68.195:38408.service - OpenSSH per-connection server daemon (139.178.68.195:38408). Dec 16 13:15:36.193866 sshd[5021]: Accepted publickey for core from 139.178.68.195 port 38408 ssh2: RSA SHA256:v7tIGVPgiyL/ANNw+AyFi/zSKm4wKHk/c+elSzrxSj8 Dec 16 13:15:36.197155 sshd-session[5021]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:15:36.206978 systemd-logind[1505]: New session 14 of user core. Dec 16 13:15:36.213155 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 16 13:15:36.533635 sshd[5026]: Connection closed by 139.178.68.195 port 38408 Dec 16 13:15:36.536622 sshd-session[5021]: pam_unix(sshd:session): session closed for user core Dec 16 13:15:36.548237 systemd[1]: sshd@13-10.128.0.75:22-139.178.68.195:38408.service: Deactivated successfully. Dec 16 13:15:36.555770 systemd[1]: session-14.scope: Deactivated successfully. Dec 16 13:15:36.557677 systemd-logind[1505]: Session 14 logged out. Waiting for processes to exit. Dec 16 13:15:36.561809 systemd-logind[1505]: Removed session 14. 
Dec 16 13:15:36.895686 containerd[1515]: time="2025-12-16T13:15:36.894875625Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Dec 16 13:15:37.077233 containerd[1515]: time="2025-12-16T13:15:37.076983608Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 13:15:37.078663 containerd[1515]: time="2025-12-16T13:15:37.078457999Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Dec 16 13:15:37.078663 containerd[1515]: time="2025-12-16T13:15:37.078501002Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Dec 16 13:15:37.078918 kubelet[2772]: E1216 13:15:37.078863 2772 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 16 13:15:37.080157 kubelet[2772]: E1216 13:15:37.080034 2772 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 16 13:15:37.080452 kubelet[2772]: E1216 13:15:37.080378 2772 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-864c65cd8b-5wmt7_calico-apiserver(b169b293-70bb-47ce-9436-682d8a8e825b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Dec 16 13:15:37.080639 kubelet[2772]: E1216 13:15:37.080436 2772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-864c65cd8b-5wmt7" podUID="b169b293-70bb-47ce-9436-682d8a8e825b"
Dec 16 13:15:37.892498 containerd[1515]: time="2025-12-16T13:15:37.892421929Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Dec 16 13:15:38.113362 containerd[1515]: time="2025-12-16T13:15:38.113289033Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 13:15:38.116739 containerd[1515]: time="2025-12-16T13:15:38.116546091Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Dec 16 13:15:38.116739 containerd[1515]: time="2025-12-16T13:15:38.116684937Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Dec 16 13:15:38.117156 kubelet[2772]: E1216 13:15:38.117068 2772 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Dec 16 13:15:38.117636 kubelet[2772]: E1216 13:15:38.117188 2772 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Dec 16 13:15:38.117636 kubelet[2772]: E1216 13:15:38.117431 2772 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-5c497476dd-q67hl_calico-system(a987f064-9fb2-45ff-bff7-ad75d97d3211): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Dec 16 13:15:38.118020 kubelet[2772]: E1216 13:15:38.117965 2772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5c497476dd-q67hl" podUID="a987f064-9fb2-45ff-bff7-ad75d97d3211"
Dec 16 13:15:38.894765 containerd[1515]: time="2025-12-16T13:15:38.894573374Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Dec 16 13:15:39.125887 containerd[1515]: time="2025-12-16T13:15:39.125640113Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 13:15:39.128182 containerd[1515]: time="2025-12-16T13:15:39.128108777Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Dec 16 13:15:39.128364 containerd[1515]: time="2025-12-16T13:15:39.128170811Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Dec 16 13:15:39.128616 kubelet[2772]: E1216 13:15:39.128499 2772 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 16 13:15:39.128616 kubelet[2772]: E1216 13:15:39.128592 2772 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 16 13:15:39.129677 kubelet[2772]: E1216 13:15:39.128706 2772 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-569f6b4df6-2lfjk_calico-apiserver(92bcad1c-8640-450a-974e-84155204fa1a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Dec 16 13:15:39.129677 kubelet[2772]: E1216 13:15:39.128762 2772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-569f6b4df6-2lfjk" podUID="92bcad1c-8640-450a-974e-84155204fa1a"
Dec 16 13:15:39.891424 kubelet[2772]: E1216 13:15:39.891285 2772 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69dccbfc8-4ng2l" podUID="4495a83f-d804-4460-ba96-c4bb7a122932"
Dec 16 13:15:40.897305 containerd[1515]: time="2025-12-16T13:15:40.897203596Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Dec 16 13:15:41.064816 containerd[1515]: time="2025-12-16T13:15:41.064753370Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 13:15:41.066459 containerd[1515]: time="2025-12-16T13:15:41.066398855Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Dec 16 13:15:41.066619 containerd[1515]: time="2025-12-16T13:15:41.066518148Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Dec 16 13:15:41.066964 kubelet[2772]: E1216 13:15:41.066899 2772 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Dec 16 13:15:41.067465 kubelet[2772]: E1216 13:15:41.066980 2772 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Dec 16 13:15:41.067465 kubelet[2772]: E1216 13:15:41.067086 2772 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-sjcz4_calico-system(c5258db0-e1f3-4670-a2ed-9cdfef209601): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Dec 16 13:15:41.067465 kubelet[2772]: E1216 13:15:41.067138 2772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-sjcz4" podUID="c5258db0-e1f3-4670-a2ed-9cdfef209601"
Dec 16 13:15:41.590294 systemd[1]: Started sshd@14-10.128.0.75:22-139.178.68.195:32920.service - OpenSSH per-connection server daemon (139.178.68.195:32920).
Dec 16 13:15:41.925051 sshd[5040]: Accepted publickey for core from 139.178.68.195 port 32920 ssh2: RSA SHA256:v7tIGVPgiyL/ANNw+AyFi/zSKm4wKHk/c+elSzrxSj8
Dec 16 13:15:41.926220 sshd-session[5040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:15:41.943011 systemd-logind[1505]: New session 15 of user core.
Dec 16 13:15:41.947097 systemd[1]: Started session-15.scope - Session 15 of User core.
Dec 16 13:15:42.276030 sshd[5043]: Connection closed by 139.178.68.195 port 32920
Dec 16 13:15:42.276062 sshd-session[5040]: pam_unix(sshd:session): session closed for user core
Dec 16 13:15:42.289576 systemd-logind[1505]: Session 15 logged out. Waiting for processes to exit.
Dec 16 13:15:42.291250 systemd[1]: sshd@14-10.128.0.75:22-139.178.68.195:32920.service: Deactivated successfully.
Dec 16 13:15:42.299593 systemd[1]: session-15.scope: Deactivated successfully.
Dec 16 13:15:42.306322 systemd-logind[1505]: Removed session 15.
Dec 16 13:15:44.894877 kubelet[2772]: E1216 13:15:44.894801 2772 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-z5rm2" podUID="a7531109-4d4b-4a8c-8928-f52ad25f6b55"
Dec 16 13:15:47.332301 systemd[1]: Started sshd@15-10.128.0.75:22-139.178.68.195:32924.service - OpenSSH per-connection server daemon (139.178.68.195:32924).
Dec 16 13:15:47.652811 sshd[5058]: Accepted publickey for core from 139.178.68.195 port 32924 ssh2: RSA SHA256:v7tIGVPgiyL/ANNw+AyFi/zSKm4wKHk/c+elSzrxSj8
Dec 16 13:15:47.656389 sshd-session[5058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:15:47.672078 systemd-logind[1505]: New session 16 of user core.
Dec 16 13:15:47.674153 systemd[1]: Started session-16.scope - Session 16 of User core.
Dec 16 13:15:48.008160 sshd[5061]: Connection closed by 139.178.68.195 port 32924
Dec 16 13:15:48.010628 sshd-session[5058]: pam_unix(sshd:session): session closed for user core
Dec 16 13:15:48.021197 systemd[1]: sshd@15-10.128.0.75:22-139.178.68.195:32924.service: Deactivated successfully.
Dec 16 13:15:48.026323 systemd[1]: session-16.scope: Deactivated successfully.
Dec 16 13:15:48.030404 systemd-logind[1505]: Session 16 logged out. Waiting for processes to exit.
Dec 16 13:15:48.033535 systemd-logind[1505]: Removed session 16.
Dec 16 13:15:48.070120 systemd[1]: Started sshd@16-10.128.0.75:22-139.178.68.195:32928.service - OpenSSH per-connection server daemon (139.178.68.195:32928).
Dec 16 13:15:48.388450 sshd[5073]: Accepted publickey for core from 139.178.68.195 port 32928 ssh2: RSA SHA256:v7tIGVPgiyL/ANNw+AyFi/zSKm4wKHk/c+elSzrxSj8
Dec 16 13:15:48.392946 sshd-session[5073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:15:48.404214 systemd-logind[1505]: New session 17 of user core.
Dec 16 13:15:48.412046 systemd[1]: Started session-17.scope - Session 17 of User core.
Dec 16 13:15:48.844358 sshd[5076]: Connection closed by 139.178.68.195 port 32928
Dec 16 13:15:48.846151 sshd-session[5073]: pam_unix(sshd:session): session closed for user core
Dec 16 13:15:48.853847 systemd-logind[1505]: Session 17 logged out. Waiting for processes to exit.
Dec 16 13:15:48.857580 systemd[1]: sshd@16-10.128.0.75:22-139.178.68.195:32928.service: Deactivated successfully.
Dec 16 13:15:48.862804 systemd[1]: session-17.scope: Deactivated successfully.
Dec 16 13:15:48.867452 systemd-logind[1505]: Removed session 17.
Dec 16 13:15:48.904226 systemd[1]: Started sshd@17-10.128.0.75:22-139.178.68.195:32942.service - OpenSSH per-connection server daemon (139.178.68.195:32942).
Dec 16 13:15:49.228849 sshd[5086]: Accepted publickey for core from 139.178.68.195 port 32942 ssh2: RSA SHA256:v7tIGVPgiyL/ANNw+AyFi/zSKm4wKHk/c+elSzrxSj8
Dec 16 13:15:49.232697 sshd-session[5086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:15:49.243283 systemd-logind[1505]: New session 18 of user core.
Dec 16 13:15:49.250165 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 16 13:15:49.893372 kubelet[2772]: E1216 13:15:49.893315 2772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-864c65cd8b-5wmt7" podUID="b169b293-70bb-47ce-9436-682d8a8e825b"
Dec 16 13:15:50.542023 sshd[5089]: Connection closed by 139.178.68.195 port 32942
Dec 16 13:15:50.542559 sshd-session[5086]: pam_unix(sshd:session): session closed for user core
Dec 16 13:15:50.552994 systemd[1]: sshd@17-10.128.0.75:22-139.178.68.195:32942.service: Deactivated successfully.
Dec 16 13:15:50.559561 systemd[1]: session-18.scope: Deactivated successfully.
Dec 16 13:15:50.561174 systemd-logind[1505]: Session 18 logged out. Waiting for processes to exit.
Dec 16 13:15:50.566187 systemd-logind[1505]: Removed session 18.
Dec 16 13:15:50.602037 systemd[1]: Started sshd@18-10.128.0.75:22-139.178.68.195:59968.service - OpenSSH per-connection server daemon (139.178.68.195:59968).
Dec 16 13:15:50.892580 kubelet[2772]: E1216 13:15:50.892356 2772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-864c65cd8b-nvfvd" podUID="f96341ec-7be6-4b46-b044-72e177b23759"
Dec 16 13:15:50.929322 sshd[5104]: Accepted publickey for core from 139.178.68.195 port 59968 ssh2: RSA SHA256:v7tIGVPgiyL/ANNw+AyFi/zSKm4wKHk/c+elSzrxSj8
Dec 16 13:15:50.931842 sshd-session[5104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:15:50.940460 systemd-logind[1505]: New session 19 of user core.
Dec 16 13:15:50.947427 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 16 13:15:51.504686 sshd[5107]: Connection closed by 139.178.68.195 port 59968
Dec 16 13:15:51.508236 sshd-session[5104]: pam_unix(sshd:session): session closed for user core
Dec 16 13:15:51.520172 systemd-logind[1505]: Session 19 logged out. Waiting for processes to exit.
Dec 16 13:15:51.521299 systemd[1]: sshd@18-10.128.0.75:22-139.178.68.195:59968.service: Deactivated successfully.
Dec 16 13:15:51.529862 systemd[1]: session-19.scope: Deactivated successfully.
Dec 16 13:15:51.537530 systemd-logind[1505]: Removed session 19.
Dec 16 13:15:51.569620 systemd[1]: Started sshd@19-10.128.0.75:22-139.178.68.195:59982.service - OpenSSH per-connection server daemon (139.178.68.195:59982).
Dec 16 13:15:51.890936 kubelet[2772]: E1216 13:15:51.890851 2772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-569f6b4df6-2lfjk" podUID="92bcad1c-8640-450a-974e-84155204fa1a"
Dec 16 13:15:51.911202 sshd[5118]: Accepted publickey for core from 139.178.68.195 port 59982 ssh2: RSA SHA256:v7tIGVPgiyL/ANNw+AyFi/zSKm4wKHk/c+elSzrxSj8
Dec 16 13:15:51.915159 sshd-session[5118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:15:51.926192 systemd-logind[1505]: New session 20 of user core.
Dec 16 13:15:51.935552 systemd[1]: Started session-20.scope - Session 20 of User core.
Dec 16 13:15:52.249028 sshd[5122]: Connection closed by 139.178.68.195 port 59982
Dec 16 13:15:52.250405 sshd-session[5118]: pam_unix(sshd:session): session closed for user core
Dec 16 13:15:52.263229 systemd[1]: sshd@19-10.128.0.75:22-139.178.68.195:59982.service: Deactivated successfully.
Dec 16 13:15:52.268615 systemd[1]: session-20.scope: Deactivated successfully.
Dec 16 13:15:52.271865 systemd-logind[1505]: Session 20 logged out. Waiting for processes to exit.
Dec 16 13:15:52.280126 systemd-logind[1505]: Removed session 20.
Dec 16 13:15:52.894357 kubelet[2772]: E1216 13:15:52.894254 2772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5c497476dd-q67hl" podUID="a987f064-9fb2-45ff-bff7-ad75d97d3211"
Dec 16 13:15:53.891888 kubelet[2772]: E1216 13:15:53.891750 2772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-sjcz4" podUID="c5258db0-e1f3-4670-a2ed-9cdfef209601"
Dec 16 13:15:54.894943 kubelet[2772]: E1216 13:15:54.894856 2772 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69dccbfc8-4ng2l" podUID="4495a83f-d804-4460-ba96-c4bb7a122932"
Dec 16 13:15:57.309321 systemd[1]: Started sshd@20-10.128.0.75:22-139.178.68.195:59984.service - OpenSSH per-connection server daemon (139.178.68.195:59984).
Dec 16 13:15:57.641515 sshd[5162]: Accepted publickey for core from 139.178.68.195 port 59984 ssh2: RSA SHA256:v7tIGVPgiyL/ANNw+AyFi/zSKm4wKHk/c+elSzrxSj8
Dec 16 13:15:57.644852 sshd-session[5162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:15:57.656101 systemd-logind[1505]: New session 21 of user core.
Dec 16 13:15:57.664881 systemd[1]: Started session-21.scope - Session 21 of User core.
Dec 16 13:15:58.037089 sshd[5165]: Connection closed by 139.178.68.195 port 59984
Dec 16 13:15:58.038129 sshd-session[5162]: pam_unix(sshd:session): session closed for user core
Dec 16 13:15:58.046586 systemd[1]: sshd@20-10.128.0.75:22-139.178.68.195:59984.service: Deactivated successfully.
Dec 16 13:15:58.052700 systemd[1]: session-21.scope: Deactivated successfully.
Dec 16 13:15:58.057371 systemd-logind[1505]: Session 21 logged out. Waiting for processes to exit.
Dec 16 13:15:58.059591 systemd-logind[1505]: Removed session 21.
Dec 16 13:15:59.892953 kubelet[2772]: E1216 13:15:59.892683 2772 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-z5rm2" podUID="a7531109-4d4b-4a8c-8928-f52ad25f6b55"
Dec 16 13:16:03.093276 systemd[1]: Started sshd@21-10.128.0.75:22-139.178.68.195:43580.service - OpenSSH per-connection server daemon (139.178.68.195:43580).
Dec 16 13:16:03.420936 sshd[5179]: Accepted publickey for core from 139.178.68.195 port 43580 ssh2: RSA SHA256:v7tIGVPgiyL/ANNw+AyFi/zSKm4wKHk/c+elSzrxSj8
Dec 16 13:16:03.422460 sshd-session[5179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:16:03.435265 systemd-logind[1505]: New session 22 of user core.
Dec 16 13:16:03.446152 systemd[1]: Started session-22.scope - Session 22 of User core.
Dec 16 13:16:03.758771 sshd[5182]: Connection closed by 139.178.68.195 port 43580
Dec 16 13:16:03.759500 sshd-session[5179]: pam_unix(sshd:session): session closed for user core
Dec 16 13:16:03.768310 systemd[1]: sshd@21-10.128.0.75:22-139.178.68.195:43580.service: Deactivated successfully.
Dec 16 13:16:03.773761 systemd[1]: session-22.scope: Deactivated successfully.
Dec 16 13:16:03.778428 systemd-logind[1505]: Session 22 logged out. Waiting for processes to exit.
Dec 16 13:16:03.782511 systemd-logind[1505]: Removed session 22.
Dec 16 13:16:03.891249 kubelet[2772]: E1216 13:16:03.890653 2772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-569f6b4df6-2lfjk" podUID="92bcad1c-8640-450a-974e-84155204fa1a"
Dec 16 13:16:04.897522 kubelet[2772]: E1216 13:16:04.897455 2772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-864c65cd8b-5wmt7" podUID="b169b293-70bb-47ce-9436-682d8a8e825b"
Dec 16 13:16:05.892315 kubelet[2772]: E1216 13:16:05.892229 2772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-864c65cd8b-nvfvd" podUID="f96341ec-7be6-4b46-b044-72e177b23759"
Dec 16 13:16:05.893262 kubelet[2772]: E1216 13:16:05.893110 2772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5c497476dd-q67hl" podUID="a987f064-9fb2-45ff-bff7-ad75d97d3211"
Dec 16 13:16:06.901405 kubelet[2772]: E1216 13:16:06.901135 2772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-sjcz4" podUID="c5258db0-e1f3-4670-a2ed-9cdfef209601"
Dec 16 13:16:06.903223 kubelet[2772]: E1216 13:16:06.903117 2772 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69dccbfc8-4ng2l" podUID="4495a83f-d804-4460-ba96-c4bb7a122932"
Dec 16 13:16:08.820380 systemd[1]: Started sshd@22-10.128.0.75:22-139.178.68.195:43594.service - OpenSSH per-connection server daemon (139.178.68.195:43594).
Dec 16 13:16:09.144895 sshd[5194]: Accepted publickey for core from 139.178.68.195 port 43594 ssh2: RSA SHA256:v7tIGVPgiyL/ANNw+AyFi/zSKm4wKHk/c+elSzrxSj8
Dec 16 13:16:09.147645 sshd-session[5194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:16:09.157975 systemd-logind[1505]: New session 23 of user core.
Dec 16 13:16:09.162327 systemd[1]: Started session-23.scope - Session 23 of User core.
Dec 16 13:16:09.497742 sshd[5199]: Connection closed by 139.178.68.195 port 43594
Dec 16 13:16:09.499204 sshd-session[5194]: pam_unix(sshd:session): session closed for user core
Dec 16 13:16:09.509201 systemd-logind[1505]: Session 23 logged out. Waiting for processes to exit.
Dec 16 13:16:09.510199 systemd[1]: sshd@22-10.128.0.75:22-139.178.68.195:43594.service: Deactivated successfully.
Dec 16 13:16:09.515682 systemd[1]: session-23.scope: Deactivated successfully.
Dec 16 13:16:09.518805 systemd-logind[1505]: Removed session 23.