Jan 23 01:05:24.930599 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Jan 22 22:22:03 -00 2026
Jan 23 01:05:24.930625 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=e8d7116310bea9a494780b8becdce41e7cc03ed509d8e2363e08981a47b3edc6
Jan 23 01:05:24.930638 kernel: BIOS-provided physical RAM map:
Jan 23 01:05:24.930646 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 23 01:05:24.930653 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Jan 23 01:05:24.930659 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000044fdfff] usable
Jan 23 01:05:24.930667 kernel: BIOS-e820: [mem 0x00000000044fe000-0x00000000048fdfff] reserved
Jan 23 01:05:24.930674 kernel: BIOS-e820: [mem 0x00000000048fe000-0x000000003ff1efff] usable
Jan 23 01:05:24.930680 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ffc8fff] reserved
Jan 23 01:05:24.930689 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Jan 23 01:05:24.930696 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Jan 23 01:05:24.930703 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Jan 23 01:05:24.930710 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Jan 23 01:05:24.930718 kernel: printk: legacy bootconsole [earlyser0] enabled
Jan 23 01:05:24.930727 kernel: NX (Execute Disable) protection: active
Jan 23 01:05:24.930735 kernel: APIC: Static calls initialized
Jan 23 01:05:24.930742 kernel: efi: EFI v2.7 by Microsoft
Jan 23 01:05:24.930750 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3eaa1018 RNG=0x3ffd2018
Jan 23 01:05:24.930757 kernel: random: crng init done
Jan 23 01:05:24.930765 kernel: secureboot: Secure boot disabled
Jan 23 01:05:24.930772 kernel: SMBIOS 3.1.0 present.
Jan 23 01:05:24.930780 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 07/25/2025
Jan 23 01:05:24.930787 kernel: DMI: Memory slots populated: 2/2
Jan 23 01:05:24.930795 kernel: Hypervisor detected: Microsoft Hyper-V
Jan 23 01:05:24.930802 kernel: Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0x9e4e24, misc 0xe0bed7b2
Jan 23 01:05:24.930809 kernel: Hyper-V: Nested features: 0x3e0101
Jan 23 01:05:24.930817 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Jan 23 01:05:24.930824 kernel: Hyper-V: Using hypercall for remote TLB flush
Jan 23 01:05:24.930831 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jan 23 01:05:24.930839 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jan 23 01:05:24.930847 kernel: tsc: Detected 2300.000 MHz processor
Jan 23 01:05:24.930854 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 23 01:05:24.930863 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 23 01:05:24.930871 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x10000000000
Jan 23 01:05:24.930878 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 23 01:05:24.930886 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 23 01:05:24.930894 kernel: e820: update [mem 0x48000000-0xffffffff] usable ==> reserved
Jan 23 01:05:24.930902 kernel: last_pfn = 0x40000 max_arch_pfn = 0x10000000000
Jan 23 01:05:24.930909 kernel: Using GB pages for direct mapping
Jan 23 01:05:24.930917 kernel: ACPI: Early table checksum verification disabled
Jan 23 01:05:24.930929 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Jan 23 01:05:24.930938 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 01:05:24.930947 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 01:05:24.930954 kernel: ACPI: DSDT 0x000000003FFD6000 01E22B (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Jan 23 01:05:24.930962 kernel: ACPI: FACS 0x000000003FFFE000 000040
Jan 23 01:05:24.930969 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 01:05:24.930977 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 01:05:24.930985 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 01:05:24.930993 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v05 HVLITE HVLITETB 00000000 MSHV 00000000)
Jan 23 01:05:24.931003 kernel: ACPI: SRAT 0x000000003FFD4000 0000A0 (v03 HVLITE HVLITETB 00000000 MSHV 00000000)
Jan 23 01:05:24.931011 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 01:05:24.931019 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Jan 23 01:05:24.931032 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff422a]
Jan 23 01:05:24.931040 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Jan 23 01:05:24.931048 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Jan 23 01:05:24.931068 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Jan 23 01:05:24.931077 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Jan 23 01:05:24.931085 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Jan 23 01:05:24.931094 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd409f]
Jan 23 01:05:24.931102 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Jan 23 01:05:24.931110 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Jan 23 01:05:24.931118 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff]
Jan 23 01:05:24.931125 kernel: NUMA: Node 0 [mem 0x00001000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00001000-0x2bfffffff]
Jan 23 01:05:24.931133 kernel: NODE_DATA(0) allocated [mem 0x2bfff8dc0-0x2bfffffff]
Jan 23 01:05:24.931141 kernel: Zone ranges:
Jan 23 01:05:24.931149 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 23 01:05:24.931157 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 23 01:05:24.931167 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Jan 23 01:05:24.931175 kernel: Device empty
Jan 23 01:05:24.931183 kernel: Movable zone start for each node
Jan 23 01:05:24.931190 kernel: Early memory node ranges
Jan 23 01:05:24.931198 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 23 01:05:24.931205 kernel: node 0: [mem 0x0000000000100000-0x00000000044fdfff]
Jan 23 01:05:24.931213 kernel: node 0: [mem 0x00000000048fe000-0x000000003ff1efff]
Jan 23 01:05:24.931221 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Jan 23 01:05:24.931229 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Jan 23 01:05:24.931238 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Jan 23 01:05:24.931246 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 23 01:05:24.931255 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 23 01:05:24.931262 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Jan 23 01:05:24.931270 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges
Jan 23 01:05:24.931277 kernel: ACPI: PM-Timer IO Port: 0x408
Jan 23 01:05:24.931285 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Jan 23 01:05:24.931293 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 23 01:05:24.931301 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 23 01:05:24.931311 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 23 01:05:24.931319 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Jan 23 01:05:24.931327 kernel: TSC deadline timer available
Jan 23 01:05:24.931334 kernel: CPU topo: Max. logical packages: 1
Jan 23 01:05:24.931342 kernel: CPU topo: Max. logical dies: 1
Jan 23 01:05:24.931349 kernel: CPU topo: Max. dies per package: 1
Jan 23 01:05:24.931357 kernel: CPU topo: Max. threads per core: 2
Jan 23 01:05:24.931365 kernel: CPU topo: Num. cores per package: 1
Jan 23 01:05:24.931373 kernel: CPU topo: Num. threads per package: 2
Jan 23 01:05:24.931381 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Jan 23 01:05:24.931390 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Jan 23 01:05:24.931399 kernel: Booting paravirtualized kernel on Hyper-V
Jan 23 01:05:24.931406 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 23 01:05:24.931414 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 23 01:05:24.931422 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Jan 23 01:05:24.931430 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Jan 23 01:05:24.931437 kernel: pcpu-alloc: [0] 0 1
Jan 23 01:05:24.931445 kernel: Hyper-V: PV spinlocks enabled
Jan 23 01:05:24.931453 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 23 01:05:24.931464 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=e8d7116310bea9a494780b8becdce41e7cc03ed509d8e2363e08981a47b3edc6
Jan 23 01:05:24.931473 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 23 01:05:24.931481 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 23 01:05:24.931488 kernel: Fallback order for Node 0: 0
Jan 23 01:05:24.931496 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2095807
Jan 23 01:05:24.931503 kernel: Policy zone: Normal
Jan 23 01:05:24.931511 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 23 01:05:24.931519 kernel: software IO TLB: area num 2.
Jan 23 01:05:24.931529 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 23 01:05:24.931537 kernel: ftrace: allocating 40097 entries in 157 pages
Jan 23 01:05:24.931545 kernel: ftrace: allocated 157 pages with 5 groups
Jan 23 01:05:24.931553 kernel: Dynamic Preempt: voluntary
Jan 23 01:05:24.931561 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 23 01:05:24.931569 kernel: rcu: RCU event tracing is enabled.
Jan 23 01:05:24.931583 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 23 01:05:24.931594 kernel: Trampoline variant of Tasks RCU enabled.
Jan 23 01:05:24.931602 kernel: Rude variant of Tasks RCU enabled.
Jan 23 01:05:24.931611 kernel: Tracing variant of Tasks RCU enabled.
Jan 23 01:05:24.931620 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 23 01:05:24.931630 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 23 01:05:24.931638 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 01:05:24.931647 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 01:05:24.931655 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 01:05:24.931663 kernel: Using NULL legacy PIC
Jan 23 01:05:24.931674 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Jan 23 01:05:24.931683 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 23 01:05:24.931691 kernel: Console: colour dummy device 80x25
Jan 23 01:05:24.931700 kernel: printk: legacy console [tty1] enabled
Jan 23 01:05:24.931709 kernel: printk: legacy console [ttyS0] enabled
Jan 23 01:05:24.931717 kernel: printk: legacy bootconsole [earlyser0] disabled
Jan 23 01:05:24.931725 kernel: ACPI: Core revision 20240827
Jan 23 01:05:24.931733 kernel: Failed to register legacy timer interrupt
Jan 23 01:05:24.931742 kernel: APIC: Switch to symmetric I/O mode setup
Jan 23 01:05:24.931752 kernel: x2apic enabled
Jan 23 01:05:24.931761 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 23 01:05:24.931770 kernel: Hyper-V: Host Build 10.0.26100.1448-1-0
Jan 23 01:05:24.931778 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jan 23 01:05:24.931786 kernel: Hyper-V: Disabling IBT because of Hyper-V bug
Jan 23 01:05:24.931795 kernel: Hyper-V: Using IPI hypercalls
Jan 23 01:05:24.931803 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Jan 23 01:05:24.931811 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Jan 23 01:05:24.931820 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Jan 23 01:05:24.931831 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Jan 23 01:05:24.931839 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Jan 23 01:05:24.931848 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Jan 23 01:05:24.931856 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212735223b2, max_idle_ns: 440795277976 ns
Jan 23 01:05:24.931865 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 4600.00 BogoMIPS (lpj=2300000)
Jan 23 01:05:24.931873 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 23 01:05:24.931881 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 23 01:05:24.931889 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 23 01:05:24.931897 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 23 01:05:24.931905 kernel: Spectre V2 : Mitigation: Retpolines
Jan 23 01:05:24.931916 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 23 01:05:24.931925 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jan 23 01:05:24.931933 kernel: RETBleed: Vulnerable
Jan 23 01:05:24.931942 kernel: Speculative Store Bypass: Vulnerable
Jan 23 01:05:24.931950 kernel: active return thunk: its_return_thunk
Jan 23 01:05:24.931958 kernel: ITS: Mitigation: Aligned branch/return thunks
Jan 23 01:05:24.931967 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 23 01:05:24.931974 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 23 01:05:24.931983 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 23 01:05:24.931992 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 23 01:05:24.932001 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 23 01:05:24.932010 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 23 01:05:24.932019 kernel: x86/fpu: Supporting XSAVE feature 0x800: 'Control-flow User registers'
Jan 23 01:05:24.932027 kernel: x86/fpu: Supporting XSAVE feature 0x20000: 'AMX Tile config'
Jan 23 01:05:24.932035 kernel: x86/fpu: Supporting XSAVE feature 0x40000: 'AMX Tile data'
Jan 23 01:05:24.932043 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 23 01:05:24.932051 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Jan 23 01:05:24.932069 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Jan 23 01:05:24.932078 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Jan 23 01:05:24.932086 kernel: x86/fpu: xstate_offset[11]: 2432, xstate_sizes[11]: 16
Jan 23 01:05:24.932095 kernel: x86/fpu: xstate_offset[17]: 2496, xstate_sizes[17]: 64
Jan 23 01:05:24.932104 kernel: x86/fpu: xstate_offset[18]: 2560, xstate_sizes[18]: 8192
Jan 23 01:05:24.932114 kernel: x86/fpu: Enabled xstate features 0x608e7, context size is 10752 bytes, using 'compacted' format.
Jan 23 01:05:24.932122 kernel: Freeing SMP alternatives memory: 32K
Jan 23 01:05:24.932130 kernel: pid_max: default: 32768 minimum: 301
Jan 23 01:05:24.932138 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 23 01:05:24.932146 kernel: landlock: Up and running.
Jan 23 01:05:24.932155 kernel: SELinux: Initializing.
Jan 23 01:05:24.932164 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 23 01:05:24.932172 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 23 01:05:24.932180 kernel: smpboot: CPU0: Intel INTEL(R) XEON(R) PLATINUM 8573C (family: 0x6, model: 0xcf, stepping: 0x2)
Jan 23 01:05:24.932189 kernel: Performance Events: unsupported p6 CPU model 207 no PMU driver, software events only.
Jan 23 01:05:24.932197 kernel: signal: max sigframe size: 11952
Jan 23 01:05:24.932207 kernel: rcu: Hierarchical SRCU implementation.
Jan 23 01:05:24.932215 kernel: rcu: Max phase no-delay instances is 400.
Jan 23 01:05:24.932224 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 23 01:05:24.932233 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 23 01:05:24.932242 kernel: smp: Bringing up secondary CPUs ...
Jan 23 01:05:24.932250 kernel: smpboot: x86: Booting SMP configuration:
Jan 23 01:05:24.932258 kernel: .... node #0, CPUs: #1
Jan 23 01:05:24.932267 kernel: smp: Brought up 1 node, 2 CPUs
Jan 23 01:05:24.932275 kernel: smpboot: Total of 2 processors activated (9200.00 BogoMIPS)
Jan 23 01:05:24.932284 kernel: Memory: 8068832K/8383228K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46196K init, 2564K bss, 308180K reserved, 0K cma-reserved)
Jan 23 01:05:24.932293 kernel: devtmpfs: initialized
Jan 23 01:05:24.932302 kernel: x86/mm: Memory block size: 128MB
Jan 23 01:05:24.932311 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Jan 23 01:05:24.932320 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 23 01:05:24.932329 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 23 01:05:24.932337 kernel: pinctrl core: initialized pinctrl subsystem
Jan 23 01:05:24.932345 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 23 01:05:24.932353 kernel: audit: initializing netlink subsys (disabled)
Jan 23 01:05:24.932362 kernel: audit: type=2000 audit(1769130321.090:1): state=initialized audit_enabled=0 res=1
Jan 23 01:05:24.932370 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 23 01:05:24.932378 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 23 01:05:24.932386 kernel: cpuidle: using governor menu
Jan 23 01:05:24.932394 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 23 01:05:24.932403 kernel: dca service started, version 1.12.1
Jan 23 01:05:24.932411 kernel: e820: reserve RAM buffer [mem 0x044fe000-0x07ffffff]
Jan 23 01:05:24.932420 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff]
Jan 23 01:05:24.932429 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 23 01:05:24.932438 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 23 01:05:24.932446 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 23 01:05:24.932454 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 23 01:05:24.932463 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 23 01:05:24.932472 kernel: ACPI: Added _OSI(Module Device)
Jan 23 01:05:24.932480 kernel: ACPI: Added _OSI(Processor Device)
Jan 23 01:05:24.932489 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 23 01:05:24.932498 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 23 01:05:24.932509 kernel: ACPI: Interpreter enabled
Jan 23 01:05:24.932517 kernel: ACPI: PM: (supports S0 S5)
Jan 23 01:05:24.932524 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 23 01:05:24.932534 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 23 01:05:24.932542 kernel: PCI: Ignoring E820 reservations for host bridge windows
Jan 23 01:05:24.932550 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Jan 23 01:05:24.932559 kernel: iommu: Default domain type: Translated
Jan 23 01:05:24.932568 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 23 01:05:24.932577 kernel: efivars: Registered efivars operations
Jan 23 01:05:24.932585 kernel: PCI: Using ACPI for IRQ routing
Jan 23 01:05:24.932593 kernel: PCI: System does not support PCI
Jan 23 01:05:24.932601 kernel: vgaarb: loaded
Jan 23 01:05:24.932610 kernel: clocksource: Switched to clocksource tsc-early
Jan 23 01:05:24.932618 kernel: VFS: Disk quotas dquot_6.6.0
Jan 23 01:05:24.932626 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 23 01:05:24.932635 kernel: pnp: PnP ACPI init
Jan 23 01:05:24.932643 kernel: pnp: PnP ACPI: found 3 devices
Jan 23 01:05:24.932652 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 23 01:05:24.932660 kernel: NET: Registered PF_INET protocol family
Jan 23 01:05:24.932669 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 23 01:05:24.932674 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 23 01:05:24.932680 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 23 01:05:24.932685 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 23 01:05:24.932690 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jan 23 01:05:24.932696 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 23 01:05:24.932701 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 23 01:05:24.932706 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 23 01:05:24.932713 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 23 01:05:24.932718 kernel: NET: Registered PF_XDP protocol family
Jan 23 01:05:24.932723 kernel: PCI: CLS 0 bytes, default 64
Jan 23 01:05:24.932728 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 23 01:05:24.932734 kernel: software IO TLB: mapped [mem 0x000000003a9b9000-0x000000003e9b9000] (64MB)
Jan 23 01:05:24.932739 kernel: RAPL PMU: API unit is 2^-32 Joules, 1 fixed counters, 10737418240 ms ovfl timer
Jan 23 01:05:24.932744 kernel: RAPL PMU: hw unit of domain psys 2^-0 Joules
Jan 23 01:05:24.932749 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212735223b2, max_idle_ns: 440795277976 ns
Jan 23 01:05:24.932755 kernel: clocksource: Switched to clocksource tsc
Jan 23 01:05:24.932761 kernel: Initialise system trusted keyrings
Jan 23 01:05:24.932766 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jan 23 01:05:24.932771 kernel: Key type asymmetric registered
Jan 23 01:05:24.932777 kernel: Asymmetric key parser 'x509' registered
Jan 23 01:05:24.932782 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 23 01:05:24.932787 kernel: io scheduler mq-deadline registered
Jan 23 01:05:24.932793 kernel: io scheduler kyber registered
Jan 23 01:05:24.932798 kernel: io scheduler bfq registered
Jan 23 01:05:24.932803 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 23 01:05:24.932809 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 23 01:05:24.932815 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 23 01:05:24.932820 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jan 23 01:05:24.932825 kernel: serial8250: ttyS2 at I/O 0x3e8 (irq = 4, base_baud = 115200) is a 16550A
Jan 23 01:05:24.932831 kernel: i8042: PNP: No PS/2 controller found.
Jan 23 01:05:24.932935 kernel: rtc_cmos 00:02: registered as rtc0
Jan 23 01:05:24.932989 kernel: rtc_cmos 00:02: setting system clock to 2026-01-23T01:05:24 UTC (1769130324)
Jan 23 01:05:24.933036 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Jan 23 01:05:24.933044 kernel: intel_pstate: Intel P-state driver initializing
Jan 23 01:05:24.933049 kernel: efifb: probing for efifb
Jan 23 01:05:24.933126 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jan 23 01:05:24.933135 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jan 23 01:05:24.933144 kernel: efifb: scrolling: redraw
Jan 23 01:05:24.933152 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 23 01:05:24.933161 kernel: Console: switching to colour frame buffer device 128x48
Jan 23 01:05:24.933169 kernel: fb0: EFI VGA frame buffer device
Jan 23 01:05:24.933178 kernel: pstore: Using crash dump compression: deflate
Jan 23 01:05:24.933188 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 23 01:05:24.933196 kernel: NET: Registered PF_INET6 protocol family
Jan 23 01:05:24.933205 kernel: Segment Routing with IPv6
Jan 23 01:05:24.933213 kernel: In-situ OAM (IOAM) with IPv6
Jan 23 01:05:24.933222 kernel: NET: Registered PF_PACKET protocol family
Jan 23 01:05:24.933230 kernel: Key type dns_resolver registered
Jan 23 01:05:24.933238 kernel: IPI shorthand broadcast: enabled
Jan 23 01:05:24.933247 kernel: sched_clock: Marking stable (2789004767, 92487090)->(3169058160, -287566303)
Jan 23 01:05:24.933256 kernel: registered taskstats version 1
Jan 23 01:05:24.933266 kernel: Loading compiled-in X.509 certificates
Jan 23 01:05:24.933274 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: ed54f39d0282729985c39b8ffa9938cacff38d8a'
Jan 23 01:05:24.933282 kernel: Demotion targets for Node 0: null
Jan 23 01:05:24.933290 kernel: Key type .fscrypt registered
Jan 23 01:05:24.933298 kernel: Key type fscrypt-provisioning registered
Jan 23 01:05:24.933307 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 23 01:05:24.933316 kernel: ima: Allocated hash algorithm: sha1
Jan 23 01:05:24.933324 kernel: ima: No architecture policies found
Jan 23 01:05:24.933332 kernel: clk: Disabling unused clocks
Jan 23 01:05:24.933342 kernel: Warning: unable to open an initial console.
Jan 23 01:05:24.933350 kernel: Freeing unused kernel image (initmem) memory: 46196K
Jan 23 01:05:24.933359 kernel: Write protecting the kernel read-only data: 40960k
Jan 23 01:05:24.933368 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Jan 23 01:05:24.933377 kernel: Run /init as init process
Jan 23 01:05:24.933386 kernel: with arguments:
Jan 23 01:05:24.933394 kernel: /init
Jan 23 01:05:24.933401 kernel: with environment:
Jan 23 01:05:24.933409 kernel: HOME=/
Jan 23 01:05:24.933418 kernel: TERM=linux
Jan 23 01:05:24.933429 systemd[1]: Successfully made /usr/ read-only.
Jan 23 01:05:24.933441 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 23 01:05:24.933451 systemd[1]: Detected virtualization microsoft.
Jan 23 01:05:24.933459 systemd[1]: Detected architecture x86-64.
Jan 23 01:05:24.933468 systemd[1]: Running in initrd.
Jan 23 01:05:24.933477 systemd[1]: No hostname configured, using default hostname.
Jan 23 01:05:24.933489 systemd[1]: Hostname set to .
Jan 23 01:05:24.933498 systemd[1]: Initializing machine ID from random generator.
Jan 23 01:05:24.933507 systemd[1]: Queued start job for default target initrd.target.
Jan 23 01:05:24.933516 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 01:05:24.933525 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 01:05:24.933535 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 23 01:05:24.933544 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 01:05:24.933553 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 23 01:05:24.933564 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 23 01:05:24.933575 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 23 01:05:24.933584 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 23 01:05:24.933593 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 01:05:24.933602 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 01:05:24.933611 systemd[1]: Reached target paths.target - Path Units.
Jan 23 01:05:24.933619 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 01:05:24.933630 systemd[1]: Reached target swap.target - Swaps.
Jan 23 01:05:24.933639 systemd[1]: Reached target timers.target - Timer Units.
Jan 23 01:05:24.933648 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 23 01:05:24.933657 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 23 01:05:24.933666 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 23 01:05:24.933675 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 23 01:05:24.933684 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 01:05:24.933694 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 23 01:05:24.933703 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 01:05:24.933713 systemd[1]: Reached target sockets.target - Socket Units.
Jan 23 01:05:24.933723 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 23 01:05:24.933732 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 23 01:05:24.933741 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 23 01:05:24.933750 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jan 23 01:05:24.933759 systemd[1]: Starting systemd-fsck-usr.service...
Jan 23 01:05:24.933768 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 23 01:05:24.933778 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 23 01:05:24.933797 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 01:05:24.933824 systemd-journald[186]: Collecting audit messages is disabled.
Jan 23 01:05:24.933851 systemd-journald[186]: Journal started
Jan 23 01:05:24.933874 systemd-journald[186]: Runtime Journal (/run/log/journal/3b477993b9f648a4961ba24c0fa82281) is 8M, max 158.6M, 150.6M free.
Jan 23 01:05:24.937071 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 23 01:05:24.938366 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 23 01:05:24.942569 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 01:05:24.949966 systemd[1]: Finished systemd-fsck-usr.service.
Jan 23 01:05:24.954005 systemd-modules-load[187]: Inserted module 'overlay'
Jan 23 01:05:24.956148 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 23 01:05:24.964166 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 01:05:24.972987 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 01:05:24.980219 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 23 01:05:24.991746 systemd-tmpfiles[200]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jan 23 01:05:24.994530 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 23 01:05:24.994966 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 01:05:24.999194 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 23 01:05:25.003898 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 01:05:25.012538 systemd-modules-load[187]: Inserted module 'br_netfilter' Jan 23 01:05:25.014649 kernel: Bridge firewalling registered Jan 23 01:05:25.014740 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 01:05:25.017354 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 01:05:25.022689 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 01:05:25.026498 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 23 01:05:25.030201 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 01:05:25.047889 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 01:05:25.052036 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 01:05:25.056803 dracut-cmdline[221]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=e8d7116310bea9a494780b8becdce41e7cc03ed509d8e2363e08981a47b3edc6 Jan 23 01:05:25.094119 systemd-resolved[233]: Positive Trust Anchors: Jan 23 01:05:25.094133 systemd-resolved[233]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 01:05:25.094170 systemd-resolved[233]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 01:05:25.115177 systemd-resolved[233]: Defaulting to hostname 'linux'. Jan 23 01:05:25.118183 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 01:05:25.123220 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 01:05:25.144074 kernel: SCSI subsystem initialized Jan 23 01:05:25.152072 kernel: Loading iSCSI transport class v2.0-870. Jan 23 01:05:25.161080 kernel: iscsi: registered transport (tcp) Jan 23 01:05:25.178374 kernel: iscsi: registered transport (qla4xxx) Jan 23 01:05:25.178412 kernel: QLogic iSCSI HBA Driver Jan 23 01:05:25.191301 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 01:05:25.216926 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 01:05:25.223634 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 01:05:25.251633 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 23 01:05:25.254167 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Jan 23 01:05:25.298071 kernel: raid6: avx512x4 gen() 43935 MB/s Jan 23 01:05:25.315068 kernel: raid6: avx512x2 gen() 43913 MB/s Jan 23 01:05:25.333066 kernel: raid6: avx512x1 gen() 28060 MB/s Jan 23 01:05:25.351067 kernel: raid6: avx2x4 gen() 38535 MB/s Jan 23 01:05:25.368067 kernel: raid6: avx2x2 gen() 37668 MB/s Jan 23 01:05:25.385701 kernel: raid6: avx2x1 gen() 31133 MB/s Jan 23 01:05:25.385775 kernel: raid6: using algorithm avx512x4 gen() 43935 MB/s Jan 23 01:05:25.405417 kernel: raid6: .... xor() 7412 MB/s, rmw enabled Jan 23 01:05:25.405435 kernel: raid6: using avx512x2 recovery algorithm Jan 23 01:05:25.422074 kernel: xor: automatically using best checksumming function avx Jan 23 01:05:25.539073 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 23 01:05:25.543640 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 23 01:05:25.547071 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 01:05:25.571333 systemd-udevd[435]: Using default interface naming scheme 'v255'. Jan 23 01:05:25.575648 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 01:05:25.582402 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 23 01:05:25.596588 dracut-pre-trigger[447]: rd.md=0: removing MD RAID activation Jan 23 01:05:25.613532 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 01:05:25.615179 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 01:05:25.643533 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 01:05:25.652173 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 23 01:05:25.707440 kernel: cryptd: max_cpu_qlen set to 1000 Jan 23 01:05:25.707476 kernel: hv_vmbus: Vmbus version:5.3 Jan 23 01:05:25.709833 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Jan 23 01:05:25.709974 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 01:05:25.713722 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 01:05:25.725278 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 01:05:25.731177 kernel: hv_vmbus: registering driver hyperv_keyboard Jan 23 01:05:25.734169 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 01:05:25.735752 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 01:05:25.740119 kernel: AES CTR mode by8 optimization enabled Jan 23 01:05:25.740137 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 23 01:05:25.742297 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 23 01:05:25.746544 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jan 23 01:05:25.755828 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 01:05:25.768136 kernel: hv_vmbus: registering driver hv_pci Jan 23 01:05:25.774200 kernel: PTP clock support registered Jan 23 01:05:25.782159 kernel: hv_vmbus: registering driver hv_storvsc Jan 23 01:05:25.782194 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI VMBus probing: Using version 0x10004 Jan 23 01:05:25.786940 kernel: hv_vmbus: registering driver hv_netvsc Jan 23 01:05:25.786975 kernel: scsi host0: storvsc_host_t Jan 23 01:05:25.791091 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI host bridge to bus c05b:00 Jan 23 01:05:25.791341 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Jan 23 01:05:25.798508 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 23 01:05:25.395633 kernel: hv_utils: Registering HyperV Utility Driver Jan 23 01:05:25.398120 kernel: hv_vmbus: registering driver hv_utils Jan 23 01:05:25.398502 kernel: hv_utils: Shutdown IC version 3.2 Jan 23 01:05:25.398511 kernel: hv_utils: Heartbeat IC version 3.0 Jan 23 01:05:25.398518 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 23 01:05:25.398526 kernel: hv_utils: TimeSync IC version 4.0 Jan 23 01:05:25.398535 systemd-journald[186]: Time jumped backwards, rotating. Jan 23 01:05:25.398569 kernel: pci_bus c05b:00: root bus resource [mem 0xfc0000000-0xfc007ffff window] Jan 23 01:05:25.393784 systemd-resolved[233]: Clock change detected. Flushing caches. Jan 23 01:05:25.410769 kernel: pci_bus c05b:00: No busn resource found for root bus, will use [bus 00-ff] Jan 23 01:05:25.410888 kernel: pci c05b:00:00.0: [1414:00a9] type 00 class 0x010802 PCIe Endpoint Jan 23 01:05:25.410907 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit] Jan 23 01:05:25.425908 kernel: hv_vmbus: registering driver hid_hyperv Jan 23 01:05:25.429719 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jan 23 01:05:25.429760 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jan 23 01:05:25.440767 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d46fb96 (unnamed net_device) (uninitialized): VF slot 1 added Jan 23 01:05:25.449138 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jan 23 01:05:25.449329 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 23 01:05:25.449344 kernel: pci_bus c05b:00: busn_res: [bus 00-ff] end is updated to 00 Jan 23 01:05:25.451362 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]: assigned Jan 23 01:05:25.457515 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jan 23 01:05:25.472535 kernel: nvme nvme0: pci function c05b:00:00.0 Jan 23 01:05:25.472710 kernel: nvme c05b:00:00.0: enabling device (0000 -> 0002) Jan 23 
01:05:25.476208 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#237 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 23 01:05:25.492196 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#210 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 23 01:05:25.630155 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jan 23 01:05:25.634143 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 23 01:05:26.233245 kernel: nvme nvme0: using unchecked data buffer Jan 23 01:05:26.467984 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI VMBus probing: Using version 0x10004 Jan 23 01:05:26.468203 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI host bridge to bus 7870:00 Jan 23 01:05:26.470733 kernel: pci_bus 7870:00: root bus resource [mem 0xfc2000000-0xfc4007fff window] Jan 23 01:05:26.472205 kernel: pci_bus 7870:00: No busn resource found for root bus, will use [bus 00-ff] Jan 23 01:05:26.477293 kernel: pci 7870:00:00.0: [1414:00ba] type 00 class 0x020000 PCIe Endpoint Jan 23 01:05:26.480204 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref] Jan 23 01:05:26.484194 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref] Jan 23 01:05:26.486293 kernel: pci 7870:00:00.0: enabling Extended Tags Jan 23 01:05:26.500170 kernel: pci_bus 7870:00: busn_res: [bus 00-ff] end is updated to 00 Jan 23 01:05:26.500352 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]: assigned Jan 23 01:05:26.505249 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]: assigned Jan 23 01:05:26.510502 kernel: mana 7870:00:00.0: enabling device (0000 -> 0002) Jan 23 01:05:26.517161 kernel: mana 7870:00:00.0: Microsoft Azure Network Adapter protocol version: 0.1.1 Jan 23 01:05:26.520572 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d46fb96 eth0: VF registering: eth1 Jan 23 01:05:26.520755 kernel: mana 7870:00:00.0 eth1: joined to eth0 Jan 23 01:05:26.525151 kernel: mana 7870:00:00.0 enP30832s1: renamed 
from eth1 Jan 23 01:05:26.602794 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Jan 23 01:05:26.661236 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - MSFT NVMe Accelerator v1.0 EFI-SYSTEM. Jan 23 01:05:26.702926 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - MSFT NVMe Accelerator v1.0 ROOT. Jan 23 01:05:26.746856 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - MSFT NVMe Accelerator v1.0 USR-A. Jan 23 01:05:26.750960 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - MSFT NVMe Accelerator v1.0 USR-A. Jan 23 01:05:26.755388 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 23 01:05:26.758943 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 01:05:26.760609 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 01:05:26.760637 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 01:05:26.761535 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 23 01:05:26.764238 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 23 01:05:26.787824 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 23 01:05:26.790834 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 23 01:05:26.798147 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 23 01:05:27.804230 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 23 01:05:27.805237 disk-uuid[654]: The operation has completed successfully. Jan 23 01:05:27.852569 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 23 01:05:27.852659 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 23 01:05:27.887294 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
Jan 23 01:05:27.904138 sh[694]: Success Jan 23 01:05:27.953269 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 23 01:05:27.953307 kernel: device-mapper: uevent: version 1.0.3 Jan 23 01:05:27.954644 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 23 01:05:27.964188 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Jan 23 01:05:28.486459 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 23 01:05:28.491242 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 23 01:05:28.506396 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 23 01:05:28.517144 kernel: BTRFS: device fsid f8eb2396-46b8-49a3-a8e7-cd8ad10a3ce4 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (707) Jan 23 01:05:28.519494 kernel: BTRFS info (device dm-0): first mount of filesystem f8eb2396-46b8-49a3-a8e7-cd8ad10a3ce4 Jan 23 01:05:28.519528 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 23 01:05:29.501456 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 23 01:05:29.501546 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 23 01:05:29.502620 kernel: BTRFS info (device dm-0): enabling free space tree Jan 23 01:05:29.585975 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 23 01:05:29.589636 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 23 01:05:29.589901 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 23 01:05:29.592257 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 23 01:05:29.594228 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jan 23 01:05:29.628283 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (739) Jan 23 01:05:29.628319 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01 Jan 23 01:05:29.630764 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 23 01:05:29.672025 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 01:05:29.681521 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 23 01:05:29.681545 kernel: BTRFS info (device nvme0n1p6): turning on async discard Jan 23 01:05:29.681556 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 23 01:05:29.681564 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01 Jan 23 01:05:29.682040 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 01:05:29.683791 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 23 01:05:29.696234 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 23 01:05:29.714036 systemd-networkd[874]: lo: Link UP Jan 23 01:05:29.714044 systemd-networkd[874]: lo: Gained carrier Jan 23 01:05:29.730593 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Jan 23 01:05:29.730815 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Jan 23 01:05:29.714921 systemd-networkd[874]: Enumeration completed Jan 23 01:05:29.715564 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 01:05:29.736872 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d46fb96 eth0: Data path switched to VF: enP30832s1 Jan 23 01:05:29.716986 systemd-networkd[874]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 23 01:05:29.716990 systemd-networkd[874]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 01:05:29.728160 systemd[1]: Reached target network.target - Network. Jan 23 01:05:29.735765 systemd-networkd[874]: enP30832s1: Link UP Jan 23 01:05:29.735857 systemd-networkd[874]: eth0: Link UP Jan 23 01:05:29.736252 systemd-networkd[874]: eth0: Gained carrier Jan 23 01:05:29.736263 systemd-networkd[874]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 01:05:29.739394 systemd-networkd[874]: enP30832s1: Gained carrier Jan 23 01:05:29.751173 systemd-networkd[874]: eth0: DHCPv4 address 10.200.8.21/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jan 23 01:05:31.249269 systemd-networkd[874]: eth0: Gained IPv6LL Jan 23 01:05:32.023736 ignition[877]: Ignition 2.22.0 Jan 23 01:05:32.023747 ignition[877]: Stage: fetch-offline Jan 23 01:05:32.025609 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 01:05:32.023859 ignition[877]: no configs at "/usr/lib/ignition/base.d" Jan 23 01:05:32.023865 ignition[877]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 01:05:32.033267 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 23 01:05:32.023953 ignition[877]: parsed url from cmdline: "" Jan 23 01:05:32.023956 ignition[877]: no config URL provided Jan 23 01:05:32.023961 ignition[877]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 01:05:32.023967 ignition[877]: no config at "/usr/lib/ignition/user.ign" Jan 23 01:05:32.023972 ignition[877]: failed to fetch config: resource requires networking Jan 23 01:05:32.024233 ignition[877]: Ignition finished successfully Jan 23 01:05:32.060520 ignition[886]: Ignition 2.22.0 Jan 23 01:05:32.060539 ignition[886]: Stage: fetch Jan 23 01:05:32.060746 ignition[886]: no configs at "/usr/lib/ignition/base.d" Jan 23 01:05:32.060754 ignition[886]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 01:05:32.060830 ignition[886]: parsed url from cmdline: "" Jan 23 01:05:32.060833 ignition[886]: no config URL provided Jan 23 01:05:32.060838 ignition[886]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 01:05:32.060844 ignition[886]: no config at "/usr/lib/ignition/user.ign" Jan 23 01:05:32.060865 ignition[886]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 23 01:05:32.128413 ignition[886]: GET result: OK Jan 23 01:05:32.128479 ignition[886]: config has been read from IMDS userdata Jan 23 01:05:32.129348 ignition[886]: parsing config with SHA512: fccddde7713f19cb5de540bff995201284108563ca25b075b33d39d5759c0623b24ba76275a7fc218a9d7eda1b67e73063ed44e37c0743b32fdfc1035e4cfb32 Jan 23 01:05:32.135164 unknown[886]: fetched base config from "system" Jan 23 01:05:32.135172 unknown[886]: fetched base config from "system" Jan 23 01:05:32.135456 ignition[886]: fetch: fetch complete Jan 23 01:05:32.135177 unknown[886]: fetched user config from "azure" Jan 23 01:05:32.135460 ignition[886]: fetch: fetch passed Jan 23 01:05:32.137882 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). 
Jan 23 01:05:32.135496 ignition[886]: Ignition finished successfully Jan 23 01:05:32.144117 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 23 01:05:32.174601 ignition[893]: Ignition 2.22.0 Jan 23 01:05:32.174617 ignition[893]: Stage: kargs Jan 23 01:05:32.175290 ignition[893]: no configs at "/usr/lib/ignition/base.d" Jan 23 01:05:32.178480 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 23 01:05:32.175301 ignition[893]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 01:05:32.176017 ignition[893]: kargs: kargs passed Jan 23 01:05:32.183631 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 23 01:05:32.176048 ignition[893]: Ignition finished successfully Jan 23 01:05:32.209219 ignition[900]: Ignition 2.22.0 Jan 23 01:05:32.209227 ignition[900]: Stage: disks Jan 23 01:05:32.209589 ignition[900]: no configs at "/usr/lib/ignition/base.d" Jan 23 01:05:32.209597 ignition[900]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 01:05:32.210307 ignition[900]: disks: disks passed Jan 23 01:05:32.214123 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 23 01:05:32.210335 ignition[900]: Ignition finished successfully Jan 23 01:05:32.217236 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 23 01:05:32.221693 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 23 01:05:32.223620 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 01:05:32.231177 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 01:05:32.233742 systemd[1]: Reached target basic.target - Basic System. Jan 23 01:05:32.236787 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Jan 23 01:05:32.306400 systemd-fsck[909]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks Jan 23 01:05:32.310694 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 23 01:05:32.316843 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 23 01:05:32.835143 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 2036722e-4586-420e-8dc7-a3b65e840c36 r/w with ordered data mode. Quota mode: none. Jan 23 01:05:32.835973 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 23 01:05:32.836706 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 23 01:05:32.869691 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 01:05:32.875208 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 23 01:05:32.876261 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 23 01:05:32.877357 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 23 01:05:32.877620 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 01:05:32.888687 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 23 01:05:32.893548 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 23 01:05:32.907257 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (918) Jan 23 01:05:32.907293 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01 Jan 23 01:05:32.908870 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 23 01:05:32.913771 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 23 01:05:32.913812 kernel: BTRFS info (device nvme0n1p6): turning on async discard Jan 23 01:05:32.915504 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 23 01:05:32.916567 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 23 01:05:33.955086 coreos-metadata[920]: Jan 23 01:05:33.955 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 23 01:05:33.958581 coreos-metadata[920]: Jan 23 01:05:33.958 INFO Fetch successful Jan 23 01:05:33.962189 coreos-metadata[920]: Jan 23 01:05:33.959 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jan 23 01:05:33.971726 coreos-metadata[920]: Jan 23 01:05:33.971 INFO Fetch successful Jan 23 01:05:34.000541 coreos-metadata[920]: Jan 23 01:05:34.000 INFO wrote hostname ci-4459.2.2-n-059e17308a to /sysroot/etc/hostname Jan 23 01:05:34.003685 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 23 01:05:34.329395 initrd-setup-root[948]: cut: /sysroot/etc/passwd: No such file or directory Jan 23 01:05:34.436939 initrd-setup-root[955]: cut: /sysroot/etc/group: No such file or directory Jan 23 01:05:34.522813 initrd-setup-root[962]: cut: /sysroot/etc/shadow: No such file or directory Jan 23 01:05:34.526399 initrd-setup-root[969]: cut: /sysroot/etc/gshadow: No such file or directory Jan 23 01:05:35.906722 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
Jan 23 01:05:35.910224 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 23 01:05:35.917246 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 23 01:05:35.925374 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 23 01:05:35.928521 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01 Jan 23 01:05:35.950460 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 23 01:05:35.960109 ignition[1036]: INFO : Ignition 2.22.0 Jan 23 01:05:35.960109 ignition[1036]: INFO : Stage: mount Jan 23 01:05:35.966189 ignition[1036]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 01:05:35.966189 ignition[1036]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 01:05:35.966189 ignition[1036]: INFO : mount: mount passed Jan 23 01:05:35.966189 ignition[1036]: INFO : Ignition finished successfully Jan 23 01:05:35.963079 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 23 01:05:35.966471 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 23 01:05:35.984773 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 01:05:36.007250 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1049) Jan 23 01:05:36.007282 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01 Jan 23 01:05:36.010205 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 23 01:05:36.014617 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 23 01:05:36.014647 kernel: BTRFS info (device nvme0n1p6): turning on async discard Jan 23 01:05:36.015839 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 23 01:05:36.017600 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 23 01:05:36.043406 ignition[1066]: INFO : Ignition 2.22.0 Jan 23 01:05:36.043406 ignition[1066]: INFO : Stage: files Jan 23 01:05:36.045533 ignition[1066]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 01:05:36.045533 ignition[1066]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 01:05:36.045533 ignition[1066]: DEBUG : files: compiled without relabeling support, skipping Jan 23 01:05:36.055187 ignition[1066]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 23 01:05:36.055187 ignition[1066]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 23 01:05:36.157737 ignition[1066]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 23 01:05:36.159609 ignition[1066]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 23 01:05:36.159609 ignition[1066]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 23 01:05:36.157979 unknown[1066]: wrote ssh authorized keys file for user: core Jan 23 01:05:36.198992 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 23 01:05:36.201381 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jan 23 01:05:36.262278 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 23 01:05:36.299400 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 23 01:05:36.304231 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 23 01:05:36.304231 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" 
Jan 23 01:05:36.304231 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 23 01:05:36.304231 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 23 01:05:36.304231 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 01:05:36.304231 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 01:05:36.304231 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 01:05:36.304231 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 01:05:36.327521 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 01:05:36.327521 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 01:05:36.327521 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 23 01:05:36.327521 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 23 01:05:36.327521 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 23 01:05:36.327521 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Jan 23 01:05:36.866837 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 23 01:05:38.181762 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 23 01:05:38.181762 ignition[1066]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 23 01:05:38.231055 ignition[1066]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 01:05:38.236875 ignition[1066]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 01:05:38.236875 ignition[1066]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 23 01:05:38.242841 ignition[1066]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 23 01:05:38.242841 ignition[1066]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 23 01:05:38.242841 ignition[1066]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 23 01:05:38.242841 ignition[1066]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 23 01:05:38.242841 ignition[1066]: INFO : files: files passed Jan 23 01:05:38.242841 ignition[1066]: INFO : Ignition finished successfully Jan 23 01:05:38.246001 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 23 01:05:38.255024 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 23 01:05:38.265193 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Jan 23 01:05:38.270721 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 23 01:05:38.275310 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 23 01:05:38.284008 initrd-setup-root-after-ignition[1096]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 01:05:38.284008 initrd-setup-root-after-ignition[1096]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 23 01:05:38.290029 initrd-setup-root-after-ignition[1100]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 01:05:38.292676 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 01:05:38.293820 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 23 01:05:38.296191 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 23 01:05:38.320350 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 23 01:05:38.320429 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 23 01:05:38.326425 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 23 01:05:38.328091 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 23 01:05:38.331647 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 23 01:05:38.333832 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 23 01:05:38.352988 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 01:05:38.355233 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 23 01:05:38.370648 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 23 01:05:38.374325 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Jan 23 01:05:38.380879 systemd[1]: Stopped target timers.target - Timer Units. Jan 23 01:05:38.383300 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 23 01:05:38.383627 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 01:05:38.389628 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 23 01:05:38.392737 systemd[1]: Stopped target basic.target - Basic System. Jan 23 01:05:38.395216 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 23 01:05:38.396615 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 01:05:38.403856 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 23 01:05:38.407101 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 23 01:05:38.411624 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 23 01:05:38.414245 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 01:05:38.417975 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 23 01:05:38.421318 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 23 01:05:38.425267 systemd[1]: Stopped target swap.target - Swaps. Jan 23 01:05:38.429248 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 23 01:05:38.429370 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 23 01:05:38.430097 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 23 01:05:38.432880 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 01:05:38.434369 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 23 01:05:38.434568 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 01:05:38.437049 systemd[1]: dracut-initqueue.service: Deactivated successfully. 
Jan 23 01:05:38.437158 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 23 01:05:38.453239 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 23 01:05:38.454183 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 01:05:38.457952 systemd[1]: ignition-files.service: Deactivated successfully. Jan 23 01:05:38.458713 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 23 01:05:38.465275 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 23 01:05:38.465376 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 23 01:05:38.472326 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 23 01:05:38.472548 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 23 01:05:38.472668 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 01:05:38.474294 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 23 01:05:38.474582 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 23 01:05:38.474698 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 01:05:38.476835 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 23 01:05:38.476953 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 01:05:38.486031 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 23 01:05:38.497223 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Jan 23 01:05:38.518609 ignition[1120]: INFO : Ignition 2.22.0 Jan 23 01:05:38.520286 ignition[1120]: INFO : Stage: umount Jan 23 01:05:38.520286 ignition[1120]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 01:05:38.520286 ignition[1120]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 01:05:38.531324 ignition[1120]: INFO : umount: umount passed Jan 23 01:05:38.531324 ignition[1120]: INFO : Ignition finished successfully Jan 23 01:05:38.522363 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 23 01:05:38.522448 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 23 01:05:38.534791 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 23 01:05:38.535122 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 23 01:05:38.535222 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 23 01:05:38.539536 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 23 01:05:38.539617 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 23 01:05:38.540007 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 23 01:05:38.540036 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 23 01:05:38.540255 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 23 01:05:38.540281 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 23 01:05:38.540498 systemd[1]: Stopped target network.target - Network. Jan 23 01:05:38.540526 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 23 01:05:38.540555 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 01:05:38.540770 systemd[1]: Stopped target paths.target - Path Units. Jan 23 01:05:38.540791 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 23 01:05:38.542584 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 23 01:05:38.542613 systemd[1]: Stopped target slices.target - Slice Units. Jan 23 01:05:38.542631 systemd[1]: Stopped target sockets.target - Socket Units. Jan 23 01:05:38.542855 systemd[1]: iscsid.socket: Deactivated successfully. Jan 23 01:05:38.542884 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 01:05:38.543105 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 23 01:05:38.543135 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 01:05:38.543357 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 23 01:05:38.543388 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 23 01:05:38.543619 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 23 01:05:38.543645 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 23 01:05:38.543672 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 23 01:05:38.543699 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 23 01:05:38.543968 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 23 01:05:38.544244 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 23 01:05:38.548651 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 23 01:05:38.548748 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 23 01:05:38.603502 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jan 23 01:05:38.604750 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 23 01:05:38.604841 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 23 01:05:38.612497 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jan 23 01:05:38.612841 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 23 01:05:38.615964 systemd[1]: systemd-networkd.socket: Deactivated successfully. 
Jan 23 01:05:38.616595 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 23 01:05:38.621466 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 23 01:05:38.630558 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 23 01:05:38.630615 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 01:05:38.635195 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 01:05:38.635236 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 01:05:38.641393 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 23 01:05:38.641436 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 23 01:05:38.645710 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 23 01:05:38.645801 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 01:05:38.650300 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 01:05:38.654533 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 23 01:05:38.654586 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jan 23 01:05:38.666454 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d46fb96 eth0: Data path switched from VF: enP30832s1 Jan 23 01:05:38.666663 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Jan 23 01:05:38.668505 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 23 01:05:38.668599 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 23 01:05:38.673239 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 23 01:05:38.674489 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jan 23 01:05:38.676561 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 23 01:05:38.676595 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 23 01:05:38.681209 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 23 01:05:38.681238 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 01:05:38.685185 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 23 01:05:38.685224 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 23 01:05:38.689434 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 23 01:05:38.689472 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 23 01:05:38.694223 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 23 01:05:38.694260 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 01:05:38.699788 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 23 01:05:38.703156 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 23 01:05:38.703209 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 01:05:38.708501 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 23 01:05:38.708547 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 01:05:38.723266 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 01:05:38.723311 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 01:05:38.730021 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jan 23 01:05:38.730060 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. 
Jan 23 01:05:38.730084 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 23 01:05:38.739240 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 23 01:05:38.739310 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 23 01:05:38.744332 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 23 01:05:38.745026 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 23 01:05:38.758425 systemd[1]: Switching root. Jan 23 01:05:38.803236 systemd-journald[186]: Journal stopped Jan 23 01:05:44.313362 systemd-journald[186]: Received SIGTERM from PID 1 (systemd). Jan 23 01:05:44.313398 kernel: SELinux: policy capability network_peer_controls=1 Jan 23 01:05:44.313414 kernel: SELinux: policy capability open_perms=1 Jan 23 01:05:44.313424 kernel: SELinux: policy capability extended_socket_class=1 Jan 23 01:05:44.313432 kernel: SELinux: policy capability always_check_network=0 Jan 23 01:05:44.313441 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 23 01:05:44.313451 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 23 01:05:44.313460 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 23 01:05:44.313470 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 23 01:05:44.313480 kernel: SELinux: policy capability userspace_initial_context=0 Jan 23 01:05:44.313489 kernel: audit: type=1403 audit(1769130339.300:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 23 01:05:44.313500 systemd[1]: Successfully loaded SELinux policy in 78.169ms. Jan 23 01:05:44.313510 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.088ms. 
Jan 23 01:05:44.313527 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 23 01:05:44.313538 systemd[1]: Detected virtualization microsoft. Jan 23 01:05:44.313547 systemd[1]: Detected architecture x86-64. Jan 23 01:05:44.313556 systemd[1]: Detected first boot. Jan 23 01:05:44.313565 systemd[1]: Hostname set to . Jan 23 01:05:44.313575 systemd[1]: Initializing machine ID from random generator. Jan 23 01:05:44.313583 zram_generator::config[1162]: No configuration found. Jan 23 01:05:44.313596 kernel: Guest personality initialized and is inactive Jan 23 01:05:44.313604 kernel: VMCI host device registered (name=vmci, major=10, minor=259) Jan 23 01:05:44.313612 kernel: Initialized host personality Jan 23 01:05:44.313620 kernel: NET: Registered PF_VSOCK protocol family Jan 23 01:05:44.313629 systemd[1]: Populated /etc with preset unit settings. Jan 23 01:05:44.313640 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jan 23 01:05:44.313650 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 23 01:05:44.313660 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 23 01:05:44.313672 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 23 01:05:44.313680 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 23 01:05:44.313690 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 23 01:05:44.313698 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 23 01:05:44.313706 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. 
Jan 23 01:05:44.313715 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 23 01:05:44.313724 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 23 01:05:44.313734 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 23 01:05:44.313743 systemd[1]: Created slice user.slice - User and Session Slice. Jan 23 01:05:44.313751 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 01:05:44.313759 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 01:05:44.313768 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 23 01:05:44.313780 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 23 01:05:44.313790 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 23 01:05:44.313800 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 01:05:44.313810 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 23 01:05:44.313818 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 01:05:44.313831 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 01:05:44.313839 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 23 01:05:44.313848 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 23 01:05:44.313857 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 23 01:05:44.313866 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 23 01:05:44.313877 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Jan 23 01:05:44.313885 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 01:05:44.313894 systemd[1]: Reached target slices.target - Slice Units. Jan 23 01:05:44.313902 systemd[1]: Reached target swap.target - Swaps. Jan 23 01:05:44.313911 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 23 01:05:44.313920 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 23 01:05:44.313932 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 23 01:05:44.313941 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 01:05:44.313949 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 01:05:44.313957 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 01:05:44.313965 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 23 01:05:44.313974 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 23 01:05:44.313984 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 23 01:05:44.313995 systemd[1]: Mounting media.mount - External Media Directory... Jan 23 01:05:44.314004 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:05:44.314013 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 23 01:05:44.314022 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 23 01:05:44.314030 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 23 01:05:44.314039 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 23 01:05:44.314048 systemd[1]: Reached target machines.target - Containers. 
Jan 23 01:05:44.314057 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 23 01:05:44.314066 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 01:05:44.314076 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 01:05:44.314084 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 23 01:05:44.314093 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 01:05:44.314101 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 01:05:44.314113 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 01:05:44.314122 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 23 01:05:44.314142 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 01:05:44.314151 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 23 01:05:44.314161 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 23 01:05:44.314170 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 23 01:05:44.314178 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 23 01:05:44.314188 systemd[1]: Stopped systemd-fsck-usr.service. Jan 23 01:05:44.314198 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 01:05:44.314207 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 01:05:44.314216 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Jan 23 01:05:44.314225 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 01:05:44.314235 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 23 01:05:44.314243 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 23 01:05:44.314252 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 01:05:44.314260 systemd[1]: verity-setup.service: Deactivated successfully. Jan 23 01:05:44.314270 systemd[1]: Stopped verity-setup.service. Jan 23 01:05:44.314279 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:05:44.314312 systemd-journald[1240]: Collecting audit messages is disabled. Jan 23 01:05:44.314336 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 23 01:05:44.314345 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 23 01:05:44.314355 systemd-journald[1240]: Journal started Jan 23 01:05:44.314375 systemd-journald[1240]: Runtime Journal (/run/log/journal/0cab7ada7437480f8370c793728476f4) is 8M, max 158.6M, 150.6M free. Jan 23 01:05:43.851536 systemd[1]: Queued start job for default target multi-user.target. Jan 23 01:05:43.863506 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 23 01:05:43.863880 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 23 01:05:44.317362 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 01:05:44.319307 systemd[1]: Mounted media.mount - External Media Directory. Jan 23 01:05:44.321387 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 23 01:05:44.324307 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. 
Jan 23 01:05:44.328192 kernel: loop: module loaded Jan 23 01:05:44.327421 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 23 01:05:44.330343 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 01:05:44.333084 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 23 01:05:44.333250 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 23 01:05:44.335411 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 01:05:44.335599 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 01:05:44.338146 kernel: fuse: init (API version 7.41) Jan 23 01:05:44.338191 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 01:05:44.338378 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 01:05:44.341311 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 23 01:05:44.343026 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 23 01:05:44.343122 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 23 01:05:44.346554 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 01:05:44.346690 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 01:05:44.350421 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 01:05:44.353350 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 01:05:44.356344 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 23 01:05:44.364243 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 01:05:44.366536 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 23 01:05:44.370251 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Jan 23 01:05:44.374232 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 23 01:05:44.374261 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 01:05:44.376516 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 23 01:05:44.382813 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 23 01:05:44.386257 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 01:05:44.457277 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 23 01:05:44.470236 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 23 01:05:44.472529 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 01:05:44.474049 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 23 01:05:44.476033 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 01:05:44.480227 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 01:05:44.485263 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 23 01:05:44.489279 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 23 01:05:44.493027 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 23 01:05:44.498062 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 23 01:05:44.500734 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. 
Jan 23 01:05:44.509148 kernel: ACPI: bus type drm_connector registered Jan 23 01:05:44.509624 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 01:05:44.509778 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 01:05:44.513037 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 23 01:05:44.525701 systemd-journald[1240]: Time spent on flushing to /var/log/journal/0cab7ada7437480f8370c793728476f4 is 76.854ms for 992 entries. Jan 23 01:05:44.525701 systemd-journald[1240]: System Journal (/var/log/journal/0cab7ada7437480f8370c793728476f4) is 11.8M, max 2.6G, 2.6G free. Jan 23 01:05:44.624500 systemd-journald[1240]: Received client request to flush runtime journal. Jan 23 01:05:44.624544 systemd-journald[1240]: /var/log/journal/0cab7ada7437480f8370c793728476f4/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating. Jan 23 01:05:44.624561 systemd-journald[1240]: Rotating system journal. Jan 23 01:05:44.624574 kernel: loop0: detected capacity change from 0 to 219144 Jan 23 01:05:44.517262 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 23 01:05:44.521332 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 23 01:05:44.544824 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 01:05:44.574325 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 01:05:44.625182 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 23 01:05:44.626749 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 23 01:05:44.783154 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 23 01:05:44.839143 kernel: loop1: detected capacity change from 0 to 27936 Jan 23 01:05:44.864933 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
Jan 23 01:05:44.886804 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 23 01:05:44.890001 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 01:05:44.923001 systemd-tmpfiles[1323]: ACLs are not supported, ignoring. Jan 23 01:05:44.923015 systemd-tmpfiles[1323]: ACLs are not supported, ignoring. Jan 23 01:05:44.925070 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 01:05:45.601682 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 23 01:05:45.605441 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 01:05:45.634949 systemd-udevd[1327]: Using default interface naming scheme 'v255'. Jan 23 01:05:45.801164 kernel: loop2: detected capacity change from 0 to 110984 Jan 23 01:05:45.910152 kernel: loop3: detected capacity change from 0 to 128560 Jan 23 01:05:46.020152 kernel: loop4: detected capacity change from 0 to 219144 Jan 23 01:05:46.031151 kernel: loop5: detected capacity change from 0 to 27936 Jan 23 01:05:46.041146 kernel: loop6: detected capacity change from 0 to 110984 Jan 23 01:05:46.050148 kernel: loop7: detected capacity change from 0 to 128560 Jan 23 01:05:46.056853 (sd-merge)[1333]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jan 23 01:05:46.057247 (sd-merge)[1333]: Merged extensions into '/usr'. Jan 23 01:05:46.060354 systemd[1]: Reload requested from client PID 1301 ('systemd-sysext') (unit systemd-sysext.service)... Jan 23 01:05:46.060368 systemd[1]: Reloading... Jan 23 01:05:46.113158 zram_generator::config[1355]: No configuration found. 
Jan 23 01:05:46.327165 kernel: mousedev: PS/2 mouse device common for all mice Jan 23 01:05:46.338171 kernel: hv_vmbus: registering driver hyperv_fb Jan 23 01:05:46.350167 kernel: hv_vmbus: registering driver hv_balloon Jan 23 01:05:46.352143 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jan 23 01:05:46.371185 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#75 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 23 01:05:46.376637 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jan 23 01:05:46.380143 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jan 23 01:05:46.382413 kernel: Console: switching to colour dummy device 80x25 Jan 23 01:05:46.389716 kernel: Console: switching to colour frame buffer device 128x48 Jan 23 01:05:46.387113 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 23 01:05:46.387335 systemd[1]: Reloading finished in 326 ms. Jan 23 01:05:46.399537 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 01:05:46.405560 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 23 01:05:46.419541 systemd[1]: Starting ensure-sysext.service... Jan 23 01:05:46.423601 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 01:05:46.427376 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 01:05:46.450231 systemd[1]: Reload requested from client PID 1476 ('systemctl') (unit ensure-sysext.service)... Jan 23 01:05:46.450242 systemd[1]: Reloading... Jan 23 01:05:46.472859 systemd-tmpfiles[1478]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 23 01:05:46.477060 systemd-tmpfiles[1478]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. 
Jan 23 01:05:46.478342 systemd-tmpfiles[1478]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 23 01:05:46.478529 systemd-tmpfiles[1478]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 23 01:05:46.480453 systemd-tmpfiles[1478]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 23 01:05:46.480658 systemd-tmpfiles[1478]: ACLs are not supported, ignoring. Jan 23 01:05:46.480697 systemd-tmpfiles[1478]: ACLs are not supported, ignoring. Jan 23 01:05:46.489961 systemd-tmpfiles[1478]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 01:05:46.489972 systemd-tmpfiles[1478]: Skipping /boot Jan 23 01:05:46.498199 systemd-tmpfiles[1478]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 01:05:46.498207 systemd-tmpfiles[1478]: Skipping /boot Jan 23 01:05:46.557170 zram_generator::config[1512]: No configuration found. Jan 23 01:05:46.780151 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Jan 23 01:05:46.796935 systemd[1]: Reloading finished in 346 ms. Jan 23 01:05:46.824142 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 01:05:46.849068 systemd[1]: Finished ensure-sysext.service. Jan 23 01:05:46.862790 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Jan 23 01:05:46.866047 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:05:46.866784 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 01:05:46.879706 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 23 01:05:46.882374 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Jan 23 01:05:46.883102 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 01:05:46.887917 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 01:05:46.893390 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 01:05:46.896174 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 01:05:46.898563 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 01:05:46.901314 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 23 01:05:46.904443 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 01:05:46.907291 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 23 01:05:46.911583 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 01:05:46.913437 systemd[1]: Reached target time-set.target - System Time Set. Jan 23 01:05:46.918901 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 23 01:05:46.922775 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 23 01:05:46.930709 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 01:05:46.933112 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:05:46.933768 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 01:05:46.933938 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 01:05:46.937697 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Jan 23 01:05:46.938234 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 01:05:46.940578 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 01:05:46.940737 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 01:05:46.944061 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 01:05:46.944234 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 01:05:46.951830 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 23 01:05:46.960208 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 23 01:05:46.965616 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 01:05:46.965665 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 01:05:46.976159 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 23 01:05:46.987546 augenrules[1626]: No rules Jan 23 01:05:46.988390 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 01:05:46.988665 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 01:05:47.004782 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 23 01:05:47.067258 systemd-resolved[1599]: Positive Trust Anchors: Jan 23 01:05:47.067273 systemd-resolved[1599]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 01:05:47.067305 systemd-resolved[1599]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 01:05:47.074606 systemd-networkd[1477]: lo: Link UP Jan 23 01:05:47.074614 systemd-networkd[1477]: lo: Gained carrier Jan 23 01:05:47.075596 systemd-networkd[1477]: Enumeration completed Jan 23 01:05:47.075699 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 01:05:47.075901 systemd-networkd[1477]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 01:05:47.075905 systemd-networkd[1477]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 01:05:47.078849 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Jan 23 01:05:47.079384 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 23 01:05:47.086019 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Jan 23 01:05:47.086226 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d46fb96 eth0: Data path switched to VF: enP30832s1 Jan 23 01:05:47.082244 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Jan 23 01:05:47.086623 systemd-networkd[1477]: enP30832s1: Link UP Jan 23 01:05:47.086700 systemd-networkd[1477]: eth0: Link UP Jan 23 01:05:47.086707 systemd-networkd[1477]: eth0: Gained carrier Jan 23 01:05:47.086723 systemd-networkd[1477]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 01:05:47.088826 systemd-resolved[1599]: Using system hostname 'ci-4459.2.2-n-059e17308a'. Jan 23 01:05:47.089812 systemd-networkd[1477]: enP30832s1: Gained carrier Jan 23 01:05:47.091432 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 01:05:47.093702 systemd[1]: Reached target network.target - Network. Jan 23 01:05:47.094495 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 01:05:47.105461 systemd-networkd[1477]: eth0: DHCPv4 address 10.200.8.21/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jan 23 01:05:47.112221 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 23 01:05:47.318500 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 01:05:48.465283 systemd-networkd[1477]: eth0: Gained IPv6LL Jan 23 01:05:48.467349 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 23 01:05:48.469289 systemd[1]: Reached target network-online.target - Network is Online. Jan 23 01:05:48.512606 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 23 01:05:48.514260 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 23 01:05:56.534153 ldconfig[1296]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
Jan 23 01:05:56.543734 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 23 01:05:56.546336 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 23 01:05:56.564700 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 23 01:05:56.568329 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 01:05:56.571262 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 23 01:05:56.574190 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 23 01:05:56.577184 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 23 01:05:56.578930 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 23 01:05:56.586363 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 23 01:05:56.587991 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 23 01:05:56.591166 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 23 01:05:56.591195 systemd[1]: Reached target paths.target - Path Units. Jan 23 01:05:56.594165 systemd[1]: Reached target timers.target - Timer Units. Jan 23 01:05:56.595720 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 23 01:05:56.598174 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 23 01:05:56.600912 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 23 01:05:56.602552 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 23 01:05:56.604225 systemd[1]: Reached target ssh-access.target - SSH Access Available. 
Jan 23 01:05:56.608456 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 23 01:05:56.612410 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 23 01:05:56.615624 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 23 01:05:56.617843 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 01:05:56.619234 systemd[1]: Reached target basic.target - Basic System. Jan 23 01:05:56.622207 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 23 01:05:56.622235 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 23 01:05:56.644392 systemd[1]: Starting chronyd.service - NTP client/server... Jan 23 01:05:56.646914 systemd[1]: Starting containerd.service - containerd container runtime... Jan 23 01:05:56.653236 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 23 01:05:56.656861 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 23 01:05:56.663300 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 23 01:05:56.669344 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 23 01:05:56.672937 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 23 01:05:56.674964 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 23 01:05:56.676922 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 23 01:05:56.678925 systemd[1]: hv_fcopy_uio_daemon.service - Hyper-V FCOPY UIO daemon was skipped because of an unmet condition check (ConditionPathExists=/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio). 
Jan 23 01:05:56.680977 jq[1658]: false Jan 23 01:05:56.682202 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jan 23 01:05:56.684494 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jan 23 01:05:56.685347 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:05:56.689626 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 23 01:05:56.697017 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 23 01:05:56.701802 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 23 01:05:56.709057 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 23 01:05:56.714270 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 23 01:05:56.720304 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 23 01:05:56.723521 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 23 01:05:56.723930 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 23 01:05:56.726308 systemd[1]: Starting update-engine.service - Update Engine... Jan 23 01:05:56.731006 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 23 01:05:56.739240 KVP[1664]: KVP starting; pid is:1664 Jan 23 01:05:56.740354 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 23 01:05:56.745475 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Jan 23 01:05:56.750188 kernel: hv_utils: KVP IC version 4.0 Jan 23 01:05:56.749483 KVP[1664]: KVP LIC Version: 3.1 Jan 23 01:05:56.754291 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 23 01:05:56.759664 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 23 01:05:56.763558 jq[1678]: true Jan 23 01:05:56.765455 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 23 01:05:56.768670 google_oslogin_nss_cache[1660]: oslogin_cache_refresh[1660]: Refreshing passwd entry cache Jan 23 01:05:56.771167 oslogin_cache_refresh[1660]: Refreshing passwd entry cache Jan 23 01:05:56.778510 extend-filesystems[1659]: Found /dev/nvme0n1p6 Jan 23 01:05:56.789880 systemd[1]: motdgen.service: Deactivated successfully. Jan 23 01:05:56.790412 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 23 01:05:56.796895 extend-filesystems[1659]: Found /dev/nvme0n1p9 Jan 23 01:05:56.796735 chronyd[1653]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Jan 23 01:05:56.801221 google_oslogin_nss_cache[1660]: oslogin_cache_refresh[1660]: Failure getting users, quitting Jan 23 01:05:56.801221 google_oslogin_nss_cache[1660]: oslogin_cache_refresh[1660]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 23 01:05:56.801221 google_oslogin_nss_cache[1660]: oslogin_cache_refresh[1660]: Refreshing group entry cache Jan 23 01:05:56.800503 oslogin_cache_refresh[1660]: Failure getting users, quitting Jan 23 01:05:56.800517 oslogin_cache_refresh[1660]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Jan 23 01:05:56.800552 oslogin_cache_refresh[1660]: Refreshing group entry cache Jan 23 01:05:56.803316 extend-filesystems[1659]: Checking size of /dev/nvme0n1p9 Jan 23 01:05:56.806316 (ntainerd)[1689]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 23 01:05:56.812312 jq[1688]: true Jan 23 01:05:56.813039 google_oslogin_nss_cache[1660]: oslogin_cache_refresh[1660]: Failure getting groups, quitting Jan 23 01:05:56.813082 google_oslogin_nss_cache[1660]: oslogin_cache_refresh[1660]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 23 01:05:56.813038 oslogin_cache_refresh[1660]: Failure getting groups, quitting Jan 23 01:05:56.813047 oslogin_cache_refresh[1660]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 23 01:05:56.815642 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 23 01:05:56.815882 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 23 01:05:56.824238 update_engine[1677]: I20260123 01:05:56.823913 1677 main.cc:92] Flatcar Update Engine starting Jan 23 01:05:56.845166 extend-filesystems[1659]: Old size kept for /dev/nvme0n1p9 Jan 23 01:05:56.845883 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 23 01:05:56.846083 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 23 01:05:56.850286 chronyd[1653]: Timezone right/UTC failed leap second check, ignoring Jan 23 01:05:56.850534 systemd[1]: Started chronyd.service - NTP client/server. Jan 23 01:05:56.850410 chronyd[1653]: Loaded seccomp filter (level 2) Jan 23 01:05:56.859810 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 23 01:05:56.880394 tar[1685]: linux-amd64/LICENSE Jan 23 01:05:56.881784 tar[1685]: linux-amd64/helm Jan 23 01:05:56.886948 systemd-logind[1676]: New seat seat0. 
Jan 23 01:05:56.892328 systemd-logind[1676]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 23 01:05:56.892460 systemd[1]: Started systemd-logind.service - User Login Management. Jan 23 01:05:57.003801 bash[1730]: Updated "/home/core/.ssh/authorized_keys" Jan 23 01:05:57.004667 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 23 01:05:57.009439 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 23 01:05:57.109567 dbus-daemon[1656]: [system] SELinux support is enabled Jan 23 01:05:57.110975 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 23 01:05:57.116328 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 23 01:05:57.116356 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 23 01:05:57.118802 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 23 01:05:57.118830 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 23 01:05:57.123030 systemd[1]: Started update-engine.service - Update Engine. Jan 23 01:05:57.123711 dbus-daemon[1656]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 23 01:05:57.124183 update_engine[1677]: I20260123 01:05:57.123930 1677 update_check_scheduler.cc:74] Next update check in 10m45s Jan 23 01:05:57.132629 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Jan 23 01:05:57.209426 coreos-metadata[1655]: Jan 23 01:05:57.208 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 23 01:05:57.212614 coreos-metadata[1655]: Jan 23 01:05:57.212 INFO Fetch successful Jan 23 01:05:57.212767 coreos-metadata[1655]: Jan 23 01:05:57.212 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jan 23 01:05:57.216643 coreos-metadata[1655]: Jan 23 01:05:57.216 INFO Fetch successful Jan 23 01:05:57.216643 coreos-metadata[1655]: Jan 23 01:05:57.216 INFO Fetching http://168.63.129.16/machine/a6279d42-a8d0-4716-9c41-01dbd550df0b/5de0862e%2D1f4f%2D4271%2D9c0d%2D06da44c5e23a.%5Fci%2D4459.2.2%2Dn%2D059e17308a?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jan 23 01:05:57.218037 coreos-metadata[1655]: Jan 23 01:05:57.217 INFO Fetch successful Jan 23 01:05:57.218037 coreos-metadata[1655]: Jan 23 01:05:57.218 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jan 23 01:05:57.228685 coreos-metadata[1655]: Jan 23 01:05:57.227 INFO Fetch successful Jan 23 01:05:57.266745 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 23 01:05:57.269378 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 23 01:05:57.439568 tar[1685]: linux-amd64/README.md Jan 23 01:05:57.460932 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 23 01:05:57.532539 sshd_keygen[1705]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 23 01:05:57.554448 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 23 01:05:57.559165 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 23 01:05:57.562927 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... 
Jan 23 01:05:57.575267 locksmithd[1754]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 23 01:05:57.578355 systemd[1]: issuegen.service: Deactivated successfully. Jan 23 01:05:57.578528 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 23 01:05:57.581975 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 23 01:05:57.598605 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 23 01:05:57.605911 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jan 23 01:05:57.610842 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 23 01:05:57.618905 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 23 01:05:57.622239 systemd[1]: Reached target getty.target - Login Prompts. Jan 23 01:05:57.932728 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:05:58.164043 (kubelet)[1795]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 01:05:58.548999 kubelet[1795]: E0123 01:05:58.548936 1795 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 01:05:58.550682 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 01:05:58.550818 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 01:05:58.551149 systemd[1]: kubelet.service: Consumed 812ms CPU time, 256.7M memory peak. 
Jan 23 01:05:58.881808 containerd[1689]: time="2026-01-23T01:05:58Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 23 01:05:58.882414 containerd[1689]: time="2026-01-23T01:05:58.882386224Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Jan 23 01:05:58.888012 containerd[1689]: time="2026-01-23T01:05:58.887980130Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.613µs" Jan 23 01:05:58.888012 containerd[1689]: time="2026-01-23T01:05:58.888003085Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 23 01:05:58.888096 containerd[1689]: time="2026-01-23T01:05:58.888019747Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 23 01:05:58.888177 containerd[1689]: time="2026-01-23T01:05:58.888160398Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 23 01:05:58.888177 containerd[1689]: time="2026-01-23T01:05:58.888173288Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 23 01:05:58.888226 containerd[1689]: time="2026-01-23T01:05:58.888192713Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 01:05:58.888256 containerd[1689]: time="2026-01-23T01:05:58.888240452Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 01:05:58.888256 containerd[1689]: time="2026-01-23T01:05:58.888252064Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 
01:05:58.888436 containerd[1689]: time="2026-01-23T01:05:58.888418442Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 01:05:58.888436 containerd[1689]: time="2026-01-23T01:05:58.888429638Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 01:05:58.888475 containerd[1689]: time="2026-01-23T01:05:58.888438006Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 01:05:58.888475 containerd[1689]: time="2026-01-23T01:05:58.888445104Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 23 01:05:58.888512 containerd[1689]: time="2026-01-23T01:05:58.888502243Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 23 01:05:58.888649 containerd[1689]: time="2026-01-23T01:05:58.888632194Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 01:05:58.888675 containerd[1689]: time="2026-01-23T01:05:58.888653170Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 01:05:58.888675 containerd[1689]: time="2026-01-23T01:05:58.888661458Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 23 01:05:58.888707 containerd[1689]: time="2026-01-23T01:05:58.888688449Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 23 01:05:58.888922 
containerd[1689]: time="2026-01-23T01:05:58.888891913Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 23 01:05:58.888984 containerd[1689]: time="2026-01-23T01:05:58.888962597Z" level=info msg="metadata content store policy set" policy=shared Jan 23 01:05:58.900472 containerd[1689]: time="2026-01-23T01:05:58.900445026Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 23 01:05:58.900523 containerd[1689]: time="2026-01-23T01:05:58.900484140Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 23 01:05:58.900523 containerd[1689]: time="2026-01-23T01:05:58.900498969Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 23 01:05:58.900523 containerd[1689]: time="2026-01-23T01:05:58.900509995Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 23 01:05:58.900523 containerd[1689]: time="2026-01-23T01:05:58.900520956Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 23 01:05:58.900597 containerd[1689]: time="2026-01-23T01:05:58.900531412Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 23 01:05:58.900597 containerd[1689]: time="2026-01-23T01:05:58.900543591Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 23 01:05:58.900597 containerd[1689]: time="2026-01-23T01:05:58.900554994Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 23 01:05:58.900597 containerd[1689]: time="2026-01-23T01:05:58.900565330Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 23 01:05:58.900597 containerd[1689]: 
time="2026-01-23T01:05:58.900575104Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 23 01:05:58.900597 containerd[1689]: time="2026-01-23T01:05:58.900584506Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 23 01:05:58.900597 containerd[1689]: time="2026-01-23T01:05:58.900595876Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 23 01:05:58.900711 containerd[1689]: time="2026-01-23T01:05:58.900684379Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 23 01:05:58.900711 containerd[1689]: time="2026-01-23T01:05:58.900699657Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 23 01:05:58.900742 containerd[1689]: time="2026-01-23T01:05:58.900717655Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 23 01:05:58.900742 containerd[1689]: time="2026-01-23T01:05:58.900728509Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 23 01:05:58.900742 containerd[1689]: time="2026-01-23T01:05:58.900738132Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 23 01:05:58.900788 containerd[1689]: time="2026-01-23T01:05:58.900747005Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 23 01:05:58.900788 containerd[1689]: time="2026-01-23T01:05:58.900757142Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 23 01:05:58.900788 containerd[1689]: time="2026-01-23T01:05:58.900779379Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 23 01:05:58.900841 containerd[1689]: time="2026-01-23T01:05:58.900792366Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 23 01:05:58.900841 containerd[1689]: time="2026-01-23T01:05:58.900801990Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 23 01:05:58.900841 containerd[1689]: time="2026-01-23T01:05:58.900813430Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 23 01:05:58.900891 containerd[1689]: time="2026-01-23T01:05:58.900853035Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 23 01:05:58.900891 containerd[1689]: time="2026-01-23T01:05:58.900864559Z" level=info msg="Start snapshots syncer" Jan 23 01:05:58.900891 containerd[1689]: time="2026-01-23T01:05:58.900883090Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 23 01:05:58.901095 containerd[1689]: time="2026-01-23T01:05:58.901061397Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 23 01:05:58.901214 containerd[1689]: time="2026-01-23T01:05:58.901105167Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 23 01:05:58.901214 containerd[1689]: time="2026-01-23T01:05:58.901147684Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 23 01:05:58.901254 containerd[1689]: time="2026-01-23T01:05:58.901219839Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 23 01:05:58.901254 containerd[1689]: time="2026-01-23T01:05:58.901236116Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 23 01:05:58.901254 containerd[1689]: time="2026-01-23T01:05:58.901245794Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 23 01:05:58.901304 containerd[1689]: time="2026-01-23T01:05:58.901255174Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 23 01:05:58.901304 containerd[1689]: time="2026-01-23T01:05:58.901266064Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 23 01:05:58.901304 containerd[1689]: time="2026-01-23T01:05:58.901276117Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 23 01:05:58.901304 containerd[1689]: time="2026-01-23T01:05:58.901285615Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 23 01:05:58.901368 containerd[1689]: time="2026-01-23T01:05:58.901305319Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 23 01:05:58.901368 containerd[1689]: time="2026-01-23T01:05:58.901315940Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 23 01:05:58.901607 containerd[1689]: time="2026-01-23T01:05:58.901576507Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 23 01:05:58.902494 containerd[1689]: time="2026-01-23T01:05:58.902469052Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 01:05:58.902811 containerd[1689]: time="2026-01-23T01:05:58.902700048Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 01:05:58.902811 containerd[1689]: time="2026-01-23T01:05:58.902717594Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 01:05:58.902811 containerd[1689]: time="2026-01-23T01:05:58.902727733Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 01:05:58.902811 containerd[1689]: time="2026-01-23T01:05:58.902735300Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 23 01:05:58.902811 containerd[1689]: time="2026-01-23T01:05:58.902744994Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 23 01:05:58.902811 containerd[1689]: time="2026-01-23T01:05:58.902761118Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 23 01:05:58.902811 containerd[1689]: time="2026-01-23T01:05:58.902775390Z" level=info msg="runtime interface created" Jan 23 01:05:58.902811 containerd[1689]: time="2026-01-23T01:05:58.902780033Z" level=info msg="created NRI interface" Jan 23 01:05:58.902811 containerd[1689]: time="2026-01-23T01:05:58.902787263Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 23 01:05:58.902811 containerd[1689]: time="2026-01-23T01:05:58.902798234Z" level=info msg="Connect containerd service" Jan 23 01:05:58.902998 containerd[1689]: time="2026-01-23T01:05:58.902821307Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 23 01:05:58.903413 
containerd[1689]: time="2026-01-23T01:05:58.903390796Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 01:05:59.933307 containerd[1689]: time="2026-01-23T01:05:59.933240012Z" level=info msg="Start subscribing containerd event" Jan 23 01:05:59.933623 containerd[1689]: time="2026-01-23T01:05:59.933404118Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 23 01:05:59.933623 containerd[1689]: time="2026-01-23T01:05:59.933442398Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 23 01:05:59.935008 containerd[1689]: time="2026-01-23T01:05:59.933677896Z" level=info msg="Start recovering state" Jan 23 01:05:59.935008 containerd[1689]: time="2026-01-23T01:05:59.933799103Z" level=info msg="Start event monitor" Jan 23 01:05:59.935008 containerd[1689]: time="2026-01-23T01:05:59.933833013Z" level=info msg="Start cni network conf syncer for default" Jan 23 01:05:59.935008 containerd[1689]: time="2026-01-23T01:05:59.933840879Z" level=info msg="Start streaming server" Jan 23 01:05:59.935008 containerd[1689]: time="2026-01-23T01:05:59.933852966Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 23 01:05:59.935008 containerd[1689]: time="2026-01-23T01:05:59.933859796Z" level=info msg="runtime interface starting up..." Jan 23 01:05:59.935008 containerd[1689]: time="2026-01-23T01:05:59.933865649Z" level=info msg="starting plugins..." Jan 23 01:05:59.935008 containerd[1689]: time="2026-01-23T01:05:59.933877630Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 23 01:05:59.934075 systemd[1]: Started containerd.service - containerd container runtime. Jan 23 01:05:59.937254 systemd[1]: Reached target multi-user.target - Multi-User System. 
Jan 23 01:05:59.938289 containerd[1689]: time="2026-01-23T01:05:59.937515873Z" level=info msg="containerd successfully booted in 1.056134s" Jan 23 01:05:59.939455 systemd[1]: Startup finished in 2.900s (kernel) + 14.948s (initrd) + 20.716s (userspace) = 38.565s. Jan 23 01:06:00.410183 waagent[1787]: 2026-01-23T01:06:00.410031Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Jan 23 01:06:00.434906 waagent[1787]: 2026-01-23T01:06:00.410374Z INFO Daemon Daemon OS: flatcar 4459.2.2 Jan 23 01:06:00.434906 waagent[1787]: 2026-01-23T01:06:00.410517Z INFO Daemon Daemon Python: 3.11.13 Jan 23 01:06:00.434906 waagent[1787]: 2026-01-23T01:06:00.410785Z INFO Daemon Daemon Run daemon Jan 23 01:06:00.434906 waagent[1787]: 2026-01-23T01:06:00.411196Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4459.2.2' Jan 23 01:06:00.434906 waagent[1787]: 2026-01-23T01:06:00.411461Z INFO Daemon Daemon Using waagent for provisioning Jan 23 01:06:00.434906 waagent[1787]: 2026-01-23T01:06:00.411638Z INFO Daemon Daemon Activate resource disk Jan 23 01:06:00.434906 waagent[1787]: 2026-01-23T01:06:00.412055Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jan 23 01:06:00.434906 waagent[1787]: 2026-01-23T01:06:00.413619Z INFO Daemon Daemon Found device: None Jan 23 01:06:00.434906 waagent[1787]: 2026-01-23T01:06:00.414027Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jan 23 01:06:00.434906 waagent[1787]: 2026-01-23T01:06:00.414274Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jan 23 01:06:00.434906 waagent[1787]: 2026-01-23T01:06:00.415047Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 23 01:06:00.434906 waagent[1787]: 2026-01-23T01:06:00.415287Z INFO Daemon Daemon Running default provisioning handler Jan 23 01:06:00.434906 
waagent[1787]: 2026-01-23T01:06:00.424640Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jan 23 01:06:00.434906 waagent[1787]: 2026-01-23T01:06:00.425896Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jan 23 01:06:00.434906 waagent[1787]: 2026-01-23T01:06:00.426105Z INFO Daemon Daemon cloud-init is enabled: False Jan 23 01:06:00.434906 waagent[1787]: 2026-01-23T01:06:00.426639Z INFO Daemon Daemon Copying ovf-env.xml Jan 23 01:06:00.542001 login[1790]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Jan 23 01:06:00.542165 login[1789]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 23 01:06:00.549686 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 23 01:06:00.550817 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 23 01:06:00.552693 systemd-logind[1676]: New session 1 of user core. Jan 23 01:06:00.569598 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 23 01:06:00.571419 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 23 01:06:00.582641 (systemd)[1834]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 23 01:06:00.584434 systemd-logind[1676]: New session c1 of user core. Jan 23 01:06:00.716498 waagent[1787]: 2026-01-23T01:06:00.714277Z INFO Daemon Daemon Successfully mounted dvd Jan 23 01:06:00.777138 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. 
Jan 23 01:06:00.780934 waagent[1787]: 2026-01-23T01:06:00.780880Z INFO Daemon Daemon Detect protocol endpoint Jan 23 01:06:00.783093 waagent[1787]: 2026-01-23T01:06:00.783052Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 23 01:06:00.785428 waagent[1787]: 2026-01-23T01:06:00.785356Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Jan 23 01:06:00.787886 waagent[1787]: 2026-01-23T01:06:00.787835Z INFO Daemon Daemon Test for route to 168.63.129.16 Jan 23 01:06:00.788167 systemd[1834]: Queued start job for default target default.target. Jan 23 01:06:00.788353 waagent[1787]: 2026-01-23T01:06:00.788228Z INFO Daemon Daemon Route to 168.63.129.16 exists Jan 23 01:06:00.790459 waagent[1787]: 2026-01-23T01:06:00.790421Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jan 23 01:06:00.795695 systemd[1834]: Created slice app.slice - User Application Slice. Jan 23 01:06:00.795917 systemd[1834]: Reached target paths.target - Paths. Jan 23 01:06:00.795958 systemd[1834]: Reached target timers.target - Timers. Jan 23 01:06:00.798239 systemd[1834]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 23 01:06:00.800936 waagent[1787]: 2026-01-23T01:06:00.800711Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jan 23 01:06:00.802817 waagent[1787]: 2026-01-23T01:06:00.802354Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jan 23 01:06:00.802817 waagent[1787]: 2026-01-23T01:06:00.802611Z INFO Daemon Daemon Server preferred version:2015-04-05 Jan 23 01:06:00.809578 systemd[1834]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 23 01:06:00.809661 systemd[1834]: Reached target sockets.target - Sockets. Jan 23 01:06:00.809731 systemd[1834]: Reached target basic.target - Basic System. Jan 23 01:06:00.809777 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 23 01:06:00.810616 systemd[1834]: Reached target default.target - Main User Target. 
Jan 23 01:06:00.810646 systemd[1834]: Startup finished in 221ms. Jan 23 01:06:00.813273 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 23 01:06:00.961867 waagent[1787]: 2026-01-23T01:06:00.961803Z INFO Daemon Daemon Initializing goal state during protocol detection Jan 23 01:06:00.964608 waagent[1787]: 2026-01-23T01:06:00.962014Z INFO Daemon Daemon Forcing an update of the goal state. Jan 23 01:06:00.968346 waagent[1787]: 2026-01-23T01:06:00.968269Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 23 01:06:00.993943 waagent[1787]: 2026-01-23T01:06:00.993911Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Jan 23 01:06:01.001156 waagent[1787]: 2026-01-23T01:06:00.994441Z INFO Daemon Jan 23 01:06:01.001156 waagent[1787]: 2026-01-23T01:06:00.994640Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: ddd6d314-5e9b-4ecb-b9d9-3f55f86f2fed eTag: 9124287074592754604 source: Fabric] Jan 23 01:06:01.001156 waagent[1787]: 2026-01-23T01:06:00.994897Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jan 23 01:06:01.001156 waagent[1787]: 2026-01-23T01:06:00.995490Z INFO Daemon Jan 23 01:06:01.001156 waagent[1787]: 2026-01-23T01:06:00.995669Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jan 23 01:06:01.003151 waagent[1787]: 2026-01-23T01:06:01.002663Z INFO Daemon Daemon Downloading artifacts profile blob Jan 23 01:06:01.080086 waagent[1787]: 2026-01-23T01:06:01.080037Z INFO Daemon Downloaded certificate {'thumbprint': 'A6D2033A87649556DDA588F7BB91E40CE37D9388', 'hasPrivateKey': True} Jan 23 01:06:01.081057 waagent[1787]: 2026-01-23T01:06:01.080513Z INFO Daemon Fetch goal state completed Jan 23 01:06:01.085835 waagent[1787]: 2026-01-23T01:06:01.085762Z INFO Daemon Daemon Starting provisioning Jan 23 01:06:01.087362 waagent[1787]: 2026-01-23T01:06:01.085944Z INFO Daemon Daemon Handle ovf-env.xml. 
Jan 23 01:06:01.087362 waagent[1787]: 2026-01-23T01:06:01.086216Z INFO Daemon Daemon Set hostname [ci-4459.2.2-n-059e17308a] Jan 23 01:06:01.090941 waagent[1787]: 2026-01-23T01:06:01.090905Z INFO Daemon Daemon Publish hostname [ci-4459.2.2-n-059e17308a] Jan 23 01:06:01.096700 waagent[1787]: 2026-01-23T01:06:01.091179Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jan 23 01:06:01.096700 waagent[1787]: 2026-01-23T01:06:01.091758Z INFO Daemon Daemon Primary interface is [eth0] Jan 23 01:06:01.099287 systemd-networkd[1477]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 01:06:01.099294 systemd-networkd[1477]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 01:06:01.099316 systemd-networkd[1477]: eth0: DHCP lease lost Jan 23 01:06:01.100163 waagent[1787]: 2026-01-23T01:06:01.099956Z INFO Daemon Daemon Create user account if not exists Jan 23 01:06:01.101506 waagent[1787]: 2026-01-23T01:06:01.101431Z INFO Daemon Daemon User core already exists, skip useradd Jan 23 01:06:01.102933 waagent[1787]: 2026-01-23T01:06:01.101891Z INFO Daemon Daemon Configure sudoer Jan 23 01:06:01.107296 waagent[1787]: 2026-01-23T01:06:01.107248Z INFO Daemon Daemon Configure sshd Jan 23 01:06:01.110722 waagent[1787]: 2026-01-23T01:06:01.110678Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jan 23 01:06:01.115741 waagent[1787]: 2026-01-23T01:06:01.110824Z INFO Daemon Daemon Deploy ssh public key. Jan 23 01:06:01.128167 systemd-networkd[1477]: eth0: DHCPv4 address 10.200.8.21/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jan 23 01:06:01.543678 login[1790]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 23 01:06:01.547895 systemd-logind[1676]: New session 2 of user core. 
Jan 23 01:06:01.555252 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 23 01:06:02.190504 waagent[1787]: 2026-01-23T01:06:02.190442Z INFO Daemon Daemon Provisioning complete Jan 23 01:06:02.199457 waagent[1787]: 2026-01-23T01:06:02.199422Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jan 23 01:06:02.200396 waagent[1787]: 2026-01-23T01:06:02.199617Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jan 23 01:06:02.200396 waagent[1787]: 2026-01-23T01:06:02.199873Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Jan 23 01:06:02.295924 waagent[1878]: 2026-01-23T01:06:02.295861Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Jan 23 01:06:02.296227 waagent[1878]: 2026-01-23T01:06:02.295956Z INFO ExtHandler ExtHandler OS: flatcar 4459.2.2 Jan 23 01:06:02.296227 waagent[1878]: 2026-01-23T01:06:02.295993Z INFO ExtHandler ExtHandler Python: 3.11.13 Jan 23 01:06:02.296227 waagent[1878]: 2026-01-23T01:06:02.296031Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Jan 23 01:06:02.381354 waagent[1878]: 2026-01-23T01:06:02.381298Z INFO ExtHandler ExtHandler Distro: flatcar-4459.2.2; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Jan 23 01:06:02.381489 waagent[1878]: 2026-01-23T01:06:02.381464Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 23 01:06:02.381532 waagent[1878]: 2026-01-23T01:06:02.381514Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 23 01:06:02.387627 waagent[1878]: 2026-01-23T01:06:02.387120Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 23 01:06:02.393577 waagent[1878]: 2026-01-23T01:06:02.393543Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Jan 23 01:06:02.393893 waagent[1878]: 
2026-01-23T01:06:02.393864Z INFO ExtHandler Jan 23 01:06:02.393933 waagent[1878]: 2026-01-23T01:06:02.393917Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 0b4a9232-0fdc-432a-8e24-bd5aff8b3628 eTag: 9124287074592754604 source: Fabric] Jan 23 01:06:02.394135 waagent[1878]: 2026-01-23T01:06:02.394109Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Jan 23 01:06:02.394470 waagent[1878]: 2026-01-23T01:06:02.394447Z INFO ExtHandler Jan 23 01:06:02.394514 waagent[1878]: 2026-01-23T01:06:02.394483Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jan 23 01:06:02.398647 waagent[1878]: 2026-01-23T01:06:02.398615Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 23 01:06:02.496991 waagent[1878]: 2026-01-23T01:06:02.496505Z INFO ExtHandler Downloaded certificate {'thumbprint': 'A6D2033A87649556DDA588F7BB91E40CE37D9388', 'hasPrivateKey': True} Jan 23 01:06:02.496991 waagent[1878]: 2026-01-23T01:06:02.496896Z INFO ExtHandler Fetch goal state completed Jan 23 01:06:02.507399 waagent[1878]: 2026-01-23T01:06:02.507356Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.2 1 Jul 2025 (Library: OpenSSL 3.4.2 1 Jul 2025) Jan 23 01:06:02.511070 waagent[1878]: 2026-01-23T01:06:02.511017Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 1878 Jan 23 01:06:02.511178 waagent[1878]: 2026-01-23T01:06:02.511161Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jan 23 01:06:02.511406 waagent[1878]: 2026-01-23T01:06:02.511381Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Jan 23 01:06:02.512362 waagent[1878]: 2026-01-23T01:06:02.512336Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4459.2.2', '', 'Flatcar Container Linux by Kinvolk'] Jan 23 01:06:02.512619 waagent[1878]: 2026-01-23T01:06:02.512597Z INFO 
ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4459.2.2', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Jan 23 01:06:02.512709 waagent[1878]: 2026-01-23T01:06:02.512693Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Jan 23 01:06:02.513041 waagent[1878]: 2026-01-23T01:06:02.513021Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jan 23 01:06:02.665116 waagent[1878]: 2026-01-23T01:06:02.665086Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jan 23 01:06:02.665286 waagent[1878]: 2026-01-23T01:06:02.665263Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jan 23 01:06:02.670905 waagent[1878]: 2026-01-23T01:06:02.670540Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jan 23 01:06:02.675846 systemd[1]: Reload requested from client PID 1893 ('systemctl') (unit waagent.service)... Jan 23 01:06:02.675860 systemd[1]: Reloading... Jan 23 01:06:02.754157 zram_generator::config[1932]: No configuration found. Jan 23 01:06:02.806155 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#86 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001 Jan 23 01:06:02.923496 systemd[1]: Reloading finished in 247 ms. Jan 23 01:06:02.947782 waagent[1878]: 2026-01-23T01:06:02.946473Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jan 23 01:06:02.947782 waagent[1878]: 2026-01-23T01:06:02.946579Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jan 23 01:06:03.470737 waagent[1878]: 2026-01-23T01:06:03.470667Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. 
Jan 23 01:06:03.471025 waagent[1878]: 2026-01-23T01:06:03.471001Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Jan 23 01:06:03.471811 waagent[1878]: 2026-01-23T01:06:03.471715Z INFO ExtHandler ExtHandler Starting env monitor service. Jan 23 01:06:03.471811 waagent[1878]: 2026-01-23T01:06:03.471777Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 23 01:06:03.472011 waagent[1878]: 2026-01-23T01:06:03.471982Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 23 01:06:03.472208 waagent[1878]: 2026-01-23T01:06:03.472187Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jan 23 01:06:03.472387 waagent[1878]: 2026-01-23T01:06:03.472347Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jan 23 01:06:03.472590 waagent[1878]: 2026-01-23T01:06:03.472408Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 23 01:06:03.472590 waagent[1878]: 2026-01-23T01:06:03.472565Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jan 23 01:06:03.472644 waagent[1878]: 2026-01-23T01:06:03.472623Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 23 01:06:03.472782 waagent[1878]: 2026-01-23T01:06:03.472765Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
Jan 23 01:06:03.472877 waagent[1878]: 2026-01-23T01:06:03.472857Z INFO EnvHandler ExtHandler Configure routes Jan 23 01:06:03.472929 waagent[1878]: 2026-01-23T01:06:03.472902Z INFO EnvHandler ExtHandler Gateway:None Jan 23 01:06:03.472962 waagent[1878]: 2026-01-23T01:06:03.472947Z INFO EnvHandler ExtHandler Routes:None Jan 23 01:06:03.473457 waagent[1878]: 2026-01-23T01:06:03.473433Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jan 23 01:06:03.473549 waagent[1878]: 2026-01-23T01:06:03.473496Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jan 23 01:06:03.473549 waagent[1878]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jan 23 01:06:03.473549 waagent[1878]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Jan 23 01:06:03.473549 waagent[1878]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jan 23 01:06:03.473549 waagent[1878]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jan 23 01:06:03.473549 waagent[1878]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 23 01:06:03.473549 waagent[1878]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 23 01:06:03.473804 waagent[1878]: 2026-01-23T01:06:03.473564Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Jan 23 01:06:03.473804 waagent[1878]: 2026-01-23T01:06:03.473774Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jan 23 01:06:03.483041 waagent[1878]: 2026-01-23T01:06:03.483009Z INFO ExtHandler ExtHandler Jan 23 01:06:03.483099 waagent[1878]: 2026-01-23T01:06:03.483069Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 09127808-fd5b-4a4e-9bca-d301a5676363 correlation 2b066935-e308-4336-901e-af1739723e29 created: 2026-01-23T01:04:35.218544Z] Jan 23 01:06:03.483384 waagent[1878]: 2026-01-23T01:06:03.483364Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jan 23 01:06:03.484540 waagent[1878]: 2026-01-23T01:06:03.484501Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Jan 23 01:06:03.515383 waagent[1878]: 2026-01-23T01:06:03.515341Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Jan 23 01:06:03.515383 waagent[1878]: Try `iptables -h' or 'iptables --help' for more information.) 
Jan 23 01:06:03.515651 waagent[1878]: 2026-01-23T01:06:03.515628Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 57E75395-F72A-47CB-966A-E1BB11F7E288;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Jan 23 01:06:03.599578 waagent[1878]: 2026-01-23T01:06:03.599534Z INFO MonitorHandler ExtHandler Network interfaces: Jan 23 01:06:03.599578 waagent[1878]: Executing ['ip', '-a', '-o', 'link']: Jan 23 01:06:03.599578 waagent[1878]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jan 23 01:06:03.599578 waagent[1878]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:46:fb:96 brd ff:ff:ff:ff:ff:ff\ alias Network Device Jan 23 01:06:03.599578 waagent[1878]: 3: enP30832s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:46:fb:96 brd ff:ff:ff:ff:ff:ff\ altname enP30832p0s0 Jan 23 01:06:03.599578 waagent[1878]: Executing ['ip', '-4', '-a', '-o', 'address']: Jan 23 01:06:03.599578 waagent[1878]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jan 23 01:06:03.599578 waagent[1878]: 2: eth0 inet 10.200.8.21/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Jan 23 01:06:03.599578 waagent[1878]: Executing ['ip', '-6', '-a', '-o', 'address']: Jan 23 01:06:03.599578 waagent[1878]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jan 23 01:06:03.599578 waagent[1878]: 2: eth0 inet6 fe80::7eed:8dff:fe46:fb96/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 23 01:06:03.659039 waagent[1878]: 2026-01-23T01:06:03.658991Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Jan 23 01:06:03.659039 waagent[1878]: Chain INPUT (policy ACCEPT 0 
packets, 0 bytes) Jan 23 01:06:03.659039 waagent[1878]: pkts bytes target prot opt in out source destination Jan 23 01:06:03.659039 waagent[1878]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 23 01:06:03.659039 waagent[1878]: pkts bytes target prot opt in out source destination Jan 23 01:06:03.659039 waagent[1878]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 23 01:06:03.659039 waagent[1878]: pkts bytes target prot opt in out source destination Jan 23 01:06:03.659039 waagent[1878]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 23 01:06:03.659039 waagent[1878]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 23 01:06:03.659039 waagent[1878]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 23 01:06:03.661685 waagent[1878]: 2026-01-23T01:06:03.661638Z INFO EnvHandler ExtHandler Current Firewall rules: Jan 23 01:06:03.661685 waagent[1878]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 23 01:06:03.661685 waagent[1878]: pkts bytes target prot opt in out source destination Jan 23 01:06:03.661685 waagent[1878]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 23 01:06:03.661685 waagent[1878]: pkts bytes target prot opt in out source destination Jan 23 01:06:03.661685 waagent[1878]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 23 01:06:03.661685 waagent[1878]: pkts bytes target prot opt in out source destination Jan 23 01:06:03.661685 waagent[1878]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 23 01:06:03.661685 waagent[1878]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 23 01:06:03.661685 waagent[1878]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 23 01:06:08.801694 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 23 01:06:08.803037 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 23 01:06:09.310931 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:06:09.313757 (kubelet)[2031]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 01:06:09.346353 kubelet[2031]: E0123 01:06:09.346319 2031 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 01:06:09.348749 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 01:06:09.348867 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 01:06:09.349189 systemd[1]: kubelet.service: Consumed 122ms CPU time, 108.9M memory peak. Jan 23 01:06:19.599722 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 23 01:06:19.601204 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:06:20.061088 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:06:20.066428 (kubelet)[2046]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 01:06:20.097728 kubelet[2046]: E0123 01:06:20.097698 2046 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 01:06:20.099230 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 01:06:20.099353 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 23 01:06:20.099649 systemd[1]: kubelet.service: Consumed 118ms CPU time, 110.1M memory peak. Jan 23 01:06:20.632427 chronyd[1653]: Selected source PHC0 Jan 23 01:06:21.431423 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 23 01:06:21.432536 systemd[1]: Started sshd@0-10.200.8.21:22-10.200.16.10:47990.service - OpenSSH per-connection server daemon (10.200.16.10:47990). Jan 23 01:06:22.208290 sshd[2054]: Accepted publickey for core from 10.200.16.10 port 47990 ssh2: RSA SHA256:ZNqmZSbvXn2N0HVppthi7f6PjojzkUQ0aRQFpEvxwIk Jan 23 01:06:22.209353 sshd-session[2054]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:06:22.213631 systemd-logind[1676]: New session 3 of user core. Jan 23 01:06:22.221271 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 23 01:06:22.807663 systemd[1]: Started sshd@1-10.200.8.21:22-10.200.16.10:48002.service - OpenSSH per-connection server daemon (10.200.16.10:48002). Jan 23 01:06:23.492162 sshd[2060]: Accepted publickey for core from 10.200.16.10 port 48002 ssh2: RSA SHA256:ZNqmZSbvXn2N0HVppthi7f6PjojzkUQ0aRQFpEvxwIk Jan 23 01:06:23.492827 sshd-session[2060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:06:23.496697 systemd-logind[1676]: New session 4 of user core. Jan 23 01:06:23.499270 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 23 01:06:23.984435 sshd[2063]: Connection closed by 10.200.16.10 port 48002 Jan 23 01:06:23.985291 sshd-session[2060]: pam_unix(sshd:session): session closed for user core Jan 23 01:06:23.987597 systemd[1]: sshd@1-10.200.8.21:22-10.200.16.10:48002.service: Deactivated successfully. Jan 23 01:06:23.988932 systemd[1]: session-4.scope: Deactivated successfully. Jan 23 01:06:23.990830 systemd-logind[1676]: Session 4 logged out. Waiting for processes to exit. Jan 23 01:06:23.991497 systemd-logind[1676]: Removed session 4. 
Jan 23 01:06:24.116308 systemd[1]: Started sshd@2-10.200.8.21:22-10.200.16.10:48004.service - OpenSSH per-connection server daemon (10.200.16.10:48004). Jan 23 01:06:24.793871 sshd[2069]: Accepted publickey for core from 10.200.16.10 port 48004 ssh2: RSA SHA256:ZNqmZSbvXn2N0HVppthi7f6PjojzkUQ0aRQFpEvxwIk Jan 23 01:06:24.794298 sshd-session[2069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:06:24.798832 systemd-logind[1676]: New session 5 of user core. Jan 23 01:06:24.804284 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 23 01:06:25.311933 sshd[2072]: Connection closed by 10.200.16.10 port 48004 Jan 23 01:06:25.313298 sshd-session[2069]: pam_unix(sshd:session): session closed for user core Jan 23 01:06:25.316163 systemd[1]: sshd@2-10.200.8.21:22-10.200.16.10:48004.service: Deactivated successfully. Jan 23 01:06:25.317775 systemd[1]: session-5.scope: Deactivated successfully. Jan 23 01:06:25.318404 systemd-logind[1676]: Session 5 logged out. Waiting for processes to exit. Jan 23 01:06:25.319544 systemd-logind[1676]: Removed session 5. Jan 23 01:06:25.542568 systemd[1]: Started sshd@3-10.200.8.21:22-10.200.16.10:48006.service - OpenSSH per-connection server daemon (10.200.16.10:48006). Jan 23 01:06:26.243210 sshd[2078]: Accepted publickey for core from 10.200.16.10 port 48006 ssh2: RSA SHA256:ZNqmZSbvXn2N0HVppthi7f6PjojzkUQ0aRQFpEvxwIk Jan 23 01:06:26.244261 sshd-session[2078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:06:26.248280 systemd-logind[1676]: New session 6 of user core. Jan 23 01:06:26.256292 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 23 01:06:26.717497 sshd[2081]: Connection closed by 10.200.16.10 port 48006 Jan 23 01:06:26.718311 sshd-session[2078]: pam_unix(sshd:session): session closed for user core Jan 23 01:06:26.720949 systemd[1]: sshd@3-10.200.8.21:22-10.200.16.10:48006.service: Deactivated successfully. 
Jan 23 01:06:26.722398 systemd[1]: session-6.scope: Deactivated successfully. Jan 23 01:06:26.723533 systemd-logind[1676]: Session 6 logged out. Waiting for processes to exit. Jan 23 01:06:26.724659 systemd-logind[1676]: Removed session 6. Jan 23 01:06:26.836368 systemd[1]: Started sshd@4-10.200.8.21:22-10.200.16.10:48014.service - OpenSSH per-connection server daemon (10.200.16.10:48014). Jan 23 01:06:27.521805 sshd[2087]: Accepted publickey for core from 10.200.16.10 port 48014 ssh2: RSA SHA256:ZNqmZSbvXn2N0HVppthi7f6PjojzkUQ0aRQFpEvxwIk Jan 23 01:06:27.522737 sshd-session[2087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:06:27.526639 systemd-logind[1676]: New session 7 of user core. Jan 23 01:06:27.531282 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 23 01:06:27.922877 sudo[2091]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 23 01:06:27.923093 sudo[2091]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 01:06:27.939874 sudo[2091]: pam_unix(sudo:session): session closed for user root Jan 23 01:06:28.047833 sshd[2090]: Connection closed by 10.200.16.10 port 48014 Jan 23 01:06:28.049326 sshd-session[2087]: pam_unix(sshd:session): session closed for user core Jan 23 01:06:28.051800 systemd[1]: sshd@4-10.200.8.21:22-10.200.16.10:48014.service: Deactivated successfully. Jan 23 01:06:28.053633 systemd[1]: session-7.scope: Deactivated successfully. Jan 23 01:06:28.055174 systemd-logind[1676]: Session 7 logged out. Waiting for processes to exit. Jan 23 01:06:28.055943 systemd-logind[1676]: Removed session 7. Jan 23 01:06:28.165522 systemd[1]: Started sshd@5-10.200.8.21:22-10.200.16.10:48024.service - OpenSSH per-connection server daemon (10.200.16.10:48024). 
Jan 23 01:06:28.843507 sshd[2097]: Accepted publickey for core from 10.200.16.10 port 48024 ssh2: RSA SHA256:ZNqmZSbvXn2N0HVppthi7f6PjojzkUQ0aRQFpEvxwIk Jan 23 01:06:28.844565 sshd-session[2097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:06:28.848852 systemd-logind[1676]: New session 8 of user core. Jan 23 01:06:28.854275 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 23 01:06:29.210392 sudo[2102]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 23 01:06:29.210591 sudo[2102]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 01:06:29.216870 sudo[2102]: pam_unix(sudo:session): session closed for user root Jan 23 01:06:29.220498 sudo[2101]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 23 01:06:29.220689 sudo[2101]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 01:06:29.227591 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 01:06:29.257006 augenrules[2124]: No rules Jan 23 01:06:29.257431 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 01:06:29.257586 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 01:06:29.258689 sudo[2101]: pam_unix(sudo:session): session closed for user root Jan 23 01:06:29.366334 sshd[2100]: Connection closed by 10.200.16.10 port 48024 Jan 23 01:06:29.366696 sshd-session[2097]: pam_unix(sshd:session): session closed for user core Jan 23 01:06:29.369515 systemd[1]: sshd@5-10.200.8.21:22-10.200.16.10:48024.service: Deactivated successfully. Jan 23 01:06:29.370633 systemd[1]: session-8.scope: Deactivated successfully. Jan 23 01:06:29.372296 systemd-logind[1676]: Session 8 logged out. Waiting for processes to exit. Jan 23 01:06:29.372914 systemd-logind[1676]: Removed session 8. 
Jan 23 01:06:29.485154 systemd[1]: Started sshd@6-10.200.8.21:22-10.200.16.10:48032.service - OpenSSH per-connection server daemon (10.200.16.10:48032). Jan 23 01:06:30.163332 sshd[2133]: Accepted publickey for core from 10.200.16.10 port 48032 ssh2: RSA SHA256:ZNqmZSbvXn2N0HVppthi7f6PjojzkUQ0aRQFpEvxwIk Jan 23 01:06:30.164320 sshd-session[2133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:06:30.165228 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 23 01:06:30.168326 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:06:30.170830 systemd-logind[1676]: New session 9 of user core. Jan 23 01:06:30.175980 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 23 01:06:30.533698 sudo[2140]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 23 01:06:30.533952 sudo[2140]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 01:06:30.662954 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:06:30.672371 (kubelet)[2150]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 01:06:30.703902 kubelet[2150]: E0123 01:06:30.703871 2150 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 01:06:30.704981 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 01:06:30.705069 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 01:06:30.705474 systemd[1]: kubelet.service: Consumed 123ms CPU time, 109.8M memory peak. 
Jan 23 01:06:33.961100 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 23 01:06:33.975423 (dockerd)[2170]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 23 01:06:34.470024 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Jan 23 01:06:35.185799 dockerd[2170]: time="2026-01-23T01:06:35.185750214Z" level=info msg="Starting up" Jan 23 01:06:35.186497 dockerd[2170]: time="2026-01-23T01:06:35.186474309Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 23 01:06:35.195242 dockerd[2170]: time="2026-01-23T01:06:35.195203033Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 23 01:06:35.220124 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport956912608-merged.mount: Deactivated successfully. Jan 23 01:06:35.261273 dockerd[2170]: time="2026-01-23T01:06:35.261241326Z" level=info msg="Loading containers: start." Jan 23 01:06:35.342142 kernel: Initializing XFRM netlink socket Jan 23 01:06:35.578796 systemd-networkd[1477]: docker0: Link UP Jan 23 01:06:35.590675 dockerd[2170]: time="2026-01-23T01:06:35.590645131Z" level=info msg="Loading containers: done." Jan 23 01:06:35.600694 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3742380966-merged.mount: Deactivated successfully. 
Jan 23 01:06:35.618552 dockerd[2170]: time="2026-01-23T01:06:35.618521370Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 23 01:06:35.618664 dockerd[2170]: time="2026-01-23T01:06:35.618587899Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 23 01:06:35.618664 dockerd[2170]: time="2026-01-23T01:06:35.618651828Z" level=info msg="Initializing buildkit" Jan 23 01:06:35.668985 dockerd[2170]: time="2026-01-23T01:06:35.668818528Z" level=info msg="Completed buildkit initialization" Jan 23 01:06:35.671795 dockerd[2170]: time="2026-01-23T01:06:35.671758109Z" level=info msg="Daemon has completed initialization" Jan 23 01:06:35.672192 dockerd[2170]: time="2026-01-23T01:06:35.671875999Z" level=info msg="API listen on /run/docker.sock" Jan 23 01:06:35.671937 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 23 01:06:37.185741 containerd[1689]: time="2026-01-23T01:06:37.185706735Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Jan 23 01:06:38.000733 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3654924476.mount: Deactivated successfully. 
Jan 23 01:06:39.538842 containerd[1689]: time="2026-01-23T01:06:39.538795523Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:06:39.540916 containerd[1689]: time="2026-01-23T01:06:39.540808275Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=27068081" Jan 23 01:06:39.553010 containerd[1689]: time="2026-01-23T01:06:39.552981286Z" level=info msg="ImageCreate event name:\"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:06:39.557275 containerd[1689]: time="2026-01-23T01:06:39.557244270Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:06:39.557987 containerd[1689]: time="2026-01-23T01:06:39.557964365Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"27064672\" in 2.372224933s" Jan 23 01:06:39.558036 containerd[1689]: time="2026-01-23T01:06:39.557999322Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\"" Jan 23 01:06:39.558665 containerd[1689]: time="2026-01-23T01:06:39.558525404Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Jan 23 01:06:40.722506 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. 
Jan 23 01:06:40.724086 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:06:41.234237 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:06:41.241541 (kubelet)[2447]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 01:06:41.288283 kubelet[2447]: E0123 01:06:41.287295 2447 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 01:06:41.291023 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 01:06:41.291161 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 01:06:41.291436 systemd[1]: kubelet.service: Consumed 133ms CPU time, 110.8M memory peak. 
Jan 23 01:06:41.417483 containerd[1689]: time="2026-01-23T01:06:41.417450042Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:06:41.419927 containerd[1689]: time="2026-01-23T01:06:41.419820654Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=21162448" Jan 23 01:06:41.422396 containerd[1689]: time="2026-01-23T01:06:41.422375204Z" level=info msg="ImageCreate event name:\"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:06:41.426407 containerd[1689]: time="2026-01-23T01:06:41.426366722Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:06:41.427070 containerd[1689]: time="2026-01-23T01:06:41.426861069Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"22819474\" in 1.868308633s" Jan 23 01:06:41.427070 containerd[1689]: time="2026-01-23T01:06:41.426890765Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\"" Jan 23 01:06:41.427277 containerd[1689]: time="2026-01-23T01:06:41.427263911Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Jan 23 01:06:42.381228 update_engine[1677]: I20260123 01:06:42.381161 1677 update_attempter.cc:509] Updating boot flags... 
Jan 23 01:06:42.785532 containerd[1689]: time="2026-01-23T01:06:42.785495617Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:06:42.788141 containerd[1689]: time="2026-01-23T01:06:42.788004655Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=15725935" Jan 23 01:06:42.790801 containerd[1689]: time="2026-01-23T01:06:42.790781117Z" level=info msg="ImageCreate event name:\"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:06:42.794437 containerd[1689]: time="2026-01-23T01:06:42.794412683Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:06:42.795362 containerd[1689]: time="2026-01-23T01:06:42.795337063Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"17382979\" in 1.368046178s" Jan 23 01:06:42.795452 containerd[1689]: time="2026-01-23T01:06:42.795440078Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\"" Jan 23 01:06:42.797803 containerd[1689]: time="2026-01-23T01:06:42.797777506Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Jan 23 01:06:43.783844 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2628192350.mount: Deactivated successfully. 
Jan 23 01:06:44.041844 containerd[1689]: time="2026-01-23T01:06:44.041752612Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:06:44.044329 containerd[1689]: time="2026-01-23T01:06:44.044255048Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=25965301" Jan 23 01:06:44.046846 containerd[1689]: time="2026-01-23T01:06:44.046824703Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:06:44.050331 containerd[1689]: time="2026-01-23T01:06:44.049998791Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:06:44.050331 containerd[1689]: time="2026-01-23T01:06:44.050226313Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 1.252413528s" Jan 23 01:06:44.050331 containerd[1689]: time="2026-01-23T01:06:44.050249726Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\"" Jan 23 01:06:44.050759 containerd[1689]: time="2026-01-23T01:06:44.050737744Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Jan 23 01:06:44.577008 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2063976099.mount: Deactivated successfully. 
Jan 23 01:06:45.782959 containerd[1689]: time="2026-01-23T01:06:45.782912220Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:06:45.785177 containerd[1689]: time="2026-01-23T01:06:45.785153721Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388015" Jan 23 01:06:45.787940 containerd[1689]: time="2026-01-23T01:06:45.787814579Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:06:45.791446 containerd[1689]: time="2026-01-23T01:06:45.791405409Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:06:45.792103 containerd[1689]: time="2026-01-23T01:06:45.791991356Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.741229039s" Jan 23 01:06:45.792103 containerd[1689]: time="2026-01-23T01:06:45.792019405Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Jan 23 01:06:45.792588 containerd[1689]: time="2026-01-23T01:06:45.792572292Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Jan 23 01:06:46.317843 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1606679585.mount: Deactivated successfully. 
Jan 23 01:06:46.333621 containerd[1689]: time="2026-01-23T01:06:46.333585236Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:06:46.335937 containerd[1689]: time="2026-01-23T01:06:46.335913111Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321226" Jan 23 01:06:46.338490 containerd[1689]: time="2026-01-23T01:06:46.338455863Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:06:46.341634 containerd[1689]: time="2026-01-23T01:06:46.341598987Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:06:46.342237 containerd[1689]: time="2026-01-23T01:06:46.341960050Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 549.294587ms" Jan 23 01:06:46.342237 containerd[1689]: time="2026-01-23T01:06:46.341989339Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Jan 23 01:06:46.342453 containerd[1689]: time="2026-01-23T01:06:46.342437475Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Jan 23 01:06:46.934876 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2556485679.mount: Deactivated successfully. 
Jan 23 01:06:49.709285 containerd[1689]: time="2026-01-23T01:06:49.709239227Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:06:49.711510 containerd[1689]: time="2026-01-23T01:06:49.711343126Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=74166822" Jan 23 01:06:49.713919 containerd[1689]: time="2026-01-23T01:06:49.713895615Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:06:49.717408 containerd[1689]: time="2026-01-23T01:06:49.717376209Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:06:49.717995 containerd[1689]: time="2026-01-23T01:06:49.717972951Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 3.375509005s" Jan 23 01:06:49.718034 containerd[1689]: time="2026-01-23T01:06:49.718002920Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Jan 23 01:06:51.472583 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 23 01:06:51.473879 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:06:51.889959 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 23 01:06:51.898335 (kubelet)[2638]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 01:06:51.938161 kubelet[2638]: E0123 01:06:51.937255 2638 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 01:06:51.939612 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 01:06:51.939710 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 01:06:51.940172 systemd[1]: kubelet.service: Consumed 142ms CPU time, 109.2M memory peak. Jan 23 01:06:52.389831 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:06:52.389955 systemd[1]: kubelet.service: Consumed 142ms CPU time, 109.2M memory peak. Jan 23 01:06:52.391668 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:06:52.414806 systemd[1]: Reload requested from client PID 2655 ('systemctl') (unit session-9.scope)... Jan 23 01:06:52.414909 systemd[1]: Reloading... Jan 23 01:06:52.508158 zram_generator::config[2714]: No configuration found. Jan 23 01:06:52.711776 systemd[1]: Reloading finished in 296 ms. Jan 23 01:06:52.738115 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 23 01:06:52.738225 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 23 01:06:52.738573 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:06:52.738637 systemd[1]: kubelet.service: Consumed 65ms CPU time, 71.6M memory peak. Jan 23 01:06:52.742307 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 23 01:06:53.896091 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:06:53.901510 (kubelet)[2769]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 01:06:53.936982 kubelet[2769]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 01:06:53.936982 kubelet[2769]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 01:06:53.936982 kubelet[2769]: I0123 01:06:53.936422 2769 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 01:06:54.539263 kubelet[2769]: I0123 01:06:54.539163 2769 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 23 01:06:54.539263 kubelet[2769]: I0123 01:06:54.539186 2769 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 01:06:54.540184 kubelet[2769]: I0123 01:06:54.540114 2769 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 23 01:06:54.540184 kubelet[2769]: I0123 01:06:54.540146 2769 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 23 01:06:54.540355 kubelet[2769]: I0123 01:06:54.540340 2769 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 01:06:54.548037 kubelet[2769]: E0123 01:06:54.548006 2769 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.8.21:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.21:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 23 01:06:54.548384 kubelet[2769]: I0123 01:06:54.548364 2769 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 01:06:54.551285 kubelet[2769]: I0123 01:06:54.551269 2769 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 01:06:54.553117 kubelet[2769]: I0123 01:06:54.553099 2769 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 23 01:06:54.553293 kubelet[2769]: I0123 01:06:54.553273 2769 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 01:06:54.553415 kubelet[2769]: I0123 01:06:54.553293 2769 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.2-n-059e17308a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 01:06:54.553524 kubelet[2769]: I0123 01:06:54.553419 2769 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 
01:06:54.553524 kubelet[2769]: I0123 01:06:54.553427 2769 container_manager_linux.go:306] "Creating device plugin manager" Jan 23 01:06:54.553524 kubelet[2769]: I0123 01:06:54.553500 2769 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 23 01:06:54.557476 kubelet[2769]: I0123 01:06:54.557461 2769 state_mem.go:36] "Initialized new in-memory state store" Jan 23 01:06:54.557598 kubelet[2769]: I0123 01:06:54.557588 2769 kubelet.go:475] "Attempting to sync node with API server" Jan 23 01:06:54.557625 kubelet[2769]: I0123 01:06:54.557603 2769 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 01:06:54.557625 kubelet[2769]: I0123 01:06:54.557623 2769 kubelet.go:387] "Adding apiserver pod source" Jan 23 01:06:54.557677 kubelet[2769]: I0123 01:06:54.557638 2769 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 01:06:54.562155 kubelet[2769]: E0123 01:06:54.560766 2769 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.8.21:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 01:06:54.562155 kubelet[2769]: E0123 01:06:54.560845 2769 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.8.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.2-n-059e17308a&limit=500&resourceVersion=0\": dial tcp 10.200.8.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 01:06:54.562155 kubelet[2769]: I0123 01:06:54.561280 2769 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 01:06:54.562155 kubelet[2769]: I0123 01:06:54.561703 2769 kubelet.go:940] "Not 
starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 01:06:54.562155 kubelet[2769]: I0123 01:06:54.561729 2769 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 23 01:06:54.562155 kubelet[2769]: W0123 01:06:54.561768 2769 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 23 01:06:54.565041 kubelet[2769]: I0123 01:06:54.565028 2769 server.go:1262] "Started kubelet" Jan 23 01:06:54.570349 kubelet[2769]: I0123 01:06:54.570247 2769 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 01:06:54.572470 kubelet[2769]: E0123 01:06:54.571346 2769 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.21:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.21:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.2.2-n-059e17308a.188d36bd217c7ef9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.2.2-n-059e17308a,UID:ci-4459.2.2-n-059e17308a,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.2.2-n-059e17308a,},FirstTimestamp:2026-01-23 01:06:54.564998905 +0000 UTC m=+0.659786740,LastTimestamp:2026-01-23 01:06:54.564998905 +0000 UTC m=+0.659786740,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.2.2-n-059e17308a,}" Jan 23 01:06:54.575527 kubelet[2769]: I0123 01:06:54.575512 2769 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 23 01:06:54.576293 kubelet[2769]: I0123 01:06:54.575630 2769 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 
23 01:06:54.576293 kubelet[2769]: E0123 01:06:54.575747 2769 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459.2.2-n-059e17308a\" not found" Jan 23 01:06:54.576376 kubelet[2769]: I0123 01:06:54.576328 2769 reconciler.go:29] "Reconciler: start to sync state" Jan 23 01:06:54.576641 kubelet[2769]: I0123 01:06:54.576272 2769 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 01:06:54.577480 kubelet[2769]: I0123 01:06:54.577249 2769 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 01:06:54.577480 kubelet[2769]: I0123 01:06:54.577306 2769 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 23 01:06:54.577555 kubelet[2769]: I0123 01:06:54.577525 2769 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 01:06:54.578330 kubelet[2769]: I0123 01:06:54.578316 2769 server.go:310] "Adding debug handlers to kubelet server" Jan 23 01:06:54.578902 kubelet[2769]: I0123 01:06:54.578884 2769 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 01:06:54.579335 kubelet[2769]: E0123 01:06:54.579317 2769 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.8.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 01:06:54.579460 kubelet[2769]: E0123 01:06:54.579439 2769 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-n-059e17308a?timeout=10s\": dial tcp 10.200.8.21:6443: connect: connection refused" interval="200ms" Jan 23 01:06:54.579960 
kubelet[2769]: I0123 01:06:54.579942 2769 factory.go:223] Registration of the systemd container factory successfully Jan 23 01:06:54.580011 kubelet[2769]: I0123 01:06:54.580001 2769 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 01:06:54.581473 kubelet[2769]: I0123 01:06:54.581452 2769 factory.go:223] Registration of the containerd container factory successfully Jan 23 01:06:54.596306 kubelet[2769]: I0123 01:06:54.596289 2769 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 23 01:06:54.597083 kubelet[2769]: I0123 01:06:54.597071 2769 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Jan 23 01:06:54.597156 kubelet[2769]: I0123 01:06:54.597150 2769 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 23 01:06:54.597198 kubelet[2769]: I0123 01:06:54.597195 2769 kubelet.go:2427] "Starting kubelet main sync loop" Jan 23 01:06:54.597250 kubelet[2769]: E0123 01:06:54.597242 2769 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 01:06:54.602083 kubelet[2769]: E0123 01:06:54.602062 2769 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.8.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 01:06:54.604383 kubelet[2769]: E0123 01:06:54.604365 2769 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 01:06:54.607685 kubelet[2769]: I0123 01:06:54.607672 2769 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 01:06:54.607685 kubelet[2769]: I0123 01:06:54.607684 2769 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 01:06:54.607772 kubelet[2769]: I0123 01:06:54.607697 2769 state_mem.go:36] "Initialized new in-memory state store" Jan 23 01:06:54.611666 kubelet[2769]: I0123 01:06:54.611657 2769 policy_none.go:49] "None policy: Start" Jan 23 01:06:54.611791 kubelet[2769]: I0123 01:06:54.611722 2769 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 23 01:06:54.611791 kubelet[2769]: I0123 01:06:54.611730 2769 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 23 01:06:54.615073 kubelet[2769]: I0123 01:06:54.615067 2769 policy_none.go:47] "Start" Jan 23 01:06:54.618161 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 23 01:06:54.626735 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 23 01:06:54.629028 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 23 01:06:54.636616 kubelet[2769]: E0123 01:06:54.636596 2769 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 01:06:54.637097 kubelet[2769]: I0123 01:06:54.637064 2769 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 01:06:54.637240 kubelet[2769]: I0123 01:06:54.637077 2769 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 01:06:54.637406 kubelet[2769]: I0123 01:06:54.637353 2769 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 01:06:54.638347 kubelet[2769]: E0123 01:06:54.638331 2769 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 23 01:06:54.638399 kubelet[2769]: E0123 01:06:54.638370 2769 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459.2.2-n-059e17308a\" not found" Jan 23 01:06:54.707090 systemd[1]: Created slice kubepods-burstable-pod787b332d1014434156941933d7379471.slice - libcontainer container kubepods-burstable-pod787b332d1014434156941933d7379471.slice. Jan 23 01:06:54.712698 kubelet[2769]: E0123 01:06:54.712651 2769 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-n-059e17308a\" not found" node="ci-4459.2.2-n-059e17308a" Jan 23 01:06:54.716434 systemd[1]: Created slice kubepods-burstable-pod78b22c2b242e0fd3123ad0b6fa47b329.slice - libcontainer container kubepods-burstable-pod78b22c2b242e0fd3123ad0b6fa47b329.slice. 
Jan 23 01:06:54.731952 kubelet[2769]: E0123 01:06:54.731937 2769 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-n-059e17308a\" not found" node="ci-4459.2.2-n-059e17308a" Jan 23 01:06:54.733838 systemd[1]: Created slice kubepods-burstable-pod092d25410122a8a59f99618aa9da61af.slice - libcontainer container kubepods-burstable-pod092d25410122a8a59f99618aa9da61af.slice. Jan 23 01:06:54.735625 kubelet[2769]: E0123 01:06:54.735606 2769 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-n-059e17308a\" not found" node="ci-4459.2.2-n-059e17308a" Jan 23 01:06:54.738622 kubelet[2769]: I0123 01:06:54.738607 2769 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-n-059e17308a" Jan 23 01:06:54.738853 kubelet[2769]: E0123 01:06:54.738834 2769 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.21:6443/api/v1/nodes\": dial tcp 10.200.8.21:6443: connect: connection refused" node="ci-4459.2.2-n-059e17308a" Jan 23 01:06:54.780323 kubelet[2769]: E0123 01:06:54.780299 2769 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-n-059e17308a?timeout=10s\": dial tcp 10.200.8.21:6443: connect: connection refused" interval="400ms" Jan 23 01:06:54.877715 kubelet[2769]: I0123 01:06:54.877577 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/787b332d1014434156941933d7379471-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.2-n-059e17308a\" (UID: \"787b332d1014434156941933d7379471\") " pod="kube-system/kube-apiserver-ci-4459.2.2-n-059e17308a" Jan 23 01:06:54.877715 kubelet[2769]: I0123 01:06:54.877621 2769 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/78b22c2b242e0fd3123ad0b6fa47b329-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.2-n-059e17308a\" (UID: \"78b22c2b242e0fd3123ad0b6fa47b329\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-n-059e17308a" Jan 23 01:06:54.877715 kubelet[2769]: I0123 01:06:54.877647 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/78b22c2b242e0fd3123ad0b6fa47b329-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.2-n-059e17308a\" (UID: \"78b22c2b242e0fd3123ad0b6fa47b329\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-n-059e17308a" Jan 23 01:06:54.877715 kubelet[2769]: I0123 01:06:54.877669 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/78b22c2b242e0fd3123ad0b6fa47b329-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.2-n-059e17308a\" (UID: \"78b22c2b242e0fd3123ad0b6fa47b329\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-n-059e17308a" Jan 23 01:06:54.877873 kubelet[2769]: I0123 01:06:54.877787 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/78b22c2b242e0fd3123ad0b6fa47b329-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.2-n-059e17308a\" (UID: \"78b22c2b242e0fd3123ad0b6fa47b329\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-n-059e17308a" Jan 23 01:06:54.877873 kubelet[2769]: I0123 01:06:54.877805 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/78b22c2b242e0fd3123ad0b6fa47b329-ca-certs\") pod \"kube-controller-manager-ci-4459.2.2-n-059e17308a\" (UID: 
\"78b22c2b242e0fd3123ad0b6fa47b329\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-n-059e17308a" Jan 23 01:06:54.877873 kubelet[2769]: I0123 01:06:54.877824 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/092d25410122a8a59f99618aa9da61af-kubeconfig\") pod \"kube-scheduler-ci-4459.2.2-n-059e17308a\" (UID: \"092d25410122a8a59f99618aa9da61af\") " pod="kube-system/kube-scheduler-ci-4459.2.2-n-059e17308a" Jan 23 01:06:54.878215 kubelet[2769]: I0123 01:06:54.878151 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/787b332d1014434156941933d7379471-ca-certs\") pod \"kube-apiserver-ci-4459.2.2-n-059e17308a\" (UID: \"787b332d1014434156941933d7379471\") " pod="kube-system/kube-apiserver-ci-4459.2.2-n-059e17308a" Jan 23 01:06:54.878215 kubelet[2769]: I0123 01:06:54.878172 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/787b332d1014434156941933d7379471-k8s-certs\") pod \"kube-apiserver-ci-4459.2.2-n-059e17308a\" (UID: \"787b332d1014434156941933d7379471\") " pod="kube-system/kube-apiserver-ci-4459.2.2-n-059e17308a" Jan 23 01:06:54.940680 kubelet[2769]: I0123 01:06:54.940661 2769 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-n-059e17308a" Jan 23 01:06:54.941013 kubelet[2769]: E0123 01:06:54.940896 2769 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.21:6443/api/v1/nodes\": dial tcp 10.200.8.21:6443: connect: connection refused" node="ci-4459.2.2-n-059e17308a" Jan 23 01:06:55.021025 containerd[1689]: time="2026-01-23T01:06:55.019020790Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.2-n-059e17308a,Uid:787b332d1014434156941933d7379471,Namespace:kube-system,Attempt:0,}" Jan 23 01:06:55.036175 containerd[1689]: time="2026-01-23T01:06:55.036122311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.2-n-059e17308a,Uid:78b22c2b242e0fd3123ad0b6fa47b329,Namespace:kube-system,Attempt:0,}" Jan 23 01:06:55.039771 containerd[1689]: time="2026-01-23T01:06:55.039704896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.2-n-059e17308a,Uid:092d25410122a8a59f99618aa9da61af,Namespace:kube-system,Attempt:0,}" Jan 23 01:06:55.181068 kubelet[2769]: E0123 01:06:55.180992 2769 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-n-059e17308a?timeout=10s\": dial tcp 10.200.8.21:6443: connect: connection refused" interval="800ms" Jan 23 01:06:55.342771 kubelet[2769]: I0123 01:06:55.342748 2769 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-n-059e17308a" Jan 23 01:06:55.342996 kubelet[2769]: E0123 01:06:55.342978 2769 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.21:6443/api/v1/nodes\": dial tcp 10.200.8.21:6443: connect: connection refused" node="ci-4459.2.2-n-059e17308a" Jan 23 01:06:55.398654 kubelet[2769]: E0123 01:06:55.398630 2769 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.8.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.2-n-059e17308a&limit=500&resourceVersion=0\": dial tcp 10.200.8.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 01:06:55.407947 kubelet[2769]: E0123 01:06:55.407924 2769 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get 
\"https://10.200.8.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 01:06:55.501023 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1861053634.mount: Deactivated successfully. Jan 23 01:06:55.518744 containerd[1689]: time="2026-01-23T01:06:55.518711036Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 01:06:55.527881 containerd[1689]: time="2026-01-23T01:06:55.527799836Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Jan 23 01:06:55.530318 containerd[1689]: time="2026-01-23T01:06:55.530293000Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 01:06:55.533014 containerd[1689]: time="2026-01-23T01:06:55.532987769Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 01:06:55.537953 containerd[1689]: time="2026-01-23T01:06:55.537399454Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 23 01:06:55.540935 containerd[1689]: time="2026-01-23T01:06:55.540913278Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 01:06:55.544269 containerd[1689]: time="2026-01-23T01:06:55.544241199Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 01:06:55.544668 containerd[1689]: time="2026-01-23T01:06:55.544647205Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 500.593825ms" Jan 23 01:06:55.546244 containerd[1689]: time="2026-01-23T01:06:55.546221753Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 23 01:06:55.549022 containerd[1689]: time="2026-01-23T01:06:55.548997235Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 526.558717ms" Jan 23 01:06:55.566293 containerd[1689]: time="2026-01-23T01:06:55.566266402Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 526.940374ms" Jan 23 01:06:55.582282 containerd[1689]: time="2026-01-23T01:06:55.582243240Z" level=info msg="connecting to shim b674cadfdaa77735c250e6cffa9c8ea9b18c795745f2755461d28ee09f796531" address="unix:///run/containerd/s/f8a42727f06d97d7e413257d04671abbe6ea502e28ec7ab357666b5d8e6993c2" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:06:55.602145 
containerd[1689]: time="2026-01-23T01:06:55.601876405Z" level=info msg="connecting to shim 79713b07b86118ddbb8a64268429bf3ee92ccf3844fb91f5368c988e6d4848e1" address="unix:///run/containerd/s/a12a7c31c18ed13b5728cc08c9601bee0cc116e49bc1befdd9a5f2e67193913d" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:06:55.602343 systemd[1]: Started cri-containerd-b674cadfdaa77735c250e6cffa9c8ea9b18c795745f2755461d28ee09f796531.scope - libcontainer container b674cadfdaa77735c250e6cffa9c8ea9b18c795745f2755461d28ee09f796531. Jan 23 01:06:55.622281 containerd[1689]: time="2026-01-23T01:06:55.622172568Z" level=info msg="connecting to shim e806a0102e7a1671605f50b190d74231830b719a1853ab902fda7f66174e023a" address="unix:///run/containerd/s/3d53565ef7cd8bb2841e37b64495ac4c5e57b3d9218c23aa1e78792cc7950790" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:06:55.636442 systemd[1]: Started cri-containerd-79713b07b86118ddbb8a64268429bf3ee92ccf3844fb91f5368c988e6d4848e1.scope - libcontainer container 79713b07b86118ddbb8a64268429bf3ee92ccf3844fb91f5368c988e6d4848e1. Jan 23 01:06:55.641141 systemd[1]: Started cri-containerd-e806a0102e7a1671605f50b190d74231830b719a1853ab902fda7f66174e023a.scope - libcontainer container e806a0102e7a1671605f50b190d74231830b719a1853ab902fda7f66174e023a. 
Jan 23 01:06:55.670146 containerd[1689]: time="2026-01-23T01:06:55.668724813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.2-n-059e17308a,Uid:092d25410122a8a59f99618aa9da61af,Namespace:kube-system,Attempt:0,} returns sandbox id \"b674cadfdaa77735c250e6cffa9c8ea9b18c795745f2755461d28ee09f796531\"" Jan 23 01:06:55.679173 containerd[1689]: time="2026-01-23T01:06:55.679153252Z" level=info msg="CreateContainer within sandbox \"b674cadfdaa77735c250e6cffa9c8ea9b18c795745f2755461d28ee09f796531\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 23 01:06:55.703148 containerd[1689]: time="2026-01-23T01:06:55.703063012Z" level=info msg="Container 312d58a7c47678f7fc863995e5045147dd6ac628368423df2ea080931dc15626: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:06:55.705449 containerd[1689]: time="2026-01-23T01:06:55.705422132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.2-n-059e17308a,Uid:787b332d1014434156941933d7379471,Namespace:kube-system,Attempt:0,} returns sandbox id \"79713b07b86118ddbb8a64268429bf3ee92ccf3844fb91f5368c988e6d4848e1\"" Jan 23 01:06:55.711178 containerd[1689]: time="2026-01-23T01:06:55.711146060Z" level=info msg="CreateContainer within sandbox \"79713b07b86118ddbb8a64268429bf3ee92ccf3844fb91f5368c988e6d4848e1\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 23 01:06:55.713470 containerd[1689]: time="2026-01-23T01:06:55.713448509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.2-n-059e17308a,Uid:78b22c2b242e0fd3123ad0b6fa47b329,Namespace:kube-system,Attempt:0,} returns sandbox id \"e806a0102e7a1671605f50b190d74231830b719a1853ab902fda7f66174e023a\"" Jan 23 01:06:55.715322 containerd[1689]: time="2026-01-23T01:06:55.715299020Z" level=info msg="CreateContainer within sandbox \"b674cadfdaa77735c250e6cffa9c8ea9b18c795745f2755461d28ee09f796531\" for 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"312d58a7c47678f7fc863995e5045147dd6ac628368423df2ea080931dc15626\"" Jan 23 01:06:55.715719 containerd[1689]: time="2026-01-23T01:06:55.715702599Z" level=info msg="StartContainer for \"312d58a7c47678f7fc863995e5045147dd6ac628368423df2ea080931dc15626\"" Jan 23 01:06:55.718050 containerd[1689]: time="2026-01-23T01:06:55.717769217Z" level=info msg="connecting to shim 312d58a7c47678f7fc863995e5045147dd6ac628368423df2ea080931dc15626" address="unix:///run/containerd/s/f8a42727f06d97d7e413257d04671abbe6ea502e28ec7ab357666b5d8e6993c2" protocol=ttrpc version=3 Jan 23 01:06:55.719334 containerd[1689]: time="2026-01-23T01:06:55.719310222Z" level=info msg="CreateContainer within sandbox \"e806a0102e7a1671605f50b190d74231830b719a1853ab902fda7f66174e023a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 23 01:06:55.729789 containerd[1689]: time="2026-01-23T01:06:55.729771242Z" level=info msg="Container eeec419034f8041bd7fb8dff151992ce29c93213daaeca64042e527b89ec7a11: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:06:55.731270 systemd[1]: Started cri-containerd-312d58a7c47678f7fc863995e5045147dd6ac628368423df2ea080931dc15626.scope - libcontainer container 312d58a7c47678f7fc863995e5045147dd6ac628368423df2ea080931dc15626. 
Jan 23 01:06:55.748290 containerd[1689]: time="2026-01-23T01:06:55.748052968Z" level=info msg="CreateContainer within sandbox \"79713b07b86118ddbb8a64268429bf3ee92ccf3844fb91f5368c988e6d4848e1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"eeec419034f8041bd7fb8dff151992ce29c93213daaeca64042e527b89ec7a11\"" Jan 23 01:06:55.749508 containerd[1689]: time="2026-01-23T01:06:55.749420370Z" level=info msg="StartContainer for \"eeec419034f8041bd7fb8dff151992ce29c93213daaeca64042e527b89ec7a11\"" Jan 23 01:06:55.750640 containerd[1689]: time="2026-01-23T01:06:55.750531183Z" level=info msg="connecting to shim eeec419034f8041bd7fb8dff151992ce29c93213daaeca64042e527b89ec7a11" address="unix:///run/containerd/s/a12a7c31c18ed13b5728cc08c9601bee0cc116e49bc1befdd9a5f2e67193913d" protocol=ttrpc version=3 Jan 23 01:06:55.754326 containerd[1689]: time="2026-01-23T01:06:55.754269345Z" level=info msg="Container a9842d711b1a69c1639e8ff3e78213be372210db811835a198074f8a24c16658: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:06:55.769916 containerd[1689]: time="2026-01-23T01:06:55.769597229Z" level=info msg="CreateContainer within sandbox \"e806a0102e7a1671605f50b190d74231830b719a1853ab902fda7f66174e023a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a9842d711b1a69c1639e8ff3e78213be372210db811835a198074f8a24c16658\"" Jan 23 01:06:55.770110 containerd[1689]: time="2026-01-23T01:06:55.770089313Z" level=info msg="StartContainer for \"a9842d711b1a69c1639e8ff3e78213be372210db811835a198074f8a24c16658\"" Jan 23 01:06:55.771660 containerd[1689]: time="2026-01-23T01:06:55.771336420Z" level=info msg="connecting to shim a9842d711b1a69c1639e8ff3e78213be372210db811835a198074f8a24c16658" address="unix:///run/containerd/s/3d53565ef7cd8bb2841e37b64495ac4c5e57b3d9218c23aa1e78792cc7950790" protocol=ttrpc version=3 Jan 23 01:06:55.771489 systemd[1]: Started 
cri-containerd-eeec419034f8041bd7fb8dff151992ce29c93213daaeca64042e527b89ec7a11.scope - libcontainer container eeec419034f8041bd7fb8dff151992ce29c93213daaeca64042e527b89ec7a11. Jan 23 01:06:55.788756 containerd[1689]: time="2026-01-23T01:06:55.788711847Z" level=info msg="StartContainer for \"312d58a7c47678f7fc863995e5045147dd6ac628368423df2ea080931dc15626\" returns successfully" Jan 23 01:06:55.800325 systemd[1]: Started cri-containerd-a9842d711b1a69c1639e8ff3e78213be372210db811835a198074f8a24c16658.scope - libcontainer container a9842d711b1a69c1639e8ff3e78213be372210db811835a198074f8a24c16658. Jan 23 01:06:55.841559 containerd[1689]: time="2026-01-23T01:06:55.841536311Z" level=info msg="StartContainer for \"eeec419034f8041bd7fb8dff151992ce29c93213daaeca64042e527b89ec7a11\" returns successfully" Jan 23 01:06:55.866569 containerd[1689]: time="2026-01-23T01:06:55.866549720Z" level=info msg="StartContainer for \"a9842d711b1a69c1639e8ff3e78213be372210db811835a198074f8a24c16658\" returns successfully" Jan 23 01:06:56.147150 kubelet[2769]: I0123 01:06:56.145305 2769 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-n-059e17308a" Jan 23 01:06:56.613430 kubelet[2769]: E0123 01:06:56.613234 2769 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-n-059e17308a\" not found" node="ci-4459.2.2-n-059e17308a" Jan 23 01:06:56.617226 kubelet[2769]: E0123 01:06:56.616775 2769 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-n-059e17308a\" not found" node="ci-4459.2.2-n-059e17308a" Jan 23 01:06:56.621443 kubelet[2769]: E0123 01:06:56.621301 2769 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-n-059e17308a\" not found" node="ci-4459.2.2-n-059e17308a" Jan 23 01:06:57.558014 kubelet[2769]: I0123 01:06:57.557973 2769 kubelet_node_status.go:78] 
"Successfully registered node" node="ci-4459.2.2-n-059e17308a" Jan 23 01:06:57.558014 kubelet[2769]: E0123 01:06:57.558015 2769 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ci-4459.2.2-n-059e17308a\": node \"ci-4459.2.2-n-059e17308a\" not found" Jan 23 01:06:57.590410 kubelet[2769]: E0123 01:06:57.590374 2769 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459.2.2-n-059e17308a\" not found" Jan 23 01:06:57.623348 kubelet[2769]: E0123 01:06:57.623325 2769 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-n-059e17308a\" not found" node="ci-4459.2.2-n-059e17308a" Jan 23 01:06:57.623505 kubelet[2769]: E0123 01:06:57.623495 2769 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-n-059e17308a\" not found" node="ci-4459.2.2-n-059e17308a" Jan 23 01:06:57.676616 kubelet[2769]: I0123 01:06:57.676585 2769 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.2-n-059e17308a" Jan 23 01:06:57.683008 kubelet[2769]: E0123 01:06:57.682975 2769 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.2-n-059e17308a\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459.2.2-n-059e17308a" Jan 23 01:06:57.683922 kubelet[2769]: I0123 01:06:57.683896 2769 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.2-n-059e17308a" Jan 23 01:06:57.685292 kubelet[2769]: E0123 01:06:57.685227 2769 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.2.2-n-059e17308a\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459.2.2-n-059e17308a" Jan 23 01:06:57.685292 kubelet[2769]: I0123 
01:06:57.685258 2769 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.2-n-059e17308a" Jan 23 01:06:57.686496 kubelet[2769]: E0123 01:06:57.686449 2769 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.2-n-059e17308a\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459.2.2-n-059e17308a" Jan 23 01:06:58.563209 kubelet[2769]: I0123 01:06:58.563170 2769 apiserver.go:52] "Watching apiserver" Jan 23 01:06:58.577192 kubelet[2769]: I0123 01:06:58.577165 2769 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 23 01:06:59.847257 systemd[1]: Reload requested from client PID 3053 ('systemctl') (unit session-9.scope)... Jan 23 01:06:59.847272 systemd[1]: Reloading... Jan 23 01:06:59.926154 zram_generator::config[3100]: No configuration found. Jan 23 01:07:00.111935 systemd[1]: Reloading finished in 264 ms. Jan 23 01:07:00.139743 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:07:00.156014 systemd[1]: kubelet.service: Deactivated successfully. Jan 23 01:07:00.156283 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:07:00.156331 systemd[1]: kubelet.service: Consumed 934ms CPU time, 122.5M memory peak. Jan 23 01:07:00.157755 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:07:01.468972 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:07:01.477376 (kubelet)[3167]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 01:07:01.518151 kubelet[3167]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Jan 23 01:07:01.518151 kubelet[3167]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 01:07:01.518151 kubelet[3167]: I0123 01:07:01.517621 3167 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 01:07:01.521857 kubelet[3167]: I0123 01:07:01.521841 3167 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 23 01:07:01.521941 kubelet[3167]: I0123 01:07:01.521934 3167 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 01:07:01.521981 kubelet[3167]: I0123 01:07:01.521978 3167 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 23 01:07:01.522009 kubelet[3167]: I0123 01:07:01.522004 3167 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 23 01:07:01.522207 kubelet[3167]: I0123 01:07:01.522202 3167 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 01:07:01.523242 kubelet[3167]: I0123 01:07:01.523218 3167 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 23 01:07:01.525954 kubelet[3167]: I0123 01:07:01.525934 3167 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 01:07:01.530655 kubelet[3167]: I0123 01:07:01.530642 3167 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 01:07:01.532944 kubelet[3167]: I0123 01:07:01.532927 3167 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 23 01:07:01.533098 kubelet[3167]: I0123 01:07:01.533082 3167 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 01:07:01.533229 kubelet[3167]: I0123 01:07:01.533098 3167 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.2-n-059e17308a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 01:07:01.533314 kubelet[3167]: I0123 01:07:01.533233 3167 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 
01:07:01.533314 kubelet[3167]: I0123 01:07:01.533241 3167 container_manager_linux.go:306] "Creating device plugin manager" Jan 23 01:07:01.533314 kubelet[3167]: I0123 01:07:01.533259 3167 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 23 01:07:01.534163 kubelet[3167]: I0123 01:07:01.533807 3167 state_mem.go:36] "Initialized new in-memory state store" Jan 23 01:07:01.534163 kubelet[3167]: I0123 01:07:01.533894 3167 kubelet.go:475] "Attempting to sync node with API server" Jan 23 01:07:01.534163 kubelet[3167]: I0123 01:07:01.533901 3167 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 01:07:01.534163 kubelet[3167]: I0123 01:07:01.533917 3167 kubelet.go:387] "Adding apiserver pod source" Jan 23 01:07:01.534163 kubelet[3167]: I0123 01:07:01.533928 3167 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 01:07:01.539086 kubelet[3167]: I0123 01:07:01.539034 3167 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 01:07:01.541081 kubelet[3167]: I0123 01:07:01.540981 3167 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 01:07:01.541384 kubelet[3167]: I0123 01:07:01.541374 3167 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 23 01:07:01.544860 kubelet[3167]: I0123 01:07:01.544850 3167 server.go:1262] "Started kubelet" Jan 23 01:07:01.545462 kubelet[3167]: I0123 01:07:01.545443 3167 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 01:07:01.545528 kubelet[3167]: I0123 01:07:01.545476 3167 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 23 01:07:01.545657 kubelet[3167]: I0123 01:07:01.545646 3167 
server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 01:07:01.545707 kubelet[3167]: I0123 01:07:01.545696 3167 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 01:07:01.545775 kubelet[3167]: I0123 01:07:01.545770 3167 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 01:07:01.549252 kubelet[3167]: I0123 01:07:01.549236 3167 server.go:310] "Adding debug handlers to kubelet server" Jan 23 01:07:01.555216 kubelet[3167]: I0123 01:07:01.555200 3167 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 01:07:01.556971 kubelet[3167]: I0123 01:07:01.556954 3167 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 23 01:07:01.557046 kubelet[3167]: I0123 01:07:01.557036 3167 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 23 01:07:01.557167 kubelet[3167]: I0123 01:07:01.557157 3167 reconciler.go:29] "Reconciler: start to sync state" Jan 23 01:07:01.558175 kubelet[3167]: I0123 01:07:01.558159 3167 factory.go:223] Registration of the systemd container factory successfully Jan 23 01:07:01.558253 kubelet[3167]: I0123 01:07:01.558226 3167 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 01:07:01.560049 kubelet[3167]: E0123 01:07:01.560017 3167 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 01:07:01.566965 kubelet[3167]: I0123 01:07:01.566522 3167 factory.go:223] Registration of the containerd container factory successfully Jan 23 01:07:01.570933 kubelet[3167]: I0123 01:07:01.570769 3167 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Jan 23 01:07:01.571947 kubelet[3167]: I0123 01:07:01.571923 3167 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Jan 23 01:07:01.571947 kubelet[3167]: I0123 01:07:01.571948 3167 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 23 01:07:01.572031 kubelet[3167]: I0123 01:07:01.571964 3167 kubelet.go:2427] "Starting kubelet main sync loop" Jan 23 01:07:01.572031 kubelet[3167]: E0123 01:07:01.572004 3167 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 01:07:01.622449 kubelet[3167]: I0123 01:07:01.622438 3167 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 01:07:01.622577 kubelet[3167]: I0123 01:07:01.622531 3167 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 01:07:01.622577 kubelet[3167]: I0123 01:07:01.622546 3167 state_mem.go:36] "Initialized new in-memory state store" Jan 23 01:07:01.623200 kubelet[3167]: I0123 01:07:01.622990 3167 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 23 01:07:01.623200 kubelet[3167]: I0123 01:07:01.623000 3167 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 23 01:07:01.623200 kubelet[3167]: I0123 01:07:01.623013 3167 policy_none.go:49] "None policy: Start" Jan 23 01:07:01.623200 kubelet[3167]: I0123 01:07:01.623023 3167 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 23 01:07:01.623200 kubelet[3167]: I0123 01:07:01.623032 3167 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 23 01:07:01.623200 kubelet[3167]: I0123 01:07:01.623108 3167 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Jan 23 01:07:01.623200 kubelet[3167]: I0123 01:07:01.623114 3167 policy_none.go:47] "Start" Jan 23 01:07:01.626941 kubelet[3167]: E0123 01:07:01.626924 3167 manager.go:513] "Failed to read data from 
checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 01:07:01.627043 kubelet[3167]: I0123 01:07:01.627034 3167 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 01:07:01.627673 kubelet[3167]: I0123 01:07:01.627047 3167 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 01:07:01.628618 kubelet[3167]: I0123 01:07:01.628041 3167 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 01:07:01.631024 kubelet[3167]: E0123 01:07:01.630009 3167 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 23 01:07:01.673651 kubelet[3167]: I0123 01:07:01.672752 3167 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.2-n-059e17308a" Jan 23 01:07:01.673651 kubelet[3167]: I0123 01:07:01.673027 3167 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.2-n-059e17308a" Jan 23 01:07:01.674959 kubelet[3167]: I0123 01:07:01.674884 3167 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.2-n-059e17308a" Jan 23 01:07:01.682020 kubelet[3167]: I0123 01:07:01.681114 3167 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 23 01:07:01.684646 kubelet[3167]: I0123 01:07:01.684627 3167 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 23 01:07:01.685924 kubelet[3167]: I0123 01:07:01.685108 3167 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 23 
01:07:01.730507 kubelet[3167]: I0123 01:07:01.730382 3167 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-n-059e17308a" Jan 23 01:07:01.745672 kubelet[3167]: I0123 01:07:01.745118 3167 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459.2.2-n-059e17308a" Jan 23 01:07:01.745672 kubelet[3167]: I0123 01:07:01.745181 3167 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.2-n-059e17308a" Jan 23 01:07:01.757482 kubelet[3167]: I0123 01:07:01.757456 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/787b332d1014434156941933d7379471-ca-certs\") pod \"kube-apiserver-ci-4459.2.2-n-059e17308a\" (UID: \"787b332d1014434156941933d7379471\") " pod="kube-system/kube-apiserver-ci-4459.2.2-n-059e17308a" Jan 23 01:07:01.757548 kubelet[3167]: I0123 01:07:01.757481 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/787b332d1014434156941933d7379471-k8s-certs\") pod \"kube-apiserver-ci-4459.2.2-n-059e17308a\" (UID: \"787b332d1014434156941933d7379471\") " pod="kube-system/kube-apiserver-ci-4459.2.2-n-059e17308a" Jan 23 01:07:01.857996 kubelet[3167]: I0123 01:07:01.857915 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/78b22c2b242e0fd3123ad0b6fa47b329-ca-certs\") pod \"kube-controller-manager-ci-4459.2.2-n-059e17308a\" (UID: \"78b22c2b242e0fd3123ad0b6fa47b329\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-n-059e17308a" Jan 23 01:07:01.858149 kubelet[3167]: I0123 01:07:01.858097 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/78b22c2b242e0fd3123ad0b6fa47b329-k8s-certs\") pod 
\"kube-controller-manager-ci-4459.2.2-n-059e17308a\" (UID: \"78b22c2b242e0fd3123ad0b6fa47b329\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-n-059e17308a" Jan 23 01:07:01.858202 kubelet[3167]: I0123 01:07:01.858122 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/78b22c2b242e0fd3123ad0b6fa47b329-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.2-n-059e17308a\" (UID: \"78b22c2b242e0fd3123ad0b6fa47b329\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-n-059e17308a" Jan 23 01:07:01.858258 kubelet[3167]: I0123 01:07:01.858249 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/78b22c2b242e0fd3123ad0b6fa47b329-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.2-n-059e17308a\" (UID: \"78b22c2b242e0fd3123ad0b6fa47b329\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-n-059e17308a" Jan 23 01:07:01.858373 kubelet[3167]: I0123 01:07:01.858311 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/092d25410122a8a59f99618aa9da61af-kubeconfig\") pod \"kube-scheduler-ci-4459.2.2-n-059e17308a\" (UID: \"092d25410122a8a59f99618aa9da61af\") " pod="kube-system/kube-scheduler-ci-4459.2.2-n-059e17308a" Jan 23 01:07:01.858373 kubelet[3167]: I0123 01:07:01.858359 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/78b22c2b242e0fd3123ad0b6fa47b329-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.2-n-059e17308a\" (UID: \"78b22c2b242e0fd3123ad0b6fa47b329\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-n-059e17308a" Jan 23 01:07:01.858455 kubelet[3167]: I0123 01:07:01.858446 3167 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/787b332d1014434156941933d7379471-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.2-n-059e17308a\" (UID: \"787b332d1014434156941933d7379471\") " pod="kube-system/kube-apiserver-ci-4459.2.2-n-059e17308a" Jan 23 01:07:02.539420 kubelet[3167]: I0123 01:07:02.539389 3167 apiserver.go:52] "Watching apiserver" Jan 23 01:07:02.557530 kubelet[3167]: I0123 01:07:02.557499 3167 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 23 01:07:02.609643 kubelet[3167]: I0123 01:07:02.609617 3167 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.2-n-059e17308a" Jan 23 01:07:02.616856 kubelet[3167]: I0123 01:07:02.616829 3167 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 23 01:07:02.616946 kubelet[3167]: E0123 01:07:02.616878 3167 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.2-n-059e17308a\" already exists" pod="kube-system/kube-apiserver-ci-4459.2.2-n-059e17308a" Jan 23 01:07:02.650603 kubelet[3167]: I0123 01:07:02.650407 3167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459.2.2-n-059e17308a" podStartSLOduration=1.6503936540000002 podStartE2EDuration="1.650393654s" podCreationTimestamp="2026-01-23 01:07:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:07:02.635398558 +0000 UTC m=+1.154856625" watchObservedRunningTime="2026-01-23 01:07:02.650393654 +0000 UTC m=+1.169851712" Jan 23 01:07:02.650603 kubelet[3167]: I0123 01:07:02.650512 3167 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="kube-system/kube-controller-manager-ci-4459.2.2-n-059e17308a" podStartSLOduration=1.650507296 podStartE2EDuration="1.650507296s" podCreationTimestamp="2026-01-23 01:07:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:07:02.650290146 +0000 UTC m=+1.169748212" watchObservedRunningTime="2026-01-23 01:07:02.650507296 +0000 UTC m=+1.169965368" Jan 23 01:07:02.695995 kubelet[3167]: I0123 01:07:02.695949 3167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459.2.2-n-059e17308a" podStartSLOduration=1.695936703 podStartE2EDuration="1.695936703s" podCreationTimestamp="2026-01-23 01:07:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:07:02.668463086 +0000 UTC m=+1.187921156" watchObservedRunningTime="2026-01-23 01:07:02.695936703 +0000 UTC m=+1.215394771" Jan 23 01:07:06.957251 kubelet[3167]: I0123 01:07:06.957222 3167 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 23 01:07:06.957587 containerd[1689]: time="2026-01-23T01:07:06.957514144Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 23 01:07:06.957763 kubelet[3167]: I0123 01:07:06.957720 3167 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 23 01:07:08.016987 systemd[1]: Created slice kubepods-besteffort-podb252d191_3b74_4450_a48e_2937e6ebd510.slice - libcontainer container kubepods-besteffort-podb252d191_3b74_4450_a48e_2937e6ebd510.slice. 
Jan 23 01:07:08.103357 kubelet[3167]: I0123 01:07:08.103227 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b252d191-3b74-4450-a48e-2937e6ebd510-lib-modules\") pod \"kube-proxy-74f42\" (UID: \"b252d191-3b74-4450-a48e-2937e6ebd510\") " pod="kube-system/kube-proxy-74f42" Jan 23 01:07:08.103672 kubelet[3167]: I0123 01:07:08.103370 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b252d191-3b74-4450-a48e-2937e6ebd510-kube-proxy\") pod \"kube-proxy-74f42\" (UID: \"b252d191-3b74-4450-a48e-2937e6ebd510\") " pod="kube-system/kube-proxy-74f42" Jan 23 01:07:08.103672 kubelet[3167]: I0123 01:07:08.103502 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nw92h\" (UniqueName: \"kubernetes.io/projected/b252d191-3b74-4450-a48e-2937e6ebd510-kube-api-access-nw92h\") pod \"kube-proxy-74f42\" (UID: \"b252d191-3b74-4450-a48e-2937e6ebd510\") " pod="kube-system/kube-proxy-74f42" Jan 23 01:07:08.103672 kubelet[3167]: I0123 01:07:08.103524 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b252d191-3b74-4450-a48e-2937e6ebd510-xtables-lock\") pod \"kube-proxy-74f42\" (UID: \"b252d191-3b74-4450-a48e-2937e6ebd510\") " pod="kube-system/kube-proxy-74f42" Jan 23 01:07:08.146463 systemd[1]: Created slice kubepods-besteffort-pod8657a1a7_1ce3_4734_afe8_bec83e0e9e9c.slice - libcontainer container kubepods-besteffort-pod8657a1a7_1ce3_4734_afe8_bec83e0e9e9c.slice. 
Jan 23 01:07:08.204321 kubelet[3167]: I0123 01:07:08.204302 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8657a1a7-1ce3-4734-afe8-bec83e0e9e9c-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-vnqmr\" (UID: \"8657a1a7-1ce3-4734-afe8-bec83e0e9e9c\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-vnqmr" Jan 23 01:07:08.204711 kubelet[3167]: I0123 01:07:08.204339 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqmsp\" (UniqueName: \"kubernetes.io/projected/8657a1a7-1ce3-4734-afe8-bec83e0e9e9c-kube-api-access-bqmsp\") pod \"tigera-operator-65cdcdfd6d-vnqmr\" (UID: \"8657a1a7-1ce3-4734-afe8-bec83e0e9e9c\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-vnqmr" Jan 23 01:07:08.329179 containerd[1689]: time="2026-01-23T01:07:08.329078698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-74f42,Uid:b252d191-3b74-4450-a48e-2937e6ebd510,Namespace:kube-system,Attempt:0,}" Jan 23 01:07:08.369457 containerd[1689]: time="2026-01-23T01:07:08.369426920Z" level=info msg="connecting to shim 560ac49ff054fca09f46087d43b73e6457ba861b634a10747de83c6bd80d0dbc" address="unix:///run/containerd/s/515bb450ff7a71d980fe2f16ad11132e4c1c05d5a0521a03ad496ee5f2e7ca07" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:07:08.392279 systemd[1]: Started cri-containerd-560ac49ff054fca09f46087d43b73e6457ba861b634a10747de83c6bd80d0dbc.scope - libcontainer container 560ac49ff054fca09f46087d43b73e6457ba861b634a10747de83c6bd80d0dbc. 
Jan 23 01:07:08.412089 containerd[1689]: time="2026-01-23T01:07:08.412033793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-74f42,Uid:b252d191-3b74-4450-a48e-2937e6ebd510,Namespace:kube-system,Attempt:0,} returns sandbox id \"560ac49ff054fca09f46087d43b73e6457ba861b634a10747de83c6bd80d0dbc\"" Jan 23 01:07:08.420863 containerd[1689]: time="2026-01-23T01:07:08.420839710Z" level=info msg="CreateContainer within sandbox \"560ac49ff054fca09f46087d43b73e6457ba861b634a10747de83c6bd80d0dbc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 23 01:07:08.439040 containerd[1689]: time="2026-01-23T01:07:08.438517734Z" level=info msg="Container 4ce150f621d25517890c2bdcc7e8aaf434da2c7a33b6a1218deaf8375d32e3c6: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:07:08.456012 containerd[1689]: time="2026-01-23T01:07:08.455988239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-vnqmr,Uid:8657a1a7-1ce3-4734-afe8-bec83e0e9e9c,Namespace:tigera-operator,Attempt:0,}" Jan 23 01:07:08.458017 containerd[1689]: time="2026-01-23T01:07:08.457993019Z" level=info msg="CreateContainer within sandbox \"560ac49ff054fca09f46087d43b73e6457ba861b634a10747de83c6bd80d0dbc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4ce150f621d25517890c2bdcc7e8aaf434da2c7a33b6a1218deaf8375d32e3c6\"" Jan 23 01:07:08.459179 containerd[1689]: time="2026-01-23T01:07:08.458410641Z" level=info msg="StartContainer for \"4ce150f621d25517890c2bdcc7e8aaf434da2c7a33b6a1218deaf8375d32e3c6\"" Jan 23 01:07:08.459975 containerd[1689]: time="2026-01-23T01:07:08.459939380Z" level=info msg="connecting to shim 4ce150f621d25517890c2bdcc7e8aaf434da2c7a33b6a1218deaf8375d32e3c6" address="unix:///run/containerd/s/515bb450ff7a71d980fe2f16ad11132e4c1c05d5a0521a03ad496ee5f2e7ca07" protocol=ttrpc version=3 Jan 23 01:07:08.475295 systemd[1]: Started cri-containerd-4ce150f621d25517890c2bdcc7e8aaf434da2c7a33b6a1218deaf8375d32e3c6.scope - 
libcontainer container 4ce150f621d25517890c2bdcc7e8aaf434da2c7a33b6a1218deaf8375d32e3c6. Jan 23 01:07:08.490944 containerd[1689]: time="2026-01-23T01:07:08.489786157Z" level=info msg="connecting to shim 2fc35ddc3feeba869689a68dae6641e12fda1356be4cb0123e849d4709a8376c" address="unix:///run/containerd/s/0283ccf62582d522008e6cef1a1a091fc3ad1109d25980a6929a2fc11fcf2a3e" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:07:08.508266 systemd[1]: Started cri-containerd-2fc35ddc3feeba869689a68dae6641e12fda1356be4cb0123e849d4709a8376c.scope - libcontainer container 2fc35ddc3feeba869689a68dae6641e12fda1356be4cb0123e849d4709a8376c. Jan 23 01:07:08.537549 containerd[1689]: time="2026-01-23T01:07:08.537530093Z" level=info msg="StartContainer for \"4ce150f621d25517890c2bdcc7e8aaf434da2c7a33b6a1218deaf8375d32e3c6\" returns successfully" Jan 23 01:07:08.572102 containerd[1689]: time="2026-01-23T01:07:08.572075450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-vnqmr,Uid:8657a1a7-1ce3-4734-afe8-bec83e0e9e9c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"2fc35ddc3feeba869689a68dae6641e12fda1356be4cb0123e849d4709a8376c\"" Jan 23 01:07:08.574418 containerd[1689]: time="2026-01-23T01:07:08.574397635Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 23 01:07:08.644532 kubelet[3167]: I0123 01:07:08.644429 3167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-74f42" podStartSLOduration=1.6444127910000002 podStartE2EDuration="1.644412791s" podCreationTimestamp="2026-01-23 01:07:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:07:08.633020872 +0000 UTC m=+7.152478948" watchObservedRunningTime="2026-01-23 01:07:08.644412791 +0000 UTC m=+7.163870859" Jan 23 01:07:10.069408 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1954630154.mount: Deactivated 
successfully. Jan 23 01:07:10.477233 containerd[1689]: time="2026-01-23T01:07:10.477196542Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:07:10.479347 containerd[1689]: time="2026-01-23T01:07:10.479278021Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Jan 23 01:07:10.481694 containerd[1689]: time="2026-01-23T01:07:10.481671445Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:07:10.484744 containerd[1689]: time="2026-01-23T01:07:10.484612790Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:07:10.485209 containerd[1689]: time="2026-01-23T01:07:10.485027244Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 1.91052227s" Jan 23 01:07:10.485209 containerd[1689]: time="2026-01-23T01:07:10.485053667Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 23 01:07:10.490576 containerd[1689]: time="2026-01-23T01:07:10.490549191Z" level=info msg="CreateContainer within sandbox \"2fc35ddc3feeba869689a68dae6641e12fda1356be4cb0123e849d4709a8376c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 23 01:07:10.510049 containerd[1689]: time="2026-01-23T01:07:10.509536883Z" level=info msg="Container 
283e2e05ad364f3d7badd80880ddd1d90a1d8922edb94af1abd7a0c1863e50a3: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:07:10.513339 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3905849778.mount: Deactivated successfully. Jan 23 01:07:10.521806 containerd[1689]: time="2026-01-23T01:07:10.521783234Z" level=info msg="CreateContainer within sandbox \"2fc35ddc3feeba869689a68dae6641e12fda1356be4cb0123e849d4709a8376c\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"283e2e05ad364f3d7badd80880ddd1d90a1d8922edb94af1abd7a0c1863e50a3\"" Jan 23 01:07:10.522238 containerd[1689]: time="2026-01-23T01:07:10.522217935Z" level=info msg="StartContainer for \"283e2e05ad364f3d7badd80880ddd1d90a1d8922edb94af1abd7a0c1863e50a3\"" Jan 23 01:07:10.523390 containerd[1689]: time="2026-01-23T01:07:10.523307818Z" level=info msg="connecting to shim 283e2e05ad364f3d7badd80880ddd1d90a1d8922edb94af1abd7a0c1863e50a3" address="unix:///run/containerd/s/0283ccf62582d522008e6cef1a1a091fc3ad1109d25980a6929a2fc11fcf2a3e" protocol=ttrpc version=3 Jan 23 01:07:10.543308 systemd[1]: Started cri-containerd-283e2e05ad364f3d7badd80880ddd1d90a1d8922edb94af1abd7a0c1863e50a3.scope - libcontainer container 283e2e05ad364f3d7badd80880ddd1d90a1d8922edb94af1abd7a0c1863e50a3. 
Jan 23 01:07:10.570948 containerd[1689]: time="2026-01-23T01:07:10.570887536Z" level=info msg="StartContainer for \"283e2e05ad364f3d7badd80880ddd1d90a1d8922edb94af1abd7a0c1863e50a3\" returns successfully" Jan 23 01:07:11.270530 kubelet[3167]: I0123 01:07:11.270468 3167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-vnqmr" podStartSLOduration=1.358608353 podStartE2EDuration="3.270452766s" podCreationTimestamp="2026-01-23 01:07:08 +0000 UTC" firstStartedPulling="2026-01-23 01:07:08.573896185 +0000 UTC m=+7.093354255" lastFinishedPulling="2026-01-23 01:07:10.485740616 +0000 UTC m=+9.005198668" observedRunningTime="2026-01-23 01:07:10.637880533 +0000 UTC m=+9.157338598" watchObservedRunningTime="2026-01-23 01:07:11.270452766 +0000 UTC m=+9.789910831" Jan 23 01:07:16.083656 sudo[2140]: pam_unix(sudo:session): session closed for user root Jan 23 01:07:16.193454 sshd[2139]: Connection closed by 10.200.16.10 port 48032 Jan 23 01:07:16.192996 sshd-session[2133]: pam_unix(sshd:session): session closed for user core Jan 23 01:07:16.196454 systemd[1]: sshd@6-10.200.8.21:22-10.200.16.10:48032.service: Deactivated successfully. Jan 23 01:07:16.198747 systemd[1]: session-9.scope: Deactivated successfully. Jan 23 01:07:16.200070 systemd[1]: session-9.scope: Consumed 3.738s CPU time, 231.2M memory peak. Jan 23 01:07:16.204655 systemd-logind[1676]: Session 9 logged out. Waiting for processes to exit. Jan 23 01:07:16.207678 systemd-logind[1676]: Removed session 9. Jan 23 01:07:20.283495 systemd[1]: Created slice kubepods-besteffort-podb0261c7c_d06a_4295_98db_87f2823bd59a.slice - libcontainer container kubepods-besteffort-podb0261c7c_d06a_4295_98db_87f2823bd59a.slice. 
Jan 23 01:07:20.376149 kubelet[3167]: I0123 01:07:20.375987 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b0261c7c-d06a-4295-98db-87f2823bd59a-typha-certs\") pod \"calico-typha-f674dc9ff-n9lvw\" (UID: \"b0261c7c-d06a-4295-98db-87f2823bd59a\") " pod="calico-system/calico-typha-f674dc9ff-n9lvw" Jan 23 01:07:20.376149 kubelet[3167]: I0123 01:07:20.376027 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnwh9\" (UniqueName: \"kubernetes.io/projected/b0261c7c-d06a-4295-98db-87f2823bd59a-kube-api-access-mnwh9\") pod \"calico-typha-f674dc9ff-n9lvw\" (UID: \"b0261c7c-d06a-4295-98db-87f2823bd59a\") " pod="calico-system/calico-typha-f674dc9ff-n9lvw" Jan 23 01:07:20.376149 kubelet[3167]: I0123 01:07:20.376087 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b0261c7c-d06a-4295-98db-87f2823bd59a-tigera-ca-bundle\") pod \"calico-typha-f674dc9ff-n9lvw\" (UID: \"b0261c7c-d06a-4295-98db-87f2823bd59a\") " pod="calico-system/calico-typha-f674dc9ff-n9lvw" Jan 23 01:07:20.461255 systemd[1]: Created slice kubepods-besteffort-pod99aeb44d_3276_4f05_9e39_e0c58cb72ec4.slice - libcontainer container kubepods-besteffort-pod99aeb44d_3276_4f05_9e39_e0c58cb72ec4.slice. 
Jan 23 01:07:20.477206 kubelet[3167]: I0123 01:07:20.477181 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/99aeb44d-3276-4f05-9e39-e0c58cb72ec4-var-run-calico\") pod \"calico-node-44s44\" (UID: \"99aeb44d-3276-4f05-9e39-e0c58cb72ec4\") " pod="calico-system/calico-node-44s44" Jan 23 01:07:20.477307 kubelet[3167]: I0123 01:07:20.477279 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/99aeb44d-3276-4f05-9e39-e0c58cb72ec4-lib-modules\") pod \"calico-node-44s44\" (UID: \"99aeb44d-3276-4f05-9e39-e0c58cb72ec4\") " pod="calico-system/calico-node-44s44" Jan 23 01:07:20.477478 kubelet[3167]: I0123 01:07:20.477368 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/99aeb44d-3276-4f05-9e39-e0c58cb72ec4-tigera-ca-bundle\") pod \"calico-node-44s44\" (UID: \"99aeb44d-3276-4f05-9e39-e0c58cb72ec4\") " pod="calico-system/calico-node-44s44" Jan 23 01:07:20.477478 kubelet[3167]: I0123 01:07:20.477395 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4n2g\" (UniqueName: \"kubernetes.io/projected/99aeb44d-3276-4f05-9e39-e0c58cb72ec4-kube-api-access-f4n2g\") pod \"calico-node-44s44\" (UID: \"99aeb44d-3276-4f05-9e39-e0c58cb72ec4\") " pod="calico-system/calico-node-44s44" Jan 23 01:07:20.477478 kubelet[3167]: I0123 01:07:20.477415 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/99aeb44d-3276-4f05-9e39-e0c58cb72ec4-cni-bin-dir\") pod \"calico-node-44s44\" (UID: \"99aeb44d-3276-4f05-9e39-e0c58cb72ec4\") " pod="calico-system/calico-node-44s44" Jan 23 01:07:20.477563 kubelet[3167]: I0123 
01:07:20.477428 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/99aeb44d-3276-4f05-9e39-e0c58cb72ec4-node-certs\") pod \"calico-node-44s44\" (UID: \"99aeb44d-3276-4f05-9e39-e0c58cb72ec4\") " pod="calico-system/calico-node-44s44" Jan 23 01:07:20.477584 kubelet[3167]: I0123 01:07:20.477560 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/99aeb44d-3276-4f05-9e39-e0c58cb72ec4-xtables-lock\") pod \"calico-node-44s44\" (UID: \"99aeb44d-3276-4f05-9e39-e0c58cb72ec4\") " pod="calico-system/calico-node-44s44" Jan 23 01:07:20.477584 kubelet[3167]: I0123 01:07:20.477578 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/99aeb44d-3276-4f05-9e39-e0c58cb72ec4-cni-net-dir\") pod \"calico-node-44s44\" (UID: \"99aeb44d-3276-4f05-9e39-e0c58cb72ec4\") " pod="calico-system/calico-node-44s44" Jan 23 01:07:20.477750 kubelet[3167]: I0123 01:07:20.477727 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/99aeb44d-3276-4f05-9e39-e0c58cb72ec4-flexvol-driver-host\") pod \"calico-node-44s44\" (UID: \"99aeb44d-3276-4f05-9e39-e0c58cb72ec4\") " pod="calico-system/calico-node-44s44" Jan 23 01:07:20.478019 kubelet[3167]: I0123 01:07:20.478004 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/99aeb44d-3276-4f05-9e39-e0c58cb72ec4-var-lib-calico\") pod \"calico-node-44s44\" (UID: \"99aeb44d-3276-4f05-9e39-e0c58cb72ec4\") " pod="calico-system/calico-node-44s44" Jan 23 01:07:20.478057 kubelet[3167]: I0123 01:07:20.478029 3167 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/99aeb44d-3276-4f05-9e39-e0c58cb72ec4-cni-log-dir\") pod \"calico-node-44s44\" (UID: \"99aeb44d-3276-4f05-9e39-e0c58cb72ec4\") " pod="calico-system/calico-node-44s44" Jan 23 01:07:20.478077 kubelet[3167]: I0123 01:07:20.478057 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/99aeb44d-3276-4f05-9e39-e0c58cb72ec4-policysync\") pod \"calico-node-44s44\" (UID: \"99aeb44d-3276-4f05-9e39-e0c58cb72ec4\") " pod="calico-system/calico-node-44s44" Jan 23 01:07:20.590190 kubelet[3167]: E0123 01:07:20.587631 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:20.590190 kubelet[3167]: W0123 01:07:20.587650 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:20.590190 kubelet[3167]: E0123 01:07:20.587668 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:20.591833 kubelet[3167]: E0123 01:07:20.591777 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:20.591833 kubelet[3167]: W0123 01:07:20.591790 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:20.591833 kubelet[3167]: E0123 01:07:20.591803 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:20.596888 containerd[1689]: time="2026-01-23T01:07:20.596709071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f674dc9ff-n9lvw,Uid:b0261c7c-d06a-4295-98db-87f2823bd59a,Namespace:calico-system,Attempt:0,}" Jan 23 01:07:20.636376 containerd[1689]: time="2026-01-23T01:07:20.636262258Z" level=info msg="connecting to shim 382f22237a904440816ade34287e1fce23899b435d9d18d2638113395cbe76dd" address="unix:///run/containerd/s/ce05d9bd2a1bd405698e285852de859b8702f863885ddca5bf537621c0256fa0" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:07:20.651894 kubelet[3167]: E0123 01:07:20.651862 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c5k7j" podUID="757abb7b-5fcc-4c56-ba6f-f09ed789238a" Jan 23 01:07:20.669328 systemd[1]: Started cri-containerd-382f22237a904440816ade34287e1fce23899b435d9d18d2638113395cbe76dd.scope - libcontainer container 382f22237a904440816ade34287e1fce23899b435d9d18d2638113395cbe76dd. Jan 23 01:07:20.672304 kubelet[3167]: E0123 01:07:20.672255 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:20.672908 kubelet[3167]: W0123 01:07:20.672780 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:20.672908 kubelet[3167]: E0123 01:07:20.672803 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:20.673046 kubelet[3167]: E0123 01:07:20.673020 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:20.673046 kubelet[3167]: W0123 01:07:20.673033 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:20.673046 kubelet[3167]: E0123 01:07:20.673043 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:20.673208 kubelet[3167]: E0123 01:07:20.673184 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:20.673208 kubelet[3167]: W0123 01:07:20.673193 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:20.673208 kubelet[3167]: E0123 01:07:20.673201 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:20.673364 kubelet[3167]: E0123 01:07:20.673354 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:20.673364 kubelet[3167]: W0123 01:07:20.673364 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:20.673414 kubelet[3167]: E0123 01:07:20.673371 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:20.673500 kubelet[3167]: E0123 01:07:20.673491 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:20.673544 kubelet[3167]: W0123 01:07:20.673500 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:20.673544 kubelet[3167]: E0123 01:07:20.673507 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:20.673608 kubelet[3167]: E0123 01:07:20.673604 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:20.673633 kubelet[3167]: W0123 01:07:20.673608 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:20.673633 kubelet[3167]: E0123 01:07:20.673627 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:20.673845 kubelet[3167]: E0123 01:07:20.673719 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:20.673845 kubelet[3167]: W0123 01:07:20.673724 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:20.673845 kubelet[3167]: E0123 01:07:20.673731 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:20.674115 kubelet[3167]: E0123 01:07:20.673857 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:20.674115 kubelet[3167]: W0123 01:07:20.673862 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:20.674115 kubelet[3167]: E0123 01:07:20.673881 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:20.674115 kubelet[3167]: E0123 01:07:20.674010 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:20.674115 kubelet[3167]: W0123 01:07:20.674015 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:20.674115 kubelet[3167]: E0123 01:07:20.674021 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:20.674723 kubelet[3167]: E0123 01:07:20.674124 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:20.674723 kubelet[3167]: W0123 01:07:20.674149 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:20.674723 kubelet[3167]: E0123 01:07:20.674155 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:20.674723 kubelet[3167]: E0123 01:07:20.674265 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:20.674723 kubelet[3167]: W0123 01:07:20.674270 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:20.674723 kubelet[3167]: E0123 01:07:20.674275 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:20.674723 kubelet[3167]: E0123 01:07:20.674420 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:20.674723 kubelet[3167]: W0123 01:07:20.674425 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:20.674723 kubelet[3167]: E0123 01:07:20.674432 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:20.674723 kubelet[3167]: E0123 01:07:20.674550 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:20.675690 kubelet[3167]: W0123 01:07:20.674554 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:20.675690 kubelet[3167]: E0123 01:07:20.674561 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:20.675690 kubelet[3167]: E0123 01:07:20.674686 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:20.675690 kubelet[3167]: W0123 01:07:20.674692 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:20.675690 kubelet[3167]: E0123 01:07:20.674701 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:20.675690 kubelet[3167]: E0123 01:07:20.674801 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:20.675690 kubelet[3167]: W0123 01:07:20.674806 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:20.675690 kubelet[3167]: E0123 01:07:20.674812 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:20.675690 kubelet[3167]: E0123 01:07:20.674929 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:20.675690 kubelet[3167]: W0123 01:07:20.674934 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:20.675946 kubelet[3167]: E0123 01:07:20.674941 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:20.675946 kubelet[3167]: E0123 01:07:20.675052 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:20.675946 kubelet[3167]: W0123 01:07:20.675057 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:20.675946 kubelet[3167]: E0123 01:07:20.675063 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:20.675946 kubelet[3167]: E0123 01:07:20.675204 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:20.675946 kubelet[3167]: W0123 01:07:20.675210 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:20.675946 kubelet[3167]: E0123 01:07:20.675216 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:20.675946 kubelet[3167]: E0123 01:07:20.675328 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:20.675946 kubelet[3167]: W0123 01:07:20.675334 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:20.675946 kubelet[3167]: E0123 01:07:20.675340 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:20.676342 kubelet[3167]: E0123 01:07:20.675442 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:20.676342 kubelet[3167]: W0123 01:07:20.675449 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:20.676342 kubelet[3167]: E0123 01:07:20.675455 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:20.679770 kubelet[3167]: E0123 01:07:20.679755 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:20.679864 kubelet[3167]: W0123 01:07:20.679846 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:20.679916 kubelet[3167]: E0123 01:07:20.679908 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:20.679989 kubelet[3167]: I0123 01:07:20.679978 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/757abb7b-5fcc-4c56-ba6f-f09ed789238a-registration-dir\") pod \"csi-node-driver-c5k7j\" (UID: \"757abb7b-5fcc-4c56-ba6f-f09ed789238a\") " pod="calico-system/csi-node-driver-c5k7j" Jan 23 01:07:20.680592 kubelet[3167]: E0123 01:07:20.680466 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:20.680592 kubelet[3167]: W0123 01:07:20.680588 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:20.680748 kubelet[3167]: E0123 01:07:20.680620 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:20.681104 kubelet[3167]: E0123 01:07:20.681033 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:20.681104 kubelet[3167]: W0123 01:07:20.681069 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:20.681104 kubelet[3167]: E0123 01:07:20.681083 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:20.681420 kubelet[3167]: E0123 01:07:20.681411 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:20.681499 kubelet[3167]: W0123 01:07:20.681474 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:20.681499 kubelet[3167]: E0123 01:07:20.681486 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:20.681691 kubelet[3167]: I0123 01:07:20.681580 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7grzl\" (UniqueName: \"kubernetes.io/projected/757abb7b-5fcc-4c56-ba6f-f09ed789238a-kube-api-access-7grzl\") pod \"csi-node-driver-c5k7j\" (UID: \"757abb7b-5fcc-4c56-ba6f-f09ed789238a\") " pod="calico-system/csi-node-driver-c5k7j" Jan 23 01:07:20.681824 kubelet[3167]: E0123 01:07:20.681740 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:20.681824 kubelet[3167]: W0123 01:07:20.681751 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:20.681824 kubelet[3167]: E0123 01:07:20.681760 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:20.681824 kubelet[3167]: I0123 01:07:20.681776 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/757abb7b-5fcc-4c56-ba6f-f09ed789238a-socket-dir\") pod \"csi-node-driver-c5k7j\" (UID: \"757abb7b-5fcc-4c56-ba6f-f09ed789238a\") " pod="calico-system/csi-node-driver-c5k7j" Jan 23 01:07:20.682081 kubelet[3167]: E0123 01:07:20.681940 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:20.682081 kubelet[3167]: W0123 01:07:20.681951 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:20.682081 kubelet[3167]: E0123 01:07:20.681959 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:20.682081 kubelet[3167]: I0123 01:07:20.681974 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/757abb7b-5fcc-4c56-ba6f-f09ed789238a-varrun\") pod \"csi-node-driver-c5k7j\" (UID: \"757abb7b-5fcc-4c56-ba6f-f09ed789238a\") " pod="calico-system/csi-node-driver-c5k7j" Jan 23 01:07:20.682356 kubelet[3167]: E0123 01:07:20.682229 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:20.682356 kubelet[3167]: W0123 01:07:20.682238 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:20.682356 kubelet[3167]: E0123 01:07:20.682249 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:20.682356 kubelet[3167]: I0123 01:07:20.682265 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/757abb7b-5fcc-4c56-ba6f-f09ed789238a-kubelet-dir\") pod \"csi-node-driver-c5k7j\" (UID: \"757abb7b-5fcc-4c56-ba6f-f09ed789238a\") " pod="calico-system/csi-node-driver-c5k7j" Jan 23 01:07:20.682591 kubelet[3167]: E0123 01:07:20.682449 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:20.682591 kubelet[3167]: W0123 01:07:20.682467 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:20.682591 kubelet[3167]: E0123 01:07:20.682478 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:20.683481 kubelet[3167]: E0123 01:07:20.683393 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:20.683481 kubelet[3167]: W0123 01:07:20.683409 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:20.683481 kubelet[3167]: E0123 01:07:20.683422 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:20.683808 kubelet[3167]: E0123 01:07:20.683752 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:20.683808 kubelet[3167]: W0123 01:07:20.683763 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:20.683808 kubelet[3167]: E0123 01:07:20.683774 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:20.684048 kubelet[3167]: E0123 01:07:20.683993 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:20.684048 kubelet[3167]: W0123 01:07:20.684000 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:20.684048 kubelet[3167]: E0123 01:07:20.684009 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:20.684272 kubelet[3167]: E0123 01:07:20.684245 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:20.684272 kubelet[3167]: W0123 01:07:20.684253 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:20.684272 kubelet[3167]: E0123 01:07:20.684263 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:20.684538 kubelet[3167]: E0123 01:07:20.684490 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:20.684538 kubelet[3167]: W0123 01:07:20.684501 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:20.684538 kubelet[3167]: E0123 01:07:20.684509 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:20.684782 kubelet[3167]: E0123 01:07:20.684728 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:20.684782 kubelet[3167]: W0123 01:07:20.684736 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:20.684782 kubelet[3167]: E0123 01:07:20.684744 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:20.684968 kubelet[3167]: E0123 01:07:20.684961 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:20.685027 kubelet[3167]: W0123 01:07:20.685003 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:20.685027 kubelet[3167]: E0123 01:07:20.685013 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:20.718307 containerd[1689]: time="2026-01-23T01:07:20.718274486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f674dc9ff-n9lvw,Uid:b0261c7c-d06a-4295-98db-87f2823bd59a,Namespace:calico-system,Attempt:0,} returns sandbox id \"382f22237a904440816ade34287e1fce23899b435d9d18d2638113395cbe76dd\"" Jan 23 01:07:20.721006 containerd[1689]: time="2026-01-23T01:07:20.720973298Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 23 01:07:20.770391 containerd[1689]: time="2026-01-23T01:07:20.770366367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-44s44,Uid:99aeb44d-3276-4f05-9e39-e0c58cb72ec4,Namespace:calico-system,Attempt:0,}" Jan 23 01:07:20.783210 kubelet[3167]: E0123 01:07:20.783171 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:20.783210 kubelet[3167]: W0123 01:07:20.783185 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:20.783378 kubelet[3167]: E0123 01:07:20.783307 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:20.783537 kubelet[3167]: E0123 01:07:20.783511 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:20.783537 kubelet[3167]: W0123 01:07:20.783519 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:20.783537 kubelet[3167]: E0123 01:07:20.783527 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:20.783755 kubelet[3167]: E0123 01:07:20.783736 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:20.783755 kubelet[3167]: W0123 01:07:20.783741 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:20.783755 kubelet[3167]: E0123 01:07:20.783748 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:20.783968 kubelet[3167]: E0123 01:07:20.783963 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:20.784019 kubelet[3167]: W0123 01:07:20.783995 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:20.784019 kubelet[3167]: E0123 01:07:20.784012 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:20.784237 kubelet[3167]: E0123 01:07:20.784229 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:20.784439 kubelet[3167]: W0123 01:07:20.784269 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:20.784439 kubelet[3167]: E0123 01:07:20.784276 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:20.784439 kubelet[3167]: E0123 01:07:20.784377 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:20.784439 kubelet[3167]: W0123 01:07:20.784381 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:20.784439 kubelet[3167]: E0123 01:07:20.784387 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:20.785021 kubelet[3167]: E0123 01:07:20.784989 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:20.785021 kubelet[3167]: W0123 01:07:20.785000 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:20.785021 kubelet[3167]: E0123 01:07:20.785011 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:20.785388 kubelet[3167]: E0123 01:07:20.785373 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:20.785540 kubelet[3167]: W0123 01:07:20.785470 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:20.785540 kubelet[3167]: E0123 01:07:20.785484 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:20.785929 kubelet[3167]: E0123 01:07:20.785902 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:20.786104 kubelet[3167]: W0123 01:07:20.786057 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:20.786104 kubelet[3167]: E0123 01:07:20.786070 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:20.786770 kubelet[3167]: E0123 01:07:20.786760 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:20.786854 kubelet[3167]: W0123 01:07:20.786814 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:20.786854 kubelet[3167]: E0123 01:07:20.786825 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:20.787097 kubelet[3167]: E0123 01:07:20.787091 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:20.787097 kubelet[3167]: W0123 01:07:20.787110 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:20.787097 kubelet[3167]: E0123 01:07:20.787118 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:20.788181 kubelet[3167]: E0123 01:07:20.787286 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:20.788181 kubelet[3167]: W0123 01:07:20.787291 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:20.788181 kubelet[3167]: E0123 01:07:20.787297 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:20.788181 kubelet[3167]: E0123 01:07:20.787403 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:20.788181 kubelet[3167]: W0123 01:07:20.787407 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:20.788181 kubelet[3167]: E0123 01:07:20.787415 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:20.788181 kubelet[3167]: E0123 01:07:20.787537 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:20.788181 kubelet[3167]: W0123 01:07:20.787541 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:20.788181 kubelet[3167]: E0123 01:07:20.787546 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:20.788181 kubelet[3167]: E0123 01:07:20.788042 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:20.788401 kubelet[3167]: W0123 01:07:20.788050 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:20.788401 kubelet[3167]: E0123 01:07:20.788060 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:20.789669 kubelet[3167]: E0123 01:07:20.789213 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:20.789669 kubelet[3167]: W0123 01:07:20.789226 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:20.789669 kubelet[3167]: E0123 01:07:20.789251 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:20.790047 kubelet[3167]: E0123 01:07:20.790037 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:20.790092 kubelet[3167]: W0123 01:07:20.790084 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:20.790248 kubelet[3167]: E0123 01:07:20.790122 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:20.791233 kubelet[3167]: E0123 01:07:20.791219 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:20.791302 kubelet[3167]: W0123 01:07:20.791291 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:20.791726 kubelet[3167]: E0123 01:07:20.791664 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:20.791833 kubelet[3167]: E0123 01:07:20.791827 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:20.791871 kubelet[3167]: W0123 01:07:20.791864 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:20.791957 kubelet[3167]: E0123 01:07:20.791901 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:20.792037 kubelet[3167]: E0123 01:07:20.792032 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:20.792069 kubelet[3167]: W0123 01:07:20.792064 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:20.792216 kubelet[3167]: E0123 01:07:20.792205 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:20.792390 kubelet[3167]: E0123 01:07:20.792365 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:20.792390 kubelet[3167]: W0123 01:07:20.792373 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:20.792390 kubelet[3167]: E0123 01:07:20.792381 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:20.793272 kubelet[3167]: E0123 01:07:20.793236 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:20.793272 kubelet[3167]: W0123 01:07:20.793248 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:20.793272 kubelet[3167]: E0123 01:07:20.793260 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:20.793576 kubelet[3167]: E0123 01:07:20.793552 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:20.793576 kubelet[3167]: W0123 01:07:20.793559 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:20.793576 kubelet[3167]: E0123 01:07:20.793567 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:20.793812 kubelet[3167]: E0123 01:07:20.793792 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:20.793812 kubelet[3167]: W0123 01:07:20.793798 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:20.793812 kubelet[3167]: E0123 01:07:20.793805 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:20.794097 kubelet[3167]: E0123 01:07:20.794000 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:20.794097 kubelet[3167]: W0123 01:07:20.794006 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:20.794097 kubelet[3167]: E0123 01:07:20.794012 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:20.809456 kubelet[3167]: E0123 01:07:20.809443 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:20.809568 kubelet[3167]: W0123 01:07:20.809534 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:20.809568 kubelet[3167]: E0123 01:07:20.809547 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:20.820360 containerd[1689]: time="2026-01-23T01:07:20.820227495Z" level=info msg="connecting to shim 4979e4ee60c1aeb24f74e89c8e083457fe9069d3ca0b12f755b83891b3185b0b" address="unix:///run/containerd/s/caafe5442b13583aa2531c5977697445945aa049d2e6a9f1aee6e152cdfdb2c4" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:07:20.855542 systemd[1]: Started cri-containerd-4979e4ee60c1aeb24f74e89c8e083457fe9069d3ca0b12f755b83891b3185b0b.scope - libcontainer container 4979e4ee60c1aeb24f74e89c8e083457fe9069d3ca0b12f755b83891b3185b0b. Jan 23 01:07:20.891704 containerd[1689]: time="2026-01-23T01:07:20.891674879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-44s44,Uid:99aeb44d-3276-4f05-9e39-e0c58cb72ec4,Namespace:calico-system,Attempt:0,} returns sandbox id \"4979e4ee60c1aeb24f74e89c8e083457fe9069d3ca0b12f755b83891b3185b0b\"" Jan 23 01:07:22.006775 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2995291347.mount: Deactivated successfully. 
Jan 23 01:07:22.475675 containerd[1689]: time="2026-01-23T01:07:22.475641425Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:07:22.478045 containerd[1689]: time="2026-01-23T01:07:22.477977758Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Jan 23 01:07:22.480165 containerd[1689]: time="2026-01-23T01:07:22.480144452Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:07:22.483181 containerd[1689]: time="2026-01-23T01:07:22.483119896Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:07:22.483567 containerd[1689]: time="2026-01-23T01:07:22.483458911Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 1.762454451s" Jan 23 01:07:22.483567 containerd[1689]: time="2026-01-23T01:07:22.483482733Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 23 01:07:22.484840 containerd[1689]: time="2026-01-23T01:07:22.484631928Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 23 01:07:22.498588 containerd[1689]: time="2026-01-23T01:07:22.498565382Z" level=info msg="CreateContainer within sandbox \"382f22237a904440816ade34287e1fce23899b435d9d18d2638113395cbe76dd\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 23 01:07:22.513381 containerd[1689]: time="2026-01-23T01:07:22.512738863Z" level=info msg="Container d3d8cbe13b5eb438e0e102dff71ad1645a54747350f7252c0fedb3aa883475cb: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:07:22.517769 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2699233961.mount: Deactivated successfully. Jan 23 01:07:22.534414 containerd[1689]: time="2026-01-23T01:07:22.534390121Z" level=info msg="CreateContainer within sandbox \"382f22237a904440816ade34287e1fce23899b435d9d18d2638113395cbe76dd\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d3d8cbe13b5eb438e0e102dff71ad1645a54747350f7252c0fedb3aa883475cb\"" Jan 23 01:07:22.534806 containerd[1689]: time="2026-01-23T01:07:22.534785369Z" level=info msg="StartContainer for \"d3d8cbe13b5eb438e0e102dff71ad1645a54747350f7252c0fedb3aa883475cb\"" Jan 23 01:07:22.535997 containerd[1689]: time="2026-01-23T01:07:22.535958632Z" level=info msg="connecting to shim d3d8cbe13b5eb438e0e102dff71ad1645a54747350f7252c0fedb3aa883475cb" address="unix:///run/containerd/s/ce05d9bd2a1bd405698e285852de859b8702f863885ddca5bf537621c0256fa0" protocol=ttrpc version=3 Jan 23 01:07:22.554428 systemd[1]: Started cri-containerd-d3d8cbe13b5eb438e0e102dff71ad1645a54747350f7252c0fedb3aa883475cb.scope - libcontainer container d3d8cbe13b5eb438e0e102dff71ad1645a54747350f7252c0fedb3aa883475cb. 
Jan 23 01:07:22.573141 kubelet[3167]: E0123 01:07:22.573104 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c5k7j" podUID="757abb7b-5fcc-4c56-ba6f-f09ed789238a" Jan 23 01:07:22.595462 containerd[1689]: time="2026-01-23T01:07:22.595444222Z" level=info msg="StartContainer for \"d3d8cbe13b5eb438e0e102dff71ad1645a54747350f7252c0fedb3aa883475cb\" returns successfully" Jan 23 01:07:22.684304 kubelet[3167]: E0123 01:07:22.684276 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:22.684304 kubelet[3167]: W0123 01:07:22.684300 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:22.684417 kubelet[3167]: E0123 01:07:22.684317 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:22.684551 kubelet[3167]: E0123 01:07:22.684538 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:22.684586 kubelet[3167]: W0123 01:07:22.684553 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:22.684624 kubelet[3167]: E0123 01:07:22.684563 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:22.684804 kubelet[3167]: E0123 01:07:22.684789 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:22.684854 kubelet[3167]: W0123 01:07:22.684804 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:22.684854 kubelet[3167]: E0123 01:07:22.684814 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:22.685002 kubelet[3167]: E0123 01:07:22.684992 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:22.685002 kubelet[3167]: W0123 01:07:22.685000 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:22.685061 kubelet[3167]: E0123 01:07:22.685008 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:22.685178 kubelet[3167]: E0123 01:07:22.685118 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:22.685178 kubelet[3167]: W0123 01:07:22.685123 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:22.685178 kubelet[3167]: E0123 01:07:22.685140 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:22.685675 kubelet[3167]: E0123 01:07:22.685239 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:22.685675 kubelet[3167]: W0123 01:07:22.685245 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:22.685675 kubelet[3167]: E0123 01:07:22.685251 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:22.685675 kubelet[3167]: E0123 01:07:22.685349 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:22.685675 kubelet[3167]: W0123 01:07:22.685354 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:22.685675 kubelet[3167]: E0123 01:07:22.685361 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:22.685675 kubelet[3167]: E0123 01:07:22.685451 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:22.685675 kubelet[3167]: W0123 01:07:22.685456 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:22.685675 kubelet[3167]: E0123 01:07:22.685462 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:22.685675 kubelet[3167]: E0123 01:07:22.685565 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:22.686249 kubelet[3167]: W0123 01:07:22.685571 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:22.686249 kubelet[3167]: E0123 01:07:22.685607 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:22.686249 kubelet[3167]: E0123 01:07:22.685743 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:22.686249 kubelet[3167]: W0123 01:07:22.685749 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:22.686249 kubelet[3167]: E0123 01:07:22.685757 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:22.686249 kubelet[3167]: E0123 01:07:22.685848 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:22.686249 kubelet[3167]: W0123 01:07:22.685853 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:22.686249 kubelet[3167]: E0123 01:07:22.685858 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:22.686249 kubelet[3167]: E0123 01:07:22.685947 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:22.686249 kubelet[3167]: W0123 01:07:22.685952 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:22.686546 kubelet[3167]: E0123 01:07:22.685958 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:22.686546 kubelet[3167]: E0123 01:07:22.686050 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:22.686546 kubelet[3167]: W0123 01:07:22.686054 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:22.686546 kubelet[3167]: E0123 01:07:22.686060 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:22.686546 kubelet[3167]: E0123 01:07:22.686180 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:22.686546 kubelet[3167]: W0123 01:07:22.686186 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:22.686546 kubelet[3167]: E0123 01:07:22.686192 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:22.686546 kubelet[3167]: E0123 01:07:22.686279 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:22.686546 kubelet[3167]: W0123 01:07:22.686284 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:22.686546 kubelet[3167]: E0123 01:07:22.686290 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:22.698644 kubelet[3167]: E0123 01:07:22.698625 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:22.698644 kubelet[3167]: W0123 01:07:22.698640 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:22.698772 kubelet[3167]: E0123 01:07:22.698653 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:22.698815 kubelet[3167]: E0123 01:07:22.698803 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:22.698815 kubelet[3167]: W0123 01:07:22.698809 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:22.698885 kubelet[3167]: E0123 01:07:22.698817 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:22.699041 kubelet[3167]: E0123 01:07:22.698959 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:22.699041 kubelet[3167]: W0123 01:07:22.698966 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:22.699041 kubelet[3167]: E0123 01:07:22.698973 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:22.699234 kubelet[3167]: E0123 01:07:22.699226 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:22.699319 kubelet[3167]: W0123 01:07:22.699284 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:22.699319 kubelet[3167]: E0123 01:07:22.699302 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:22.699619 kubelet[3167]: E0123 01:07:22.699555 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:22.699619 kubelet[3167]: W0123 01:07:22.699566 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:22.699619 kubelet[3167]: E0123 01:07:22.699577 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:22.699945 kubelet[3167]: E0123 01:07:22.699926 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:22.700033 kubelet[3167]: W0123 01:07:22.699991 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:22.700033 kubelet[3167]: E0123 01:07:22.700004 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:22.700336 kubelet[3167]: E0123 01:07:22.700308 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:22.700336 kubelet[3167]: W0123 01:07:22.700317 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:22.700336 kubelet[3167]: E0123 01:07:22.700326 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:22.700702 kubelet[3167]: E0123 01:07:22.700651 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:22.700702 kubelet[3167]: W0123 01:07:22.700668 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:22.700702 kubelet[3167]: E0123 01:07:22.700679 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:22.700951 kubelet[3167]: E0123 01:07:22.700937 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:22.700994 kubelet[3167]: W0123 01:07:22.700952 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:22.700994 kubelet[3167]: E0123 01:07:22.700963 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:22.701353 kubelet[3167]: E0123 01:07:22.701339 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:22.701353 kubelet[3167]: W0123 01:07:22.701351 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:22.701438 kubelet[3167]: E0123 01:07:22.701361 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:22.702103 kubelet[3167]: E0123 01:07:22.702021 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:22.702103 kubelet[3167]: W0123 01:07:22.702065 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:22.702103 kubelet[3167]: E0123 01:07:22.702077 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:22.702513 kubelet[3167]: E0123 01:07:22.702274 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:22.702513 kubelet[3167]: W0123 01:07:22.702282 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:22.702513 kubelet[3167]: E0123 01:07:22.702290 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:22.702513 kubelet[3167]: E0123 01:07:22.702402 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:22.702513 kubelet[3167]: W0123 01:07:22.702407 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:22.702513 kubelet[3167]: E0123 01:07:22.702414 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:22.702675 kubelet[3167]: E0123 01:07:22.702609 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:22.702675 kubelet[3167]: W0123 01:07:22.702615 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:22.702675 kubelet[3167]: E0123 01:07:22.702622 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:22.703409 kubelet[3167]: E0123 01:07:22.703392 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:22.703409 kubelet[3167]: W0123 01:07:22.703408 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:22.703911 kubelet[3167]: E0123 01:07:22.703421 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:22.703911 kubelet[3167]: E0123 01:07:22.703584 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:22.703911 kubelet[3167]: W0123 01:07:22.703594 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:22.703911 kubelet[3167]: E0123 01:07:22.703602 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:22.704240 kubelet[3167]: E0123 01:07:22.704122 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:22.704292 kubelet[3167]: W0123 01:07:22.704241 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:22.704292 kubelet[3167]: E0123 01:07:22.704254 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:22.704403 kubelet[3167]: E0123 01:07:22.704394 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:22.704439 kubelet[3167]: W0123 01:07:22.704404 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:22.704439 kubelet[3167]: E0123 01:07:22.704412 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:23.655783 kubelet[3167]: I0123 01:07:23.655488 3167 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 01:07:23.692835 kubelet[3167]: E0123 01:07:23.692725 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:23.692835 kubelet[3167]: W0123 01:07:23.692751 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:23.692835 kubelet[3167]: E0123 01:07:23.692769 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:23.693062 kubelet[3167]: E0123 01:07:23.693047 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:23.693105 kubelet[3167]: W0123 01:07:23.693059 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:23.693105 kubelet[3167]: E0123 01:07:23.693077 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:23.693242 kubelet[3167]: E0123 01:07:23.693218 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:23.693242 kubelet[3167]: W0123 01:07:23.693236 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:23.693310 kubelet[3167]: E0123 01:07:23.693244 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:23.693339 kubelet[3167]: E0123 01:07:23.693336 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:23.693339 kubelet[3167]: W0123 01:07:23.693340 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:23.693408 kubelet[3167]: E0123 01:07:23.693347 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:23.693454 kubelet[3167]: E0123 01:07:23.693440 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:23.693454 kubelet[3167]: W0123 01:07:23.693445 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:23.693530 kubelet[3167]: E0123 01:07:23.693451 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:23.693558 kubelet[3167]: E0123 01:07:23.693532 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:23.693558 kubelet[3167]: W0123 01:07:23.693537 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:23.693558 kubelet[3167]: E0123 01:07:23.693542 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:23.693674 kubelet[3167]: E0123 01:07:23.693620 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:23.693674 kubelet[3167]: W0123 01:07:23.693625 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:23.693674 kubelet[3167]: E0123 01:07:23.693631 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:23.693766 kubelet[3167]: E0123 01:07:23.693710 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:23.693766 kubelet[3167]: W0123 01:07:23.693714 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:23.693766 kubelet[3167]: E0123 01:07:23.693721 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:23.693882 kubelet[3167]: E0123 01:07:23.693804 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:23.693882 kubelet[3167]: W0123 01:07:23.693810 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:23.693882 kubelet[3167]: E0123 01:07:23.693816 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:23.693981 kubelet[3167]: E0123 01:07:23.693892 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:23.693981 kubelet[3167]: W0123 01:07:23.693897 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:23.693981 kubelet[3167]: E0123 01:07:23.693902 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:23.693981 kubelet[3167]: E0123 01:07:23.693975 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:23.693981 kubelet[3167]: W0123 01:07:23.693980 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:23.694147 kubelet[3167]: E0123 01:07:23.693985 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:23.694147 kubelet[3167]: E0123 01:07:23.694072 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:23.694147 kubelet[3167]: W0123 01:07:23.694076 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:23.694147 kubelet[3167]: E0123 01:07:23.694083 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:23.694267 kubelet[3167]: E0123 01:07:23.694177 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:23.694267 kubelet[3167]: W0123 01:07:23.694181 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:23.694267 kubelet[3167]: E0123 01:07:23.694187 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:23.694358 kubelet[3167]: E0123 01:07:23.694343 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:23.694358 kubelet[3167]: W0123 01:07:23.694349 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:23.694358 kubelet[3167]: E0123 01:07:23.694355 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:23.694467 kubelet[3167]: E0123 01:07:23.694433 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:23.694467 kubelet[3167]: W0123 01:07:23.694437 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:23.694467 kubelet[3167]: E0123 01:07:23.694443 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:23.698152 containerd[1689]: time="2026-01-23T01:07:23.698096275Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:07:23.700823 containerd[1689]: time="2026-01-23T01:07:23.700747002Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Jan 23 01:07:23.703471 containerd[1689]: time="2026-01-23T01:07:23.703281079Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:07:23.705835 kubelet[3167]: E0123 01:07:23.705812 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:23.706101 kubelet[3167]: W0123 01:07:23.705930 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:23.706101 kubelet[3167]: E0123 01:07:23.705953 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:23.706523 kubelet[3167]: E0123 01:07:23.706416 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:23.706523 kubelet[3167]: W0123 01:07:23.706428 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:23.706523 kubelet[3167]: E0123 01:07:23.706439 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:23.706757 kubelet[3167]: E0123 01:07:23.706716 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:23.706757 kubelet[3167]: W0123 01:07:23.706725 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:23.706757 kubelet[3167]: E0123 01:07:23.706735 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:23.707232 containerd[1689]: time="2026-01-23T01:07:23.707184154Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:07:23.707384 kubelet[3167]: E0123 01:07:23.707290 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:23.707384 kubelet[3167]: W0123 01:07:23.707302 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:23.707384 kubelet[3167]: E0123 01:07:23.707314 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:23.707891 kubelet[3167]: E0123 01:07:23.707875 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:23.707891 kubelet[3167]: W0123 01:07:23.707891 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:23.708072 kubelet[3167]: E0123 01:07:23.707902 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:23.708488 containerd[1689]: time="2026-01-23T01:07:23.708427447Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.223767583s" Jan 23 01:07:23.708652 kubelet[3167]: E0123 01:07:23.708598 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:23.708652 kubelet[3167]: W0123 01:07:23.708611 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:23.708652 kubelet[3167]: E0123 01:07:23.708623 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:23.708831 containerd[1689]: time="2026-01-23T01:07:23.708727637Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 23 01:07:23.709982 kubelet[3167]: E0123 01:07:23.709962 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:23.710080 kubelet[3167]: W0123 01:07:23.710070 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:23.710278 kubelet[3167]: E0123 01:07:23.710121 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:23.710433 kubelet[3167]: E0123 01:07:23.710423 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:23.710505 kubelet[3167]: W0123 01:07:23.710496 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:23.710549 kubelet[3167]: E0123 01:07:23.710541 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:23.710758 kubelet[3167]: E0123 01:07:23.710749 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:23.710828 kubelet[3167]: W0123 01:07:23.710801 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:23.710998 kubelet[3167]: E0123 01:07:23.710871 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:23.711443 kubelet[3167]: E0123 01:07:23.711322 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:23.711443 kubelet[3167]: W0123 01:07:23.711343 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:23.711443 kubelet[3167]: E0123 01:07:23.711357 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:23.711824 kubelet[3167]: E0123 01:07:23.711754 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:23.711824 kubelet[3167]: W0123 01:07:23.711765 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:23.711824 kubelet[3167]: E0123 01:07:23.711778 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:23.712340 kubelet[3167]: E0123 01:07:23.712313 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:23.712424 kubelet[3167]: W0123 01:07:23.712416 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:23.712479 kubelet[3167]: E0123 01:07:23.712471 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:23.712710 kubelet[3167]: E0123 01:07:23.712703 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:23.712781 kubelet[3167]: W0123 01:07:23.712746 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:23.712781 kubelet[3167]: E0123 01:07:23.712757 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:23.713175 kubelet[3167]: E0123 01:07:23.713107 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:23.713175 kubelet[3167]: W0123 01:07:23.713116 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:23.713349 kubelet[3167]: E0123 01:07:23.713253 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:23.713490 kubelet[3167]: E0123 01:07:23.713467 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:23.713568 kubelet[3167]: W0123 01:07:23.713524 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:23.713568 kubelet[3167]: E0123 01:07:23.713536 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:23.713808 kubelet[3167]: E0123 01:07:23.713745 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:23.713808 kubelet[3167]: W0123 01:07:23.713754 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:23.713808 kubelet[3167]: E0123 01:07:23.713764 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:23.714095 kubelet[3167]: E0123 01:07:23.714086 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:23.714404 kubelet[3167]: W0123 01:07:23.714158 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:23.714404 kubelet[3167]: E0123 01:07:23.714171 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:23.715394 kubelet[3167]: E0123 01:07:23.714899 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:23.715394 kubelet[3167]: W0123 01:07:23.714910 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:23.715394 kubelet[3167]: E0123 01:07:23.714921 3167 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:23.717147 containerd[1689]: time="2026-01-23T01:07:23.716976300Z" level=info msg="CreateContainer within sandbox \"4979e4ee60c1aeb24f74e89c8e083457fe9069d3ca0b12f755b83891b3185b0b\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 23 01:07:23.735248 containerd[1689]: time="2026-01-23T01:07:23.735217171Z" level=info msg="Container 660467aea5ba67a41ab9261f91df52ac9b90ac774fac1571243368e7c4faa0fb: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:07:23.748554 containerd[1689]: time="2026-01-23T01:07:23.748529987Z" level=info msg="CreateContainer within sandbox \"4979e4ee60c1aeb24f74e89c8e083457fe9069d3ca0b12f755b83891b3185b0b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"660467aea5ba67a41ab9261f91df52ac9b90ac774fac1571243368e7c4faa0fb\"" Jan 23 01:07:23.748839 containerd[1689]: time="2026-01-23T01:07:23.748816742Z" level=info msg="StartContainer for \"660467aea5ba67a41ab9261f91df52ac9b90ac774fac1571243368e7c4faa0fb\"" Jan 23 01:07:23.750305 containerd[1689]: time="2026-01-23T01:07:23.750280695Z" level=info msg="connecting to shim 660467aea5ba67a41ab9261f91df52ac9b90ac774fac1571243368e7c4faa0fb" address="unix:///run/containerd/s/caafe5442b13583aa2531c5977697445945aa049d2e6a9f1aee6e152cdfdb2c4" protocol=ttrpc version=3 Jan 23 01:07:23.774252 systemd[1]: Started cri-containerd-660467aea5ba67a41ab9261f91df52ac9b90ac774fac1571243368e7c4faa0fb.scope - libcontainer container 660467aea5ba67a41ab9261f91df52ac9b90ac774fac1571243368e7c4faa0fb. Jan 23 01:07:23.822080 containerd[1689]: time="2026-01-23T01:07:23.822053101Z" level=info msg="StartContainer for \"660467aea5ba67a41ab9261f91df52ac9b90ac774fac1571243368e7c4faa0fb\" returns successfully" Jan 23 01:07:23.826939 systemd[1]: cri-containerd-660467aea5ba67a41ab9261f91df52ac9b90ac774fac1571243368e7c4faa0fb.scope: Deactivated successfully. 
Jan 23 01:07:23.829984 containerd[1689]: time="2026-01-23T01:07:23.829950538Z" level=info msg="received container exit event container_id:\"660467aea5ba67a41ab9261f91df52ac9b90ac774fac1571243368e7c4faa0fb\" id:\"660467aea5ba67a41ab9261f91df52ac9b90ac774fac1571243368e7c4faa0fb\" pid:3873 exited_at:{seconds:1769130443 nanos:829632717}" Jan 23 01:07:23.844486 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-660467aea5ba67a41ab9261f91df52ac9b90ac774fac1571243368e7c4faa0fb-rootfs.mount: Deactivated successfully. Jan 23 01:07:24.572763 kubelet[3167]: E0123 01:07:24.572728 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c5k7j" podUID="757abb7b-5fcc-4c56-ba6f-f09ed789238a" Jan 23 01:07:24.675876 kubelet[3167]: I0123 01:07:24.675113 3167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-f674dc9ff-n9lvw" podStartSLOduration=2.911632938 podStartE2EDuration="4.675097378s" podCreationTimestamp="2026-01-23 01:07:20 +0000 UTC" firstStartedPulling="2026-01-23 01:07:20.720584227 +0000 UTC m=+19.240042282" lastFinishedPulling="2026-01-23 01:07:22.484048667 +0000 UTC m=+21.003506722" observedRunningTime="2026-01-23 01:07:22.679909039 +0000 UTC m=+21.199367109" watchObservedRunningTime="2026-01-23 01:07:24.675097378 +0000 UTC m=+23.194555543" Jan 23 01:07:26.573074 kubelet[3167]: E0123 01:07:26.573018 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c5k7j" podUID="757abb7b-5fcc-4c56-ba6f-f09ed789238a" Jan 23 01:07:26.666147 containerd[1689]: time="2026-01-23T01:07:26.666054959Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 23 01:07:28.572636 kubelet[3167]: E0123 01:07:28.572578 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c5k7j" podUID="757abb7b-5fcc-4c56-ba6f-f09ed789238a" Jan 23 01:07:30.070890 containerd[1689]: time="2026-01-23T01:07:30.070843027Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:07:30.073332 containerd[1689]: time="2026-01-23T01:07:30.073304697Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Jan 23 01:07:30.075838 containerd[1689]: time="2026-01-23T01:07:30.075797641Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:07:30.083213 containerd[1689]: time="2026-01-23T01:07:30.083147274Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:07:30.084084 containerd[1689]: time="2026-01-23T01:07:30.083744640Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.417636202s" Jan 23 01:07:30.084084 containerd[1689]: time="2026-01-23T01:07:30.083787408Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference 
\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 23 01:07:30.089742 containerd[1689]: time="2026-01-23T01:07:30.089705292Z" level=info msg="CreateContainer within sandbox \"4979e4ee60c1aeb24f74e89c8e083457fe9069d3ca0b12f755b83891b3185b0b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 23 01:07:30.106474 containerd[1689]: time="2026-01-23T01:07:30.106440301Z" level=info msg="Container 020696d5dc13d4f5290888c51c8f458bee14435e09614c19107995501075e451: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:07:30.121008 containerd[1689]: time="2026-01-23T01:07:30.120986113Z" level=info msg="CreateContainer within sandbox \"4979e4ee60c1aeb24f74e89c8e083457fe9069d3ca0b12f755b83891b3185b0b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"020696d5dc13d4f5290888c51c8f458bee14435e09614c19107995501075e451\"" Jan 23 01:07:30.121467 containerd[1689]: time="2026-01-23T01:07:30.121447927Z" level=info msg="StartContainer for \"020696d5dc13d4f5290888c51c8f458bee14435e09614c19107995501075e451\"" Jan 23 01:07:30.122792 containerd[1689]: time="2026-01-23T01:07:30.122758689Z" level=info msg="connecting to shim 020696d5dc13d4f5290888c51c8f458bee14435e09614c19107995501075e451" address="unix:///run/containerd/s/caafe5442b13583aa2531c5977697445945aa049d2e6a9f1aee6e152cdfdb2c4" protocol=ttrpc version=3 Jan 23 01:07:30.142296 systemd[1]: Started cri-containerd-020696d5dc13d4f5290888c51c8f458bee14435e09614c19107995501075e451.scope - libcontainer container 020696d5dc13d4f5290888c51c8f458bee14435e09614c19107995501075e451. 
Jan 23 01:07:30.208354 containerd[1689]: time="2026-01-23T01:07:30.208332147Z" level=info msg="StartContainer for \"020696d5dc13d4f5290888c51c8f458bee14435e09614c19107995501075e451\" returns successfully" Jan 23 01:07:30.572395 kubelet[3167]: E0123 01:07:30.572345 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c5k7j" podUID="757abb7b-5fcc-4c56-ba6f-f09ed789238a" Jan 23 01:07:31.284168 containerd[1689]: time="2026-01-23T01:07:31.284111578Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 01:07:31.286164 systemd[1]: cri-containerd-020696d5dc13d4f5290888c51c8f458bee14435e09614c19107995501075e451.scope: Deactivated successfully. Jan 23 01:07:31.286402 systemd[1]: cri-containerd-020696d5dc13d4f5290888c51c8f458bee14435e09614c19107995501075e451.scope: Consumed 379ms CPU time, 191.5M memory peak, 171.3M written to disk. Jan 23 01:07:31.288260 containerd[1689]: time="2026-01-23T01:07:31.288226397Z" level=info msg="received container exit event container_id:\"020696d5dc13d4f5290888c51c8f458bee14435e09614c19107995501075e451\" id:\"020696d5dc13d4f5290888c51c8f458bee14435e09614c19107995501075e451\" pid:3936 exited_at:{seconds:1769130451 nanos:287665528}" Jan 23 01:07:31.305212 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-020696d5dc13d4f5290888c51c8f458bee14435e09614c19107995501075e451-rootfs.mount: Deactivated successfully. 
Jan 23 01:07:31.310559 kubelet[3167]: I0123 01:07:31.310060 3167 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Jan 23 01:07:31.592427 systemd[1]: Created slice kubepods-besteffort-pod962f2b06_4328_49cd_8756_14448bb2c728.slice - libcontainer container kubepods-besteffort-pod962f2b06_4328_49cd_8756_14448bb2c728.slice. Jan 23 01:07:31.664280 kubelet[3167]: I0123 01:07:31.664217 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5rtl\" (UniqueName: \"kubernetes.io/projected/962f2b06-4328-49cd-8756-14448bb2c728-kube-api-access-j5rtl\") pod \"whisker-6859b968b9-6fknr\" (UID: \"962f2b06-4328-49cd-8756-14448bb2c728\") " pod="calico-system/whisker-6859b968b9-6fknr" Jan 23 01:07:31.664280 kubelet[3167]: I0123 01:07:31.664250 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/962f2b06-4328-49cd-8756-14448bb2c728-whisker-backend-key-pair\") pod \"whisker-6859b968b9-6fknr\" (UID: \"962f2b06-4328-49cd-8756-14448bb2c728\") " pod="calico-system/whisker-6859b968b9-6fknr" Jan 23 01:07:31.664588 kubelet[3167]: I0123 01:07:31.664291 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/962f2b06-4328-49cd-8756-14448bb2c728-whisker-ca-bundle\") pod \"whisker-6859b968b9-6fknr\" (UID: \"962f2b06-4328-49cd-8756-14448bb2c728\") " pod="calico-system/whisker-6859b968b9-6fknr" Jan 23 01:07:32.147715 systemd[1]: Created slice kubepods-besteffort-pod9a0795c2_7ecf_4504_8071_c68e46a2784c.slice - libcontainer container kubepods-besteffort-pod9a0795c2_7ecf_4504_8071_c68e46a2784c.slice. 
Jan 23 01:07:32.166491 kubelet[3167]: I0123 01:07:32.166456 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gw5q\" (UniqueName: \"kubernetes.io/projected/9a0795c2-7ecf-4504-8071-c68e46a2784c-kube-api-access-5gw5q\") pod \"calico-apiserver-6cb5db5c6d-685h6\" (UID: \"9a0795c2-7ecf-4504-8071-c68e46a2784c\") " pod="calico-apiserver/calico-apiserver-6cb5db5c6d-685h6" Jan 23 01:07:32.166491 kubelet[3167]: I0123 01:07:32.166494 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9a0795c2-7ecf-4504-8071-c68e46a2784c-calico-apiserver-certs\") pod \"calico-apiserver-6cb5db5c6d-685h6\" (UID: \"9a0795c2-7ecf-4504-8071-c68e46a2784c\") " pod="calico-apiserver/calico-apiserver-6cb5db5c6d-685h6" Jan 23 01:07:32.188706 systemd[1]: Created slice kubepods-burstable-podd445968f_7574_4f43_9e96_ac6fb7bf12f4.slice - libcontainer container kubepods-burstable-podd445968f_7574_4f43_9e96_ac6fb7bf12f4.slice. Jan 23 01:07:32.197652 systemd[1]: Created slice kubepods-burstable-pod5c478b45_6d16_4dea_9945_700ba45b5350.slice - libcontainer container kubepods-burstable-pod5c478b45_6d16_4dea_9945_700ba45b5350.slice. Jan 23 01:07:32.200643 containerd[1689]: time="2026-01-23T01:07:32.199682964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6859b968b9-6fknr,Uid:962f2b06-4328-49cd-8756-14448bb2c728,Namespace:calico-system,Attempt:0,}" Jan 23 01:07:32.216681 systemd[1]: Created slice kubepods-besteffort-pod464a8745_942a_406e_a6f7_99a7e252e57c.slice - libcontainer container kubepods-besteffort-pod464a8745_942a_406e_a6f7_99a7e252e57c.slice. Jan 23 01:07:32.234407 systemd[1]: Created slice kubepods-besteffort-pod4dd5a4fb_a52c_4429_a2cb_1aa7fea80b6f.slice - libcontainer container kubepods-besteffort-pod4dd5a4fb_a52c_4429_a2cb_1aa7fea80b6f.slice. 
Jan 23 01:07:32.243079 systemd[1]: Created slice kubepods-besteffort-pod8938b6cd_2993_4177_a47a_bf7c96438cfc.slice - libcontainer container kubepods-besteffort-pod8938b6cd_2993_4177_a47a_bf7c96438cfc.slice. Jan 23 01:07:32.249598 systemd[1]: Created slice kubepods-besteffort-podc928b9b3_da34_4326_8f7b_130857d457b5.slice - libcontainer container kubepods-besteffort-podc928b9b3_da34_4326_8f7b_130857d457b5.slice. Jan 23 01:07:32.267491 kubelet[3167]: I0123 01:07:32.267050 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4dd5a4fb-a52c-4429-a2cb-1aa7fea80b6f-calico-apiserver-certs\") pod \"calico-apiserver-6cb5db5c6d-qkg5z\" (UID: \"4dd5a4fb-a52c-4429-a2cb-1aa7fea80b6f\") " pod="calico-apiserver/calico-apiserver-6cb5db5c6d-qkg5z" Jan 23 01:07:32.267491 kubelet[3167]: I0123 01:07:32.267087 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c928b9b3-da34-4326-8f7b-130857d457b5-config\") pod \"goldmane-7c778bb748-tdsjl\" (UID: \"c928b9b3-da34-4326-8f7b-130857d457b5\") " pod="calico-system/goldmane-7c778bb748-tdsjl" Jan 23 01:07:32.267491 kubelet[3167]: I0123 01:07:32.267169 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c478b45-6d16-4dea-9945-700ba45b5350-config-volume\") pod \"coredns-66bc5c9577-kgxwh\" (UID: \"5c478b45-6d16-4dea-9945-700ba45b5350\") " pod="kube-system/coredns-66bc5c9577-kgxwh" Jan 23 01:07:32.267491 kubelet[3167]: I0123 01:07:32.267185 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d445968f-7574-4f43-9e96-ac6fb7bf12f4-config-volume\") pod \"coredns-66bc5c9577-gl4bv\" (UID: \"d445968f-7574-4f43-9e96-ac6fb7bf12f4\") " 
pod="kube-system/coredns-66bc5c9577-gl4bv" Jan 23 01:07:32.267491 kubelet[3167]: I0123 01:07:32.267200 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpmlg\" (UniqueName: \"kubernetes.io/projected/5c478b45-6d16-4dea-9945-700ba45b5350-kube-api-access-kpmlg\") pod \"coredns-66bc5c9577-kgxwh\" (UID: \"5c478b45-6d16-4dea-9945-700ba45b5350\") " pod="kube-system/coredns-66bc5c9577-kgxwh" Jan 23 01:07:32.268282 kubelet[3167]: I0123 01:07:32.267247 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c928b9b3-da34-4326-8f7b-130857d457b5-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-tdsjl\" (UID: \"c928b9b3-da34-4326-8f7b-130857d457b5\") " pod="calico-system/goldmane-7c778bb748-tdsjl" Jan 23 01:07:32.268282 kubelet[3167]: I0123 01:07:32.267277 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8938b6cd-2993-4177-a47a-bf7c96438cfc-calico-apiserver-certs\") pod \"calico-apiserver-5b9b5df79c-pfx6f\" (UID: \"8938b6cd-2993-4177-a47a-bf7c96438cfc\") " pod="calico-apiserver/calico-apiserver-5b9b5df79c-pfx6f" Jan 23 01:07:32.268282 kubelet[3167]: I0123 01:07:32.267371 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/464a8745-942a-406e-a6f7-99a7e252e57c-tigera-ca-bundle\") pod \"calico-kube-controllers-5db5b8969f-b7ffs\" (UID: \"464a8745-942a-406e-a6f7-99a7e252e57c\") " pod="calico-system/calico-kube-controllers-5db5b8969f-b7ffs" Jan 23 01:07:32.268282 kubelet[3167]: I0123 01:07:32.267392 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7dhf\" (UniqueName: 
\"kubernetes.io/projected/464a8745-942a-406e-a6f7-99a7e252e57c-kube-api-access-v7dhf\") pod \"calico-kube-controllers-5db5b8969f-b7ffs\" (UID: \"464a8745-942a-406e-a6f7-99a7e252e57c\") " pod="calico-system/calico-kube-controllers-5db5b8969f-b7ffs" Jan 23 01:07:32.268282 kubelet[3167]: I0123 01:07:32.267423 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9zpv\" (UniqueName: \"kubernetes.io/projected/d445968f-7574-4f43-9e96-ac6fb7bf12f4-kube-api-access-v9zpv\") pod \"coredns-66bc5c9577-gl4bv\" (UID: \"d445968f-7574-4f43-9e96-ac6fb7bf12f4\") " pod="kube-system/coredns-66bc5c9577-gl4bv" Jan 23 01:07:32.269149 kubelet[3167]: I0123 01:07:32.267759 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/c928b9b3-da34-4326-8f7b-130857d457b5-goldmane-key-pair\") pod \"goldmane-7c778bb748-tdsjl\" (UID: \"c928b9b3-da34-4326-8f7b-130857d457b5\") " pod="calico-system/goldmane-7c778bb748-tdsjl" Jan 23 01:07:32.269149 kubelet[3167]: I0123 01:07:32.267780 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkz5n\" (UniqueName: \"kubernetes.io/projected/c928b9b3-da34-4326-8f7b-130857d457b5-kube-api-access-kkz5n\") pod \"goldmane-7c778bb748-tdsjl\" (UID: \"c928b9b3-da34-4326-8f7b-130857d457b5\") " pod="calico-system/goldmane-7c778bb748-tdsjl" Jan 23 01:07:32.269149 kubelet[3167]: I0123 01:07:32.267800 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxv96\" (UniqueName: \"kubernetes.io/projected/8938b6cd-2993-4177-a47a-bf7c96438cfc-kube-api-access-pxv96\") pod \"calico-apiserver-5b9b5df79c-pfx6f\" (UID: \"8938b6cd-2993-4177-a47a-bf7c96438cfc\") " pod="calico-apiserver/calico-apiserver-5b9b5df79c-pfx6f" Jan 23 01:07:32.269149 kubelet[3167]: I0123 01:07:32.267815 3167 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcvzh\" (UniqueName: \"kubernetes.io/projected/4dd5a4fb-a52c-4429-a2cb-1aa7fea80b6f-kube-api-access-xcvzh\") pod \"calico-apiserver-6cb5db5c6d-qkg5z\" (UID: \"4dd5a4fb-a52c-4429-a2cb-1aa7fea80b6f\") " pod="calico-apiserver/calico-apiserver-6cb5db5c6d-qkg5z" Jan 23 01:07:32.287684 containerd[1689]: time="2026-01-23T01:07:32.287645071Z" level=error msg="Failed to destroy network for sandbox \"ea3ad879af7b4d913e9ff2092b72ddc55310779a4f8f4c169896007aa75e9735\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:07:32.290368 containerd[1689]: time="2026-01-23T01:07:32.290331056Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6859b968b9-6fknr,Uid:962f2b06-4328-49cd-8756-14448bb2c728,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea3ad879af7b4d913e9ff2092b72ddc55310779a4f8f4c169896007aa75e9735\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:07:32.290518 kubelet[3167]: E0123 01:07:32.290489 3167 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea3ad879af7b4d913e9ff2092b72ddc55310779a4f8f4c169896007aa75e9735\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:07:32.290580 kubelet[3167]: E0123 01:07:32.290538 3167 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"ea3ad879af7b4d913e9ff2092b72ddc55310779a4f8f4c169896007aa75e9735\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6859b968b9-6fknr" Jan 23 01:07:32.290580 kubelet[3167]: E0123 01:07:32.290554 3167 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea3ad879af7b4d913e9ff2092b72ddc55310779a4f8f4c169896007aa75e9735\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6859b968b9-6fknr" Jan 23 01:07:32.290648 kubelet[3167]: E0123 01:07:32.290601 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6859b968b9-6fknr_calico-system(962f2b06-4328-49cd-8756-14448bb2c728)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6859b968b9-6fknr_calico-system(962f2b06-4328-49cd-8756-14448bb2c728)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ea3ad879af7b4d913e9ff2092b72ddc55310779a4f8f4c169896007aa75e9735\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6859b968b9-6fknr" podUID="962f2b06-4328-49cd-8756-14448bb2c728" Jan 23 01:07:32.305770 systemd[1]: run-netns-cni\x2d96754af1\x2d183d\x2de2f0\x2d9033\x2deaf29719cdec.mount: Deactivated successfully. 
Jan 23 01:07:32.456261 containerd[1689]: time="2026-01-23T01:07:32.456195331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cb5db5c6d-685h6,Uid:9a0795c2-7ecf-4504-8071-c68e46a2784c,Namespace:calico-apiserver,Attempt:0,}" Jan 23 01:07:32.498254 containerd[1689]: time="2026-01-23T01:07:32.498219338Z" level=error msg="Failed to destroy network for sandbox \"2c88307dc960f01dc948ce327f283f1eb4895caf3866f1af3f097cbcff643f18\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:07:32.499002 containerd[1689]: time="2026-01-23T01:07:32.498982856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-gl4bv,Uid:d445968f-7574-4f43-9e96-ac6fb7bf12f4,Namespace:kube-system,Attempt:0,}" Jan 23 01:07:32.500851 containerd[1689]: time="2026-01-23T01:07:32.500765946Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cb5db5c6d-685h6,Uid:9a0795c2-7ecf-4504-8071-c68e46a2784c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c88307dc960f01dc948ce327f283f1eb4895caf3866f1af3f097cbcff643f18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:07:32.500967 kubelet[3167]: E0123 01:07:32.500934 3167 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c88307dc960f01dc948ce327f283f1eb4895caf3866f1af3f097cbcff643f18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:07:32.501007 kubelet[3167]: E0123 01:07:32.500970 3167 
kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c88307dc960f01dc948ce327f283f1eb4895caf3866f1af3f097cbcff643f18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cb5db5c6d-685h6" Jan 23 01:07:32.501032 kubelet[3167]: E0123 01:07:32.500987 3167 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c88307dc960f01dc948ce327f283f1eb4895caf3866f1af3f097cbcff643f18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cb5db5c6d-685h6" Jan 23 01:07:32.501065 kubelet[3167]: E0123 01:07:32.501050 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6cb5db5c6d-685h6_calico-apiserver(9a0795c2-7ecf-4504-8071-c68e46a2784c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6cb5db5c6d-685h6_calico-apiserver(9a0795c2-7ecf-4504-8071-c68e46a2784c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2c88307dc960f01dc948ce327f283f1eb4895caf3866f1af3f097cbcff643f18\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6cb5db5c6d-685h6" podUID="9a0795c2-7ecf-4504-8071-c68e46a2784c" Jan 23 01:07:32.508346 containerd[1689]: time="2026-01-23T01:07:32.508325227Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-66bc5c9577-kgxwh,Uid:5c478b45-6d16-4dea-9945-700ba45b5350,Namespace:kube-system,Attempt:0,}" Jan 23 01:07:32.534285 containerd[1689]: time="2026-01-23T01:07:32.534243139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5db5b8969f-b7ffs,Uid:464a8745-942a-406e-a6f7-99a7e252e57c,Namespace:calico-system,Attempt:0,}" Jan 23 01:07:32.548522 containerd[1689]: time="2026-01-23T01:07:32.548279826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cb5db5c6d-qkg5z,Uid:4dd5a4fb-a52c-4429-a2cb-1aa7fea80b6f,Namespace:calico-apiserver,Attempt:0,}" Jan 23 01:07:32.553350 containerd[1689]: time="2026-01-23T01:07:32.553328681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b9b5df79c-pfx6f,Uid:8938b6cd-2993-4177-a47a-bf7c96438cfc,Namespace:calico-apiserver,Attempt:0,}" Jan 23 01:07:32.558871 containerd[1689]: time="2026-01-23T01:07:32.558847051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-tdsjl,Uid:c928b9b3-da34-4326-8f7b-130857d457b5,Namespace:calico-system,Attempt:0,}" Jan 23 01:07:32.579263 systemd[1]: Created slice kubepods-besteffort-pod757abb7b_5fcc_4c56_ba6f_f09ed789238a.slice - libcontainer container kubepods-besteffort-pod757abb7b_5fcc_4c56_ba6f_f09ed789238a.slice. 
Jan 23 01:07:32.585023 containerd[1689]: time="2026-01-23T01:07:32.584993094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c5k7j,Uid:757abb7b-5fcc-4c56-ba6f-f09ed789238a,Namespace:calico-system,Attempt:0,}" Jan 23 01:07:32.601108 containerd[1689]: time="2026-01-23T01:07:32.601078353Z" level=error msg="Failed to destroy network for sandbox \"d2c6ca987530ab1a89f973a8b5a52965827c680143ec331057a583fd4f190200\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:07:32.601393 containerd[1689]: time="2026-01-23T01:07:32.601373190Z" level=error msg="Failed to destroy network for sandbox \"d907a5470389f410e431b3368a695c705c4b57c166041b9f46d37fc3192a69c3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:07:32.604319 containerd[1689]: time="2026-01-23T01:07:32.604289011Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-kgxwh,Uid:5c478b45-6d16-4dea-9945-700ba45b5350,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d907a5470389f410e431b3368a695c705c4b57c166041b9f46d37fc3192a69c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:07:32.605851 kubelet[3167]: E0123 01:07:32.605682 3167 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d907a5470389f410e431b3368a695c705c4b57c166041b9f46d37fc3192a69c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Jan 23 01:07:32.606150 kubelet[3167]: E0123 01:07:32.605965 3167 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d907a5470389f410e431b3368a695c705c4b57c166041b9f46d37fc3192a69c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-kgxwh" Jan 23 01:07:32.606150 kubelet[3167]: E0123 01:07:32.605988 3167 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d907a5470389f410e431b3368a695c705c4b57c166041b9f46d37fc3192a69c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-kgxwh" Jan 23 01:07:32.606150 kubelet[3167]: E0123 01:07:32.606034 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-kgxwh_kube-system(5c478b45-6d16-4dea-9945-700ba45b5350)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-kgxwh_kube-system(5c478b45-6d16-4dea-9945-700ba45b5350)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d907a5470389f410e431b3368a695c705c4b57c166041b9f46d37fc3192a69c3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-kgxwh" podUID="5c478b45-6d16-4dea-9945-700ba45b5350" Jan 23 01:07:32.609391 containerd[1689]: time="2026-01-23T01:07:32.609353176Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-66bc5c9577-gl4bv,Uid:d445968f-7574-4f43-9e96-ac6fb7bf12f4,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2c6ca987530ab1a89f973a8b5a52965827c680143ec331057a583fd4f190200\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:07:32.609627 kubelet[3167]: E0123 01:07:32.609606 3167 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2c6ca987530ab1a89f973a8b5a52965827c680143ec331057a583fd4f190200\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:07:32.609882 kubelet[3167]: E0123 01:07:32.609795 3167 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2c6ca987530ab1a89f973a8b5a52965827c680143ec331057a583fd4f190200\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-gl4bv" Jan 23 01:07:32.609882 kubelet[3167]: E0123 01:07:32.609816 3167 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2c6ca987530ab1a89f973a8b5a52965827c680143ec331057a583fd4f190200\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-gl4bv" Jan 23 01:07:32.609882 kubelet[3167]: E0123 01:07:32.609860 3167 pod_workers.go:1324] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-gl4bv_kube-system(d445968f-7574-4f43-9e96-ac6fb7bf12f4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-gl4bv_kube-system(d445968f-7574-4f43-9e96-ac6fb7bf12f4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d2c6ca987530ab1a89f973a8b5a52965827c680143ec331057a583fd4f190200\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-gl4bv" podUID="d445968f-7574-4f43-9e96-ac6fb7bf12f4" Jan 23 01:07:32.671148 containerd[1689]: time="2026-01-23T01:07:32.671083330Z" level=error msg="Failed to destroy network for sandbox \"20d8df5bb9ee0bcc9bc9aa00d38a98907544578f686534794fde4385b87873ea\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:07:32.675425 containerd[1689]: time="2026-01-23T01:07:32.675270349Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5db5b8969f-b7ffs,Uid:464a8745-942a-406e-a6f7-99a7e252e57c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"20d8df5bb9ee0bcc9bc9aa00d38a98907544578f686534794fde4385b87873ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:07:32.675889 kubelet[3167]: E0123 01:07:32.675859 3167 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20d8df5bb9ee0bcc9bc9aa00d38a98907544578f686534794fde4385b87873ea\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:07:32.676192 kubelet[3167]: E0123 01:07:32.675906 3167 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20d8df5bb9ee0bcc9bc9aa00d38a98907544578f686534794fde4385b87873ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5db5b8969f-b7ffs" Jan 23 01:07:32.676192 kubelet[3167]: E0123 01:07:32.675960 3167 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20d8df5bb9ee0bcc9bc9aa00d38a98907544578f686534794fde4385b87873ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5db5b8969f-b7ffs" Jan 23 01:07:32.676192 kubelet[3167]: E0123 01:07:32.676075 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5db5b8969f-b7ffs_calico-system(464a8745-942a-406e-a6f7-99a7e252e57c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5db5b8969f-b7ffs_calico-system(464a8745-942a-406e-a6f7-99a7e252e57c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"20d8df5bb9ee0bcc9bc9aa00d38a98907544578f686534794fde4385b87873ea\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5db5b8969f-b7ffs" 
podUID="464a8745-942a-406e-a6f7-99a7e252e57c" Jan 23 01:07:32.678931 containerd[1689]: time="2026-01-23T01:07:32.678886695Z" level=error msg="Failed to destroy network for sandbox \"1bd333f3d7e731bb658cc6bbcfe56c3c15da1ac70cb22865fe591de52781352c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:07:32.683515 containerd[1689]: time="2026-01-23T01:07:32.683437120Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c5k7j,Uid:757abb7b-5fcc-4c56-ba6f-f09ed789238a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1bd333f3d7e731bb658cc6bbcfe56c3c15da1ac70cb22865fe591de52781352c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:07:32.683897 kubelet[3167]: E0123 01:07:32.683689 3167 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1bd333f3d7e731bb658cc6bbcfe56c3c15da1ac70cb22865fe591de52781352c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:07:32.683897 kubelet[3167]: E0123 01:07:32.683841 3167 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1bd333f3d7e731bb658cc6bbcfe56c3c15da1ac70cb22865fe591de52781352c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-c5k7j" Jan 23 01:07:32.683897 kubelet[3167]: E0123 
01:07:32.683864 3167 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1bd333f3d7e731bb658cc6bbcfe56c3c15da1ac70cb22865fe591de52781352c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-c5k7j" Jan 23 01:07:32.684102 kubelet[3167]: E0123 01:07:32.684042 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-c5k7j_calico-system(757abb7b-5fcc-4c56-ba6f-f09ed789238a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-c5k7j_calico-system(757abb7b-5fcc-4c56-ba6f-f09ed789238a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1bd333f3d7e731bb658cc6bbcfe56c3c15da1ac70cb22865fe591de52781352c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-c5k7j" podUID="757abb7b-5fcc-4c56-ba6f-f09ed789238a" Jan 23 01:07:32.698219 containerd[1689]: time="2026-01-23T01:07:32.698154060Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 23 01:07:32.702244 containerd[1689]: time="2026-01-23T01:07:32.702208296Z" level=error msg="Failed to destroy network for sandbox \"0ad019788cbe713208dd9a36e1f1d59e65b7e484a4a3762697d8bcc3b2d310cf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:07:32.710277 containerd[1689]: time="2026-01-23T01:07:32.710200936Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-7c778bb748-tdsjl,Uid:c928b9b3-da34-4326-8f7b-130857d457b5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ad019788cbe713208dd9a36e1f1d59e65b7e484a4a3762697d8bcc3b2d310cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:07:32.711179 kubelet[3167]: E0123 01:07:32.710551 3167 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ad019788cbe713208dd9a36e1f1d59e65b7e484a4a3762697d8bcc3b2d310cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:07:32.711179 kubelet[3167]: E0123 01:07:32.710590 3167 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ad019788cbe713208dd9a36e1f1d59e65b7e484a4a3762697d8bcc3b2d310cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-tdsjl" Jan 23 01:07:32.711179 kubelet[3167]: E0123 01:07:32.710609 3167 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ad019788cbe713208dd9a36e1f1d59e65b7e484a4a3762697d8bcc3b2d310cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-tdsjl" Jan 23 01:07:32.711288 kubelet[3167]: E0123 01:07:32.710643 3167 pod_workers.go:1324] "Error syncing 
pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-tdsjl_calico-system(c928b9b3-da34-4326-8f7b-130857d457b5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-tdsjl_calico-system(c928b9b3-da34-4326-8f7b-130857d457b5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0ad019788cbe713208dd9a36e1f1d59e65b7e484a4a3762697d8bcc3b2d310cf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-tdsjl" podUID="c928b9b3-da34-4326-8f7b-130857d457b5" Jan 23 01:07:32.714284 containerd[1689]: time="2026-01-23T01:07:32.714242778Z" level=error msg="Failed to destroy network for sandbox \"07f8904f1b871ad9207513700d9e4eeb2edddcb0f2625d8e201f93486669b944\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:07:32.717415 containerd[1689]: time="2026-01-23T01:07:32.717256005Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cb5db5c6d-qkg5z,Uid:4dd5a4fb-a52c-4429-a2cb-1aa7fea80b6f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"07f8904f1b871ad9207513700d9e4eeb2edddcb0f2625d8e201f93486669b944\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:07:32.718472 kubelet[3167]: E0123 01:07:32.718317 3167 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"07f8904f1b871ad9207513700d9e4eeb2edddcb0f2625d8e201f93486669b944\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:07:32.718472 kubelet[3167]: E0123 01:07:32.718354 3167 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"07f8904f1b871ad9207513700d9e4eeb2edddcb0f2625d8e201f93486669b944\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cb5db5c6d-qkg5z" Jan 23 01:07:32.718472 kubelet[3167]: E0123 01:07:32.718371 3167 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"07f8904f1b871ad9207513700d9e4eeb2edddcb0f2625d8e201f93486669b944\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cb5db5c6d-qkg5z" Jan 23 01:07:32.718600 kubelet[3167]: E0123 01:07:32.718414 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6cb5db5c6d-qkg5z_calico-apiserver(4dd5a4fb-a52c-4429-a2cb-1aa7fea80b6f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6cb5db5c6d-qkg5z_calico-apiserver(4dd5a4fb-a52c-4429-a2cb-1aa7fea80b6f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"07f8904f1b871ad9207513700d9e4eeb2edddcb0f2625d8e201f93486669b944\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6cb5db5c6d-qkg5z" podUID="4dd5a4fb-a52c-4429-a2cb-1aa7fea80b6f" Jan 23 
01:07:32.730322 containerd[1689]: time="2026-01-23T01:07:32.730285903Z" level=error msg="Failed to destroy network for sandbox \"55661cfbbca8abf1b1c4d7d46cfca7d09c714774d279bbabfec47ec7ef9dd686\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:07:32.732637 containerd[1689]: time="2026-01-23T01:07:32.732606524Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b9b5df79c-pfx6f,Uid:8938b6cd-2993-4177-a47a-bf7c96438cfc,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"55661cfbbca8abf1b1c4d7d46cfca7d09c714774d279bbabfec47ec7ef9dd686\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:07:32.732782 kubelet[3167]: E0123 01:07:32.732755 3167 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55661cfbbca8abf1b1c4d7d46cfca7d09c714774d279bbabfec47ec7ef9dd686\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:07:32.732832 kubelet[3167]: E0123 01:07:32.732798 3167 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55661cfbbca8abf1b1c4d7d46cfca7d09c714774d279bbabfec47ec7ef9dd686\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b9b5df79c-pfx6f" Jan 23 01:07:32.732832 kubelet[3167]: E0123 01:07:32.732817 3167 
kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55661cfbbca8abf1b1c4d7d46cfca7d09c714774d279bbabfec47ec7ef9dd686\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b9b5df79c-pfx6f" Jan 23 01:07:32.732902 kubelet[3167]: E0123 01:07:32.732866 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5b9b5df79c-pfx6f_calico-apiserver(8938b6cd-2993-4177-a47a-bf7c96438cfc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5b9b5df79c-pfx6f_calico-apiserver(8938b6cd-2993-4177-a47a-bf7c96438cfc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"55661cfbbca8abf1b1c4d7d46cfca7d09c714774d279bbabfec47ec7ef9dd686\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b9b5df79c-pfx6f" podUID="8938b6cd-2993-4177-a47a-bf7c96438cfc" Jan 23 01:07:37.075505 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount626270107.mount: Deactivated successfully. 
Jan 23 01:07:37.094923 containerd[1689]: time="2026-01-23T01:07:37.094880715Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:07:37.097140 containerd[1689]: time="2026-01-23T01:07:37.097067269Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Jan 23 01:07:37.099476 containerd[1689]: time="2026-01-23T01:07:37.099440797Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:07:37.102552 containerd[1689]: time="2026-01-23T01:07:37.102420635Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:07:37.102968 containerd[1689]: time="2026-01-23T01:07:37.102712697Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 4.404529974s" Jan 23 01:07:37.102968 containerd[1689]: time="2026-01-23T01:07:37.102738577Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 23 01:07:37.119990 containerd[1689]: time="2026-01-23T01:07:37.119961068Z" level=info msg="CreateContainer within sandbox \"4979e4ee60c1aeb24f74e89c8e083457fe9069d3ca0b12f755b83891b3185b0b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 23 01:07:37.135154 containerd[1689]: time="2026-01-23T01:07:37.134710669Z" level=info msg="Container 
8eb89fbd9c5a2e8e69f09945207bf36dfc612930f43846ae62b927378fb07c15: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:07:37.150416 containerd[1689]: time="2026-01-23T01:07:37.150391482Z" level=info msg="CreateContainer within sandbox \"4979e4ee60c1aeb24f74e89c8e083457fe9069d3ca0b12f755b83891b3185b0b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"8eb89fbd9c5a2e8e69f09945207bf36dfc612930f43846ae62b927378fb07c15\"" Jan 23 01:07:37.150827 containerd[1689]: time="2026-01-23T01:07:37.150810003Z" level=info msg="StartContainer for \"8eb89fbd9c5a2e8e69f09945207bf36dfc612930f43846ae62b927378fb07c15\"" Jan 23 01:07:37.152331 containerd[1689]: time="2026-01-23T01:07:37.152304366Z" level=info msg="connecting to shim 8eb89fbd9c5a2e8e69f09945207bf36dfc612930f43846ae62b927378fb07c15" address="unix:///run/containerd/s/caafe5442b13583aa2531c5977697445945aa049d2e6a9f1aee6e152cdfdb2c4" protocol=ttrpc version=3 Jan 23 01:07:37.168654 systemd[1]: Started cri-containerd-8eb89fbd9c5a2e8e69f09945207bf36dfc612930f43846ae62b927378fb07c15.scope - libcontainer container 8eb89fbd9c5a2e8e69f09945207bf36dfc612930f43846ae62b927378fb07c15. Jan 23 01:07:37.218501 containerd[1689]: time="2026-01-23T01:07:37.218481668Z" level=info msg="StartContainer for \"8eb89fbd9c5a2e8e69f09945207bf36dfc612930f43846ae62b927378fb07c15\" returns successfully" Jan 23 01:07:37.594475 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 23 01:07:37.594560 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 23 01:07:37.744535 kubelet[3167]: I0123 01:07:37.744483 3167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-44s44" podStartSLOduration=1.533623035 podStartE2EDuration="17.744458435s" podCreationTimestamp="2026-01-23 01:07:20 +0000 UTC" firstStartedPulling="2026-01-23 01:07:20.892578093 +0000 UTC m=+19.412036151" lastFinishedPulling="2026-01-23 01:07:37.103413495 +0000 UTC m=+35.622871551" observedRunningTime="2026-01-23 01:07:37.743409945 +0000 UTC m=+36.262868027" watchObservedRunningTime="2026-01-23 01:07:37.744458435 +0000 UTC m=+36.263916500" Jan 23 01:07:37.797665 kubelet[3167]: I0123 01:07:37.797641 3167 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5rtl\" (UniqueName: \"kubernetes.io/projected/962f2b06-4328-49cd-8756-14448bb2c728-kube-api-access-j5rtl\") pod \"962f2b06-4328-49cd-8756-14448bb2c728\" (UID: \"962f2b06-4328-49cd-8756-14448bb2c728\") " Jan 23 01:07:37.797780 kubelet[3167]: I0123 01:07:37.797673 3167 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/962f2b06-4328-49cd-8756-14448bb2c728-whisker-backend-key-pair\") pod \"962f2b06-4328-49cd-8756-14448bb2c728\" (UID: \"962f2b06-4328-49cd-8756-14448bb2c728\") " Jan 23 01:07:37.797780 kubelet[3167]: I0123 01:07:37.797687 3167 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/962f2b06-4328-49cd-8756-14448bb2c728-whisker-ca-bundle\") pod \"962f2b06-4328-49cd-8756-14448bb2c728\" (UID: \"962f2b06-4328-49cd-8756-14448bb2c728\") " Jan 23 01:07:37.799693 kubelet[3167]: I0123 01:07:37.799667 3167 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/962f2b06-4328-49cd-8756-14448bb2c728-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod 
"962f2b06-4328-49cd-8756-14448bb2c728" (UID: "962f2b06-4328-49cd-8756-14448bb2c728"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 01:07:37.802165 kubelet[3167]: I0123 01:07:37.801690 3167 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/962f2b06-4328-49cd-8756-14448bb2c728-kube-api-access-j5rtl" (OuterVolumeSpecName: "kube-api-access-j5rtl") pod "962f2b06-4328-49cd-8756-14448bb2c728" (UID: "962f2b06-4328-49cd-8756-14448bb2c728"). InnerVolumeSpecName "kube-api-access-j5rtl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 01:07:37.802380 kubelet[3167]: I0123 01:07:37.802365 3167 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/962f2b06-4328-49cd-8756-14448bb2c728-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "962f2b06-4328-49cd-8756-14448bb2c728" (UID: "962f2b06-4328-49cd-8756-14448bb2c728"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 01:07:37.898922 kubelet[3167]: I0123 01:07:37.898641 3167 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-j5rtl\" (UniqueName: \"kubernetes.io/projected/962f2b06-4328-49cd-8756-14448bb2c728-kube-api-access-j5rtl\") on node \"ci-4459.2.2-n-059e17308a\" DevicePath \"\"" Jan 23 01:07:37.898922 kubelet[3167]: I0123 01:07:37.898666 3167 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/962f2b06-4328-49cd-8756-14448bb2c728-whisker-backend-key-pair\") on node \"ci-4459.2.2-n-059e17308a\" DevicePath \"\"" Jan 23 01:07:37.898922 kubelet[3167]: I0123 01:07:37.898676 3167 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/962f2b06-4328-49cd-8756-14448bb2c728-whisker-ca-bundle\") on node \"ci-4459.2.2-n-059e17308a\" DevicePath \"\"" Jan 23 01:07:38.074427 systemd[1]: var-lib-kubelet-pods-962f2b06\x2d4328\x2d49cd\x2d8756\x2d14448bb2c728-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 23 01:07:38.074514 systemd[1]: var-lib-kubelet-pods-962f2b06\x2d4328\x2d49cd\x2d8756\x2d14448bb2c728-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dj5rtl.mount: Deactivated successfully. Jan 23 01:07:38.715326 systemd[1]: Removed slice kubepods-besteffort-pod962f2b06_4328_49cd_8756_14448bb2c728.slice - libcontainer container kubepods-besteffort-pod962f2b06_4328_49cd_8756_14448bb2c728.slice. Jan 23 01:07:38.796302 systemd[1]: Created slice kubepods-besteffort-pod2cc70e7d_4b4a_4947_a27a_3aa84d2bff8f.slice - libcontainer container kubepods-besteffort-pod2cc70e7d_4b4a_4947_a27a_3aa84d2bff8f.slice. 
Jan 23 01:07:38.904417 kubelet[3167]: I0123 01:07:38.904351 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wbff\" (UniqueName: \"kubernetes.io/projected/2cc70e7d-4b4a-4947-a27a-3aa84d2bff8f-kube-api-access-7wbff\") pod \"whisker-8f6bd7b4f-tth7c\" (UID: \"2cc70e7d-4b4a-4947-a27a-3aa84d2bff8f\") " pod="calico-system/whisker-8f6bd7b4f-tth7c" Jan 23 01:07:38.904734 kubelet[3167]: I0123 01:07:38.904422 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2cc70e7d-4b4a-4947-a27a-3aa84d2bff8f-whisker-ca-bundle\") pod \"whisker-8f6bd7b4f-tth7c\" (UID: \"2cc70e7d-4b4a-4947-a27a-3aa84d2bff8f\") " pod="calico-system/whisker-8f6bd7b4f-tth7c" Jan 23 01:07:38.904734 kubelet[3167]: I0123 01:07:38.904487 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2cc70e7d-4b4a-4947-a27a-3aa84d2bff8f-whisker-backend-key-pair\") pod \"whisker-8f6bd7b4f-tth7c\" (UID: \"2cc70e7d-4b4a-4947-a27a-3aa84d2bff8f\") " pod="calico-system/whisker-8f6bd7b4f-tth7c" Jan 23 01:07:39.105381 containerd[1689]: time="2026-01-23T01:07:39.105297761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8f6bd7b4f-tth7c,Uid:2cc70e7d-4b4a-4947-a27a-3aa84d2bff8f,Namespace:calico-system,Attempt:0,}" Jan 23 01:07:39.259696 systemd-networkd[1477]: cali6dc8057c53d: Link UP Jan 23 01:07:39.261549 systemd-networkd[1477]: cali6dc8057c53d: Gained carrier Jan 23 01:07:39.275586 containerd[1689]: 2026-01-23 01:07:39.150 [INFO][4423] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 01:07:39.275586 containerd[1689]: 2026-01-23 01:07:39.164 [INFO][4423] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4459.2.2--n--059e17308a-k8s-whisker--8f6bd7b4f--tth7c-eth0 whisker-8f6bd7b4f- calico-system 2cc70e7d-4b4a-4947-a27a-3aa84d2bff8f 891 0 2026-01-23 01:07:38 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:8f6bd7b4f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4459.2.2-n-059e17308a whisker-8f6bd7b4f-tth7c eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali6dc8057c53d [] [] }} ContainerID="497aac02776c85edf20a77f2564267047ce95f64d7700adc20cc7cf5ed75cc4d" Namespace="calico-system" Pod="whisker-8f6bd7b4f-tth7c" WorkloadEndpoint="ci--4459.2.2--n--059e17308a-k8s-whisker--8f6bd7b4f--tth7c-" Jan 23 01:07:39.275586 containerd[1689]: 2026-01-23 01:07:39.164 [INFO][4423] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="497aac02776c85edf20a77f2564267047ce95f64d7700adc20cc7cf5ed75cc4d" Namespace="calico-system" Pod="whisker-8f6bd7b4f-tth7c" WorkloadEndpoint="ci--4459.2.2--n--059e17308a-k8s-whisker--8f6bd7b4f--tth7c-eth0" Jan 23 01:07:39.275586 containerd[1689]: 2026-01-23 01:07:39.200 [INFO][4434] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="497aac02776c85edf20a77f2564267047ce95f64d7700adc20cc7cf5ed75cc4d" HandleID="k8s-pod-network.497aac02776c85edf20a77f2564267047ce95f64d7700adc20cc7cf5ed75cc4d" Workload="ci--4459.2.2--n--059e17308a-k8s-whisker--8f6bd7b4f--tth7c-eth0" Jan 23 01:07:39.275911 containerd[1689]: 2026-01-23 01:07:39.200 [INFO][4434] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="497aac02776c85edf20a77f2564267047ce95f64d7700adc20cc7cf5ed75cc4d" HandleID="k8s-pod-network.497aac02776c85edf20a77f2564267047ce95f64d7700adc20cc7cf5ed75cc4d" Workload="ci--4459.2.2--n--059e17308a-k8s-whisker--8f6bd7b4f--tth7c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5130), Attrs:map[string]string{"namespace":"calico-system", 
"node":"ci-4459.2.2-n-059e17308a", "pod":"whisker-8f6bd7b4f-tth7c", "timestamp":"2026-01-23 01:07:39.200617143 +0000 UTC"}, Hostname:"ci-4459.2.2-n-059e17308a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 01:07:39.275911 containerd[1689]: 2026-01-23 01:07:39.200 [INFO][4434] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 01:07:39.275911 containerd[1689]: 2026-01-23 01:07:39.200 [INFO][4434] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 01:07:39.275911 containerd[1689]: 2026-01-23 01:07:39.200 [INFO][4434] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-n-059e17308a' Jan 23 01:07:39.275911 containerd[1689]: 2026-01-23 01:07:39.206 [INFO][4434] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.497aac02776c85edf20a77f2564267047ce95f64d7700adc20cc7cf5ed75cc4d" host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:39.275911 containerd[1689]: 2026-01-23 01:07:39.211 [INFO][4434] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:39.275911 containerd[1689]: 2026-01-23 01:07:39.215 [INFO][4434] ipam/ipam.go 511: Trying affinity for 192.168.42.128/26 host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:39.275911 containerd[1689]: 2026-01-23 01:07:39.219 [INFO][4434] ipam/ipam.go 158: Attempting to load block cidr=192.168.42.128/26 host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:39.275911 containerd[1689]: 2026-01-23 01:07:39.221 [INFO][4434] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.42.128/26 host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:39.276196 containerd[1689]: 2026-01-23 01:07:39.222 [INFO][4434] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.42.128/26 
handle="k8s-pod-network.497aac02776c85edf20a77f2564267047ce95f64d7700adc20cc7cf5ed75cc4d" host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:39.276196 containerd[1689]: 2026-01-23 01:07:39.223 [INFO][4434] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.497aac02776c85edf20a77f2564267047ce95f64d7700adc20cc7cf5ed75cc4d Jan 23 01:07:39.276196 containerd[1689]: 2026-01-23 01:07:39.230 [INFO][4434] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.42.128/26 handle="k8s-pod-network.497aac02776c85edf20a77f2564267047ce95f64d7700adc20cc7cf5ed75cc4d" host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:39.276196 containerd[1689]: 2026-01-23 01:07:39.238 [INFO][4434] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.42.129/26] block=192.168.42.128/26 handle="k8s-pod-network.497aac02776c85edf20a77f2564267047ce95f64d7700adc20cc7cf5ed75cc4d" host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:39.276196 containerd[1689]: 2026-01-23 01:07:39.239 [INFO][4434] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.42.129/26] handle="k8s-pod-network.497aac02776c85edf20a77f2564267047ce95f64d7700adc20cc7cf5ed75cc4d" host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:39.276196 containerd[1689]: 2026-01-23 01:07:39.239 [INFO][4434] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 01:07:39.276196 containerd[1689]: 2026-01-23 01:07:39.239 [INFO][4434] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.42.129/26] IPv6=[] ContainerID="497aac02776c85edf20a77f2564267047ce95f64d7700adc20cc7cf5ed75cc4d" HandleID="k8s-pod-network.497aac02776c85edf20a77f2564267047ce95f64d7700adc20cc7cf5ed75cc4d" Workload="ci--4459.2.2--n--059e17308a-k8s-whisker--8f6bd7b4f--tth7c-eth0" Jan 23 01:07:39.276298 containerd[1689]: 2026-01-23 01:07:39.244 [INFO][4423] cni-plugin/k8s.go 418: Populated endpoint ContainerID="497aac02776c85edf20a77f2564267047ce95f64d7700adc20cc7cf5ed75cc4d" Namespace="calico-system" Pod="whisker-8f6bd7b4f-tth7c" WorkloadEndpoint="ci--4459.2.2--n--059e17308a-k8s-whisker--8f6bd7b4f--tth7c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--n--059e17308a-k8s-whisker--8f6bd7b4f--tth7c-eth0", GenerateName:"whisker-8f6bd7b4f-", Namespace:"calico-system", SelfLink:"", UID:"2cc70e7d-4b4a-4947-a27a-3aa84d2bff8f", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 7, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"8f6bd7b4f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-n-059e17308a", ContainerID:"", Pod:"whisker-8f6bd7b4f-tth7c", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.42.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.whisker"}, InterfaceName:"cali6dc8057c53d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:07:39.276298 containerd[1689]: 2026-01-23 01:07:39.244 [INFO][4423] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.42.129/32] ContainerID="497aac02776c85edf20a77f2564267047ce95f64d7700adc20cc7cf5ed75cc4d" Namespace="calico-system" Pod="whisker-8f6bd7b4f-tth7c" WorkloadEndpoint="ci--4459.2.2--n--059e17308a-k8s-whisker--8f6bd7b4f--tth7c-eth0" Jan 23 01:07:39.276387 containerd[1689]: 2026-01-23 01:07:39.244 [INFO][4423] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6dc8057c53d ContainerID="497aac02776c85edf20a77f2564267047ce95f64d7700adc20cc7cf5ed75cc4d" Namespace="calico-system" Pod="whisker-8f6bd7b4f-tth7c" WorkloadEndpoint="ci--4459.2.2--n--059e17308a-k8s-whisker--8f6bd7b4f--tth7c-eth0" Jan 23 01:07:39.276387 containerd[1689]: 2026-01-23 01:07:39.262 [INFO][4423] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="497aac02776c85edf20a77f2564267047ce95f64d7700adc20cc7cf5ed75cc4d" Namespace="calico-system" Pod="whisker-8f6bd7b4f-tth7c" WorkloadEndpoint="ci--4459.2.2--n--059e17308a-k8s-whisker--8f6bd7b4f--tth7c-eth0" Jan 23 01:07:39.276477 containerd[1689]: 2026-01-23 01:07:39.262 [INFO][4423] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="497aac02776c85edf20a77f2564267047ce95f64d7700adc20cc7cf5ed75cc4d" Namespace="calico-system" Pod="whisker-8f6bd7b4f-tth7c" WorkloadEndpoint="ci--4459.2.2--n--059e17308a-k8s-whisker--8f6bd7b4f--tth7c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--n--059e17308a-k8s-whisker--8f6bd7b4f--tth7c-eth0", GenerateName:"whisker-8f6bd7b4f-", Namespace:"calico-system", SelfLink:"", UID:"2cc70e7d-4b4a-4947-a27a-3aa84d2bff8f", 
ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 7, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"8f6bd7b4f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-n-059e17308a", ContainerID:"497aac02776c85edf20a77f2564267047ce95f64d7700adc20cc7cf5ed75cc4d", Pod:"whisker-8f6bd7b4f-tth7c", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.42.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali6dc8057c53d", MAC:"ce:cf:5d:7d:92:f6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:07:39.276554 containerd[1689]: 2026-01-23 01:07:39.273 [INFO][4423] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="497aac02776c85edf20a77f2564267047ce95f64d7700adc20cc7cf5ed75cc4d" Namespace="calico-system" Pod="whisker-8f6bd7b4f-tth7c" WorkloadEndpoint="ci--4459.2.2--n--059e17308a-k8s-whisker--8f6bd7b4f--tth7c-eth0" Jan 23 01:07:39.309552 containerd[1689]: time="2026-01-23T01:07:39.309519505Z" level=info msg="connecting to shim 497aac02776c85edf20a77f2564267047ce95f64d7700adc20cc7cf5ed75cc4d" address="unix:///run/containerd/s/e57c8658ebe304b2de2d4c5c46f7c73ae5026b4a564ac8d9e7bf1373e386db5f" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:07:39.332274 systemd[1]: Started cri-containerd-497aac02776c85edf20a77f2564267047ce95f64d7700adc20cc7cf5ed75cc4d.scope - libcontainer container 
497aac02776c85edf20a77f2564267047ce95f64d7700adc20cc7cf5ed75cc4d. Jan 23 01:07:39.366810 containerd[1689]: time="2026-01-23T01:07:39.366752362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8f6bd7b4f-tth7c,Uid:2cc70e7d-4b4a-4947-a27a-3aa84d2bff8f,Namespace:calico-system,Attempt:0,} returns sandbox id \"497aac02776c85edf20a77f2564267047ce95f64d7700adc20cc7cf5ed75cc4d\"" Jan 23 01:07:39.368515 containerd[1689]: time="2026-01-23T01:07:39.368409904Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 01:07:39.574892 kubelet[3167]: I0123 01:07:39.574858 3167 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="962f2b06-4328-49cd-8756-14448bb2c728" path="/var/lib/kubelet/pods/962f2b06-4328-49cd-8756-14448bb2c728/volumes" Jan 23 01:07:39.645380 containerd[1689]: time="2026-01-23T01:07:39.645296266Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:07:39.647983 containerd[1689]: time="2026-01-23T01:07:39.647943251Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 01:07:39.647983 containerd[1689]: time="2026-01-23T01:07:39.647968568Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 01:07:39.648120 kubelet[3167]: E0123 01:07:39.648094 3167 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 01:07:39.648173 kubelet[3167]: E0123 01:07:39.648155 3167 
kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 01:07:39.648248 kubelet[3167]: E0123 01:07:39.648230 3167 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-8f6bd7b4f-tth7c_calico-system(2cc70e7d-4b4a-4947-a27a-3aa84d2bff8f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 01:07:39.649006 containerd[1689]: time="2026-01-23T01:07:39.648977355Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 01:07:39.884598 containerd[1689]: time="2026-01-23T01:07:39.884546454Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:07:39.887219 containerd[1689]: time="2026-01-23T01:07:39.887177669Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 01:07:39.887290 containerd[1689]: time="2026-01-23T01:07:39.887265460Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 01:07:39.887508 kubelet[3167]: E0123 01:07:39.887445 3167 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 01:07:39.887567 kubelet[3167]: E0123 01:07:39.887519 3167 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 01:07:39.887665 kubelet[3167]: E0123 01:07:39.887624 3167 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-8f6bd7b4f-tth7c_calico-system(2cc70e7d-4b4a-4947-a27a-3aa84d2bff8f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 01:07:39.887907 kubelet[3167]: E0123 01:07:39.887687 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" 
pod="calico-system/whisker-8f6bd7b4f-tth7c" podUID="2cc70e7d-4b4a-4947-a27a-3aa84d2bff8f" Jan 23 01:07:40.593308 systemd-networkd[1477]: cali6dc8057c53d: Gained IPv6LL Jan 23 01:07:40.714593 kubelet[3167]: E0123 01:07:40.714532 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8f6bd7b4f-tth7c" podUID="2cc70e7d-4b4a-4947-a27a-3aa84d2bff8f" Jan 23 01:07:44.577798 containerd[1689]: time="2026-01-23T01:07:44.577737859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5db5b8969f-b7ffs,Uid:464a8745-942a-406e-a6f7-99a7e252e57c,Namespace:calico-system,Attempt:0,}" Jan 23 01:07:44.653400 systemd-networkd[1477]: calicbbb5cd0176: Link UP Jan 23 01:07:44.653924 systemd-networkd[1477]: calicbbb5cd0176: Gained carrier Jan 23 01:07:44.666495 containerd[1689]: 2026-01-23 01:07:44.599 [INFO][4609] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 01:07:44.666495 containerd[1689]: 2026-01-23 01:07:44.606 [INFO][4609] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4459.2.2--n--059e17308a-k8s-calico--kube--controllers--5db5b8969f--b7ffs-eth0 calico-kube-controllers-5db5b8969f- calico-system 464a8745-942a-406e-a6f7-99a7e252e57c 823 0 2026-01-23 01:07:20 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5db5b8969f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4459.2.2-n-059e17308a calico-kube-controllers-5db5b8969f-b7ffs eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calicbbb5cd0176 [] [] }} ContainerID="98c1913e7c4a68f9c0795542b8023a6a7216185f75a5526ef28a1da4d8424bb2" Namespace="calico-system" Pod="calico-kube-controllers-5db5b8969f-b7ffs" WorkloadEndpoint="ci--4459.2.2--n--059e17308a-k8s-calico--kube--controllers--5db5b8969f--b7ffs-" Jan 23 01:07:44.666495 containerd[1689]: 2026-01-23 01:07:44.606 [INFO][4609] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="98c1913e7c4a68f9c0795542b8023a6a7216185f75a5526ef28a1da4d8424bb2" Namespace="calico-system" Pod="calico-kube-controllers-5db5b8969f-b7ffs" WorkloadEndpoint="ci--4459.2.2--n--059e17308a-k8s-calico--kube--controllers--5db5b8969f--b7ffs-eth0" Jan 23 01:07:44.666495 containerd[1689]: 2026-01-23 01:07:44.624 [INFO][4622] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="98c1913e7c4a68f9c0795542b8023a6a7216185f75a5526ef28a1da4d8424bb2" HandleID="k8s-pod-network.98c1913e7c4a68f9c0795542b8023a6a7216185f75a5526ef28a1da4d8424bb2" Workload="ci--4459.2.2--n--059e17308a-k8s-calico--kube--controllers--5db5b8969f--b7ffs-eth0" Jan 23 01:07:44.666732 containerd[1689]: 2026-01-23 01:07:44.624 [INFO][4622] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="98c1913e7c4a68f9c0795542b8023a6a7216185f75a5526ef28a1da4d8424bb2" 
HandleID="k8s-pod-network.98c1913e7c4a68f9c0795542b8023a6a7216185f75a5526ef28a1da4d8424bb2" Workload="ci--4459.2.2--n--059e17308a-k8s-calico--kube--controllers--5db5b8969f--b7ffs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f1c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.2-n-059e17308a", "pod":"calico-kube-controllers-5db5b8969f-b7ffs", "timestamp":"2026-01-23 01:07:44.624577777 +0000 UTC"}, Hostname:"ci-4459.2.2-n-059e17308a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 01:07:44.666732 containerd[1689]: 2026-01-23 01:07:44.624 [INFO][4622] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 01:07:44.666732 containerd[1689]: 2026-01-23 01:07:44.624 [INFO][4622] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 01:07:44.666732 containerd[1689]: 2026-01-23 01:07:44.624 [INFO][4622] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-n-059e17308a' Jan 23 01:07:44.666732 containerd[1689]: 2026-01-23 01:07:44.629 [INFO][4622] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.98c1913e7c4a68f9c0795542b8023a6a7216185f75a5526ef28a1da4d8424bb2" host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:44.666732 containerd[1689]: 2026-01-23 01:07:44.631 [INFO][4622] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:44.666732 containerd[1689]: 2026-01-23 01:07:44.634 [INFO][4622] ipam/ipam.go 511: Trying affinity for 192.168.42.128/26 host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:44.666732 containerd[1689]: 2026-01-23 01:07:44.635 [INFO][4622] ipam/ipam.go 158: Attempting to load block cidr=192.168.42.128/26 host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:44.666732 containerd[1689]: 2026-01-23 01:07:44.636 [INFO][4622] ipam/ipam.go 235: 
Affinity is confirmed and block has been loaded cidr=192.168.42.128/26 host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:44.666982 containerd[1689]: 2026-01-23 01:07:44.636 [INFO][4622] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.42.128/26 handle="k8s-pod-network.98c1913e7c4a68f9c0795542b8023a6a7216185f75a5526ef28a1da4d8424bb2" host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:44.666982 containerd[1689]: 2026-01-23 01:07:44.637 [INFO][4622] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.98c1913e7c4a68f9c0795542b8023a6a7216185f75a5526ef28a1da4d8424bb2 Jan 23 01:07:44.666982 containerd[1689]: 2026-01-23 01:07:44.642 [INFO][4622] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.42.128/26 handle="k8s-pod-network.98c1913e7c4a68f9c0795542b8023a6a7216185f75a5526ef28a1da4d8424bb2" host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:44.666982 containerd[1689]: 2026-01-23 01:07:44.649 [INFO][4622] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.42.130/26] block=192.168.42.128/26 handle="k8s-pod-network.98c1913e7c4a68f9c0795542b8023a6a7216185f75a5526ef28a1da4d8424bb2" host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:44.666982 containerd[1689]: 2026-01-23 01:07:44.650 [INFO][4622] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.42.130/26] handle="k8s-pod-network.98c1913e7c4a68f9c0795542b8023a6a7216185f75a5526ef28a1da4d8424bb2" host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:44.666982 containerd[1689]: 2026-01-23 01:07:44.650 [INFO][4622] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 01:07:44.666982 containerd[1689]: 2026-01-23 01:07:44.650 [INFO][4622] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.42.130/26] IPv6=[] ContainerID="98c1913e7c4a68f9c0795542b8023a6a7216185f75a5526ef28a1da4d8424bb2" HandleID="k8s-pod-network.98c1913e7c4a68f9c0795542b8023a6a7216185f75a5526ef28a1da4d8424bb2" Workload="ci--4459.2.2--n--059e17308a-k8s-calico--kube--controllers--5db5b8969f--b7ffs-eth0" Jan 23 01:07:44.667185 containerd[1689]: 2026-01-23 01:07:44.651 [INFO][4609] cni-plugin/k8s.go 418: Populated endpoint ContainerID="98c1913e7c4a68f9c0795542b8023a6a7216185f75a5526ef28a1da4d8424bb2" Namespace="calico-system" Pod="calico-kube-controllers-5db5b8969f-b7ffs" WorkloadEndpoint="ci--4459.2.2--n--059e17308a-k8s-calico--kube--controllers--5db5b8969f--b7ffs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--n--059e17308a-k8s-calico--kube--controllers--5db5b8969f--b7ffs-eth0", GenerateName:"calico-kube-controllers-5db5b8969f-", Namespace:"calico-system", SelfLink:"", UID:"464a8745-942a-406e-a6f7-99a7e252e57c", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 7, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5db5b8969f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-n-059e17308a", ContainerID:"", Pod:"calico-kube-controllers-5db5b8969f-b7ffs", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.42.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicbbb5cd0176", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:07:44.667327 containerd[1689]: 2026-01-23 01:07:44.651 [INFO][4609] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.42.130/32] ContainerID="98c1913e7c4a68f9c0795542b8023a6a7216185f75a5526ef28a1da4d8424bb2" Namespace="calico-system" Pod="calico-kube-controllers-5db5b8969f-b7ffs" WorkloadEndpoint="ci--4459.2.2--n--059e17308a-k8s-calico--kube--controllers--5db5b8969f--b7ffs-eth0" Jan 23 01:07:44.667327 containerd[1689]: 2026-01-23 01:07:44.651 [INFO][4609] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicbbb5cd0176 ContainerID="98c1913e7c4a68f9c0795542b8023a6a7216185f75a5526ef28a1da4d8424bb2" Namespace="calico-system" Pod="calico-kube-controllers-5db5b8969f-b7ffs" WorkloadEndpoint="ci--4459.2.2--n--059e17308a-k8s-calico--kube--controllers--5db5b8969f--b7ffs-eth0" Jan 23 01:07:44.667327 containerd[1689]: 2026-01-23 01:07:44.654 [INFO][4609] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="98c1913e7c4a68f9c0795542b8023a6a7216185f75a5526ef28a1da4d8424bb2" Namespace="calico-system" Pod="calico-kube-controllers-5db5b8969f-b7ffs" WorkloadEndpoint="ci--4459.2.2--n--059e17308a-k8s-calico--kube--controllers--5db5b8969f--b7ffs-eth0" Jan 23 01:07:44.667427 containerd[1689]: 2026-01-23 01:07:44.654 [INFO][4609] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="98c1913e7c4a68f9c0795542b8023a6a7216185f75a5526ef28a1da4d8424bb2" Namespace="calico-system" Pod="calico-kube-controllers-5db5b8969f-b7ffs" 
WorkloadEndpoint="ci--4459.2.2--n--059e17308a-k8s-calico--kube--controllers--5db5b8969f--b7ffs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--n--059e17308a-k8s-calico--kube--controllers--5db5b8969f--b7ffs-eth0", GenerateName:"calico-kube-controllers-5db5b8969f-", Namespace:"calico-system", SelfLink:"", UID:"464a8745-942a-406e-a6f7-99a7e252e57c", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 7, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5db5b8969f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-n-059e17308a", ContainerID:"98c1913e7c4a68f9c0795542b8023a6a7216185f75a5526ef28a1da4d8424bb2", Pod:"calico-kube-controllers-5db5b8969f-b7ffs", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.42.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicbbb5cd0176", MAC:"62:3d:96:c3:9d:f0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:07:44.667502 containerd[1689]: 2026-01-23 01:07:44.663 [INFO][4609] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="98c1913e7c4a68f9c0795542b8023a6a7216185f75a5526ef28a1da4d8424bb2" Namespace="calico-system" 
Pod="calico-kube-controllers-5db5b8969f-b7ffs" WorkloadEndpoint="ci--4459.2.2--n--059e17308a-k8s-calico--kube--controllers--5db5b8969f--b7ffs-eth0" Jan 23 01:07:44.701706 containerd[1689]: time="2026-01-23T01:07:44.701640477Z" level=info msg="connecting to shim 98c1913e7c4a68f9c0795542b8023a6a7216185f75a5526ef28a1da4d8424bb2" address="unix:///run/containerd/s/80303f817a3e9b1098c8e6ad2a5a33d9049e24d6c56091dbff97183389bf5421" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:07:44.725254 systemd[1]: Started cri-containerd-98c1913e7c4a68f9c0795542b8023a6a7216185f75a5526ef28a1da4d8424bb2.scope - libcontainer container 98c1913e7c4a68f9c0795542b8023a6a7216185f75a5526ef28a1da4d8424bb2. Jan 23 01:07:44.762540 containerd[1689]: time="2026-01-23T01:07:44.762521261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5db5b8969f-b7ffs,Uid:464a8745-942a-406e-a6f7-99a7e252e57c,Namespace:calico-system,Attempt:0,} returns sandbox id \"98c1913e7c4a68f9c0795542b8023a6a7216185f75a5526ef28a1da4d8424bb2\"" Jan 23 01:07:44.764029 containerd[1689]: time="2026-01-23T01:07:44.763954390Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 01:07:45.003280 containerd[1689]: time="2026-01-23T01:07:45.003242780Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:07:45.005691 containerd[1689]: time="2026-01-23T01:07:45.005664417Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 01:07:45.005732 containerd[1689]: time="2026-01-23T01:07:45.005672923Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 01:07:45.005874 
kubelet[3167]: E0123 01:07:45.005840 3167 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 01:07:45.006161 kubelet[3167]: E0123 01:07:45.005881 3167 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 01:07:45.006161 kubelet[3167]: E0123 01:07:45.005956 3167 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-5db5b8969f-b7ffs_calico-system(464a8745-942a-406e-a6f7-99a7e252e57c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 01:07:45.006161 kubelet[3167]: E0123 01:07:45.005988 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5db5b8969f-b7ffs" podUID="464a8745-942a-406e-a6f7-99a7e252e57c" Jan 23 
01:07:45.293446 kubelet[3167]: I0123 01:07:45.293150 3167 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 01:07:45.581382 containerd[1689]: time="2026-01-23T01:07:45.581296970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-tdsjl,Uid:c928b9b3-da34-4326-8f7b-130857d457b5,Namespace:calico-system,Attempt:0,}" Jan 23 01:07:45.584535 containerd[1689]: time="2026-01-23T01:07:45.584463330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-gl4bv,Uid:d445968f-7574-4f43-9e96-ac6fb7bf12f4,Namespace:kube-system,Attempt:0,}" Jan 23 01:07:45.725917 kubelet[3167]: E0123 01:07:45.725528 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5db5b8969f-b7ffs" podUID="464a8745-942a-406e-a6f7-99a7e252e57c" Jan 23 01:07:45.759192 systemd-networkd[1477]: cali78dd343cbcc: Link UP Jan 23 01:07:45.760950 systemd-networkd[1477]: cali78dd343cbcc: Gained carrier Jan 23 01:07:45.776330 containerd[1689]: 2026-01-23 01:07:45.649 [INFO][4736] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--n--059e17308a-k8s-goldmane--7c778bb748--tdsjl-eth0 goldmane-7c778bb748- calico-system c928b9b3-da34-4326-8f7b-130857d457b5 826 0 2026-01-23 01:07:18 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] 
map[] [] [] []} {k8s ci-4459.2.2-n-059e17308a goldmane-7c778bb748-tdsjl eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali78dd343cbcc [] [] }} ContainerID="8e2584b9587f1efe421553f47b9813a8809ed15737de2a3f0387e0bd50e739d3" Namespace="calico-system" Pod="goldmane-7c778bb748-tdsjl" WorkloadEndpoint="ci--4459.2.2--n--059e17308a-k8s-goldmane--7c778bb748--tdsjl-" Jan 23 01:07:45.776330 containerd[1689]: 2026-01-23 01:07:45.649 [INFO][4736] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8e2584b9587f1efe421553f47b9813a8809ed15737de2a3f0387e0bd50e739d3" Namespace="calico-system" Pod="goldmane-7c778bb748-tdsjl" WorkloadEndpoint="ci--4459.2.2--n--059e17308a-k8s-goldmane--7c778bb748--tdsjl-eth0" Jan 23 01:07:45.776330 containerd[1689]: 2026-01-23 01:07:45.700 [INFO][4758] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8e2584b9587f1efe421553f47b9813a8809ed15737de2a3f0387e0bd50e739d3" HandleID="k8s-pod-network.8e2584b9587f1efe421553f47b9813a8809ed15737de2a3f0387e0bd50e739d3" Workload="ci--4459.2.2--n--059e17308a-k8s-goldmane--7c778bb748--tdsjl-eth0" Jan 23 01:07:45.776495 containerd[1689]: 2026-01-23 01:07:45.700 [INFO][4758] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8e2584b9587f1efe421553f47b9813a8809ed15737de2a3f0387e0bd50e739d3" HandleID="k8s-pod-network.8e2584b9587f1efe421553f47b9813a8809ed15737de2a3f0387e0bd50e739d3" Workload="ci--4459.2.2--n--059e17308a-k8s-goldmane--7c778bb748--tdsjl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f0d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.2-n-059e17308a", "pod":"goldmane-7c778bb748-tdsjl", "timestamp":"2026-01-23 01:07:45.700499355 +0000 UTC"}, Hostname:"ci-4459.2.2-n-059e17308a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} 
Jan 23 01:07:45.776495 containerd[1689]: 2026-01-23 01:07:45.700 [INFO][4758] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 01:07:45.776495 containerd[1689]: 2026-01-23 01:07:45.700 [INFO][4758] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 01:07:45.776495 containerd[1689]: 2026-01-23 01:07:45.700 [INFO][4758] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-n-059e17308a' Jan 23 01:07:45.776495 containerd[1689]: 2026-01-23 01:07:45.708 [INFO][4758] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8e2584b9587f1efe421553f47b9813a8809ed15737de2a3f0387e0bd50e739d3" host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:45.776495 containerd[1689]: 2026-01-23 01:07:45.711 [INFO][4758] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:45.776495 containerd[1689]: 2026-01-23 01:07:45.715 [INFO][4758] ipam/ipam.go 511: Trying affinity for 192.168.42.128/26 host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:45.776495 containerd[1689]: 2026-01-23 01:07:45.717 [INFO][4758] ipam/ipam.go 158: Attempting to load block cidr=192.168.42.128/26 host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:45.776495 containerd[1689]: 2026-01-23 01:07:45.718 [INFO][4758] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.42.128/26 host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:45.776686 containerd[1689]: 2026-01-23 01:07:45.718 [INFO][4758] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.42.128/26 handle="k8s-pod-network.8e2584b9587f1efe421553f47b9813a8809ed15737de2a3f0387e0bd50e739d3" host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:45.776686 containerd[1689]: 2026-01-23 01:07:45.721 [INFO][4758] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8e2584b9587f1efe421553f47b9813a8809ed15737de2a3f0387e0bd50e739d3 Jan 23 01:07:45.776686 containerd[1689]: 2026-01-23 01:07:45.731 [INFO][4758] ipam/ipam.go 1246: Writing 
block in order to claim IPs block=192.168.42.128/26 handle="k8s-pod-network.8e2584b9587f1efe421553f47b9813a8809ed15737de2a3f0387e0bd50e739d3" host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:45.776686 containerd[1689]: 2026-01-23 01:07:45.748 [INFO][4758] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.42.131/26] block=192.168.42.128/26 handle="k8s-pod-network.8e2584b9587f1efe421553f47b9813a8809ed15737de2a3f0387e0bd50e739d3" host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:45.776686 containerd[1689]: 2026-01-23 01:07:45.748 [INFO][4758] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.42.131/26] handle="k8s-pod-network.8e2584b9587f1efe421553f47b9813a8809ed15737de2a3f0387e0bd50e739d3" host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:45.776686 containerd[1689]: 2026-01-23 01:07:45.748 [INFO][4758] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 01:07:45.776686 containerd[1689]: 2026-01-23 01:07:45.748 [INFO][4758] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.42.131/26] IPv6=[] ContainerID="8e2584b9587f1efe421553f47b9813a8809ed15737de2a3f0387e0bd50e739d3" HandleID="k8s-pod-network.8e2584b9587f1efe421553f47b9813a8809ed15737de2a3f0387e0bd50e739d3" Workload="ci--4459.2.2--n--059e17308a-k8s-goldmane--7c778bb748--tdsjl-eth0" Jan 23 01:07:45.777337 containerd[1689]: 2026-01-23 01:07:45.750 [INFO][4736] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8e2584b9587f1efe421553f47b9813a8809ed15737de2a3f0387e0bd50e739d3" Namespace="calico-system" Pod="goldmane-7c778bb748-tdsjl" WorkloadEndpoint="ci--4459.2.2--n--059e17308a-k8s-goldmane--7c778bb748--tdsjl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--n--059e17308a-k8s-goldmane--7c778bb748--tdsjl-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"c928b9b3-da34-4326-8f7b-130857d457b5", ResourceVersion:"826", Generation:0, 
CreationTimestamp:time.Date(2026, time.January, 23, 1, 7, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-n-059e17308a", ContainerID:"", Pod:"goldmane-7c778bb748-tdsjl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.42.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali78dd343cbcc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:07:45.777402 containerd[1689]: 2026-01-23 01:07:45.750 [INFO][4736] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.42.131/32] ContainerID="8e2584b9587f1efe421553f47b9813a8809ed15737de2a3f0387e0bd50e739d3" Namespace="calico-system" Pod="goldmane-7c778bb748-tdsjl" WorkloadEndpoint="ci--4459.2.2--n--059e17308a-k8s-goldmane--7c778bb748--tdsjl-eth0" Jan 23 01:07:45.777402 containerd[1689]: 2026-01-23 01:07:45.750 [INFO][4736] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali78dd343cbcc ContainerID="8e2584b9587f1efe421553f47b9813a8809ed15737de2a3f0387e0bd50e739d3" Namespace="calico-system" Pod="goldmane-7c778bb748-tdsjl" WorkloadEndpoint="ci--4459.2.2--n--059e17308a-k8s-goldmane--7c778bb748--tdsjl-eth0" Jan 23 01:07:45.777402 containerd[1689]: 2026-01-23 01:07:45.759 [INFO][4736] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="8e2584b9587f1efe421553f47b9813a8809ed15737de2a3f0387e0bd50e739d3" Namespace="calico-system" Pod="goldmane-7c778bb748-tdsjl" WorkloadEndpoint="ci--4459.2.2--n--059e17308a-k8s-goldmane--7c778bb748--tdsjl-eth0" Jan 23 01:07:45.777444 containerd[1689]: 2026-01-23 01:07:45.759 [INFO][4736] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8e2584b9587f1efe421553f47b9813a8809ed15737de2a3f0387e0bd50e739d3" Namespace="calico-system" Pod="goldmane-7c778bb748-tdsjl" WorkloadEndpoint="ci--4459.2.2--n--059e17308a-k8s-goldmane--7c778bb748--tdsjl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--n--059e17308a-k8s-goldmane--7c778bb748--tdsjl-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"c928b9b3-da34-4326-8f7b-130857d457b5", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 7, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-n-059e17308a", ContainerID:"8e2584b9587f1efe421553f47b9813a8809ed15737de2a3f0387e0bd50e739d3", Pod:"goldmane-7c778bb748-tdsjl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.42.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali78dd343cbcc", MAC:"1e:d9:14:09:f5:e4", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:07:45.777682 containerd[1689]: 2026-01-23 01:07:45.773 [INFO][4736] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8e2584b9587f1efe421553f47b9813a8809ed15737de2a3f0387e0bd50e739d3" Namespace="calico-system" Pod="goldmane-7c778bb748-tdsjl" WorkloadEndpoint="ci--4459.2.2--n--059e17308a-k8s-goldmane--7c778bb748--tdsjl-eth0" Jan 23 01:07:45.825535 containerd[1689]: time="2026-01-23T01:07:45.825495260Z" level=info msg="connecting to shim 8e2584b9587f1efe421553f47b9813a8809ed15737de2a3f0387e0bd50e739d3" address="unix:///run/containerd/s/77a800a05a94f9d97012744633d143c4272ef8c181272b70c66455fcd5920f5c" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:07:45.871309 systemd[1]: Started cri-containerd-8e2584b9587f1efe421553f47b9813a8809ed15737de2a3f0387e0bd50e739d3.scope - libcontainer container 8e2584b9587f1efe421553f47b9813a8809ed15737de2a3f0387e0bd50e739d3. 
Jan 23 01:07:45.872291 systemd-networkd[1477]: calic70f5558dbf: Link UP Jan 23 01:07:45.874120 systemd-networkd[1477]: calic70f5558dbf: Gained carrier Jan 23 01:07:45.889475 containerd[1689]: 2026-01-23 01:07:45.650 [INFO][4740] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--n--059e17308a-k8s-coredns--66bc5c9577--gl4bv-eth0 coredns-66bc5c9577- kube-system d445968f-7574-4f43-9e96-ac6fb7bf12f4 821 0 2026-01-23 01:07:08 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.2.2-n-059e17308a coredns-66bc5c9577-gl4bv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic70f5558dbf [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="b26bccf246699a3d09a6f4c13dcdab93c473e39696ae1624228d2828f4de1dcb" Namespace="kube-system" Pod="coredns-66bc5c9577-gl4bv" WorkloadEndpoint="ci--4459.2.2--n--059e17308a-k8s-coredns--66bc5c9577--gl4bv-" Jan 23 01:07:45.889475 containerd[1689]: 2026-01-23 01:07:45.650 [INFO][4740] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b26bccf246699a3d09a6f4c13dcdab93c473e39696ae1624228d2828f4de1dcb" Namespace="kube-system" Pod="coredns-66bc5c9577-gl4bv" WorkloadEndpoint="ci--4459.2.2--n--059e17308a-k8s-coredns--66bc5c9577--gl4bv-eth0" Jan 23 01:07:45.889475 containerd[1689]: 2026-01-23 01:07:45.703 [INFO][4760] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b26bccf246699a3d09a6f4c13dcdab93c473e39696ae1624228d2828f4de1dcb" HandleID="k8s-pod-network.b26bccf246699a3d09a6f4c13dcdab93c473e39696ae1624228d2828f4de1dcb" Workload="ci--4459.2.2--n--059e17308a-k8s-coredns--66bc5c9577--gl4bv-eth0" Jan 23 01:07:45.889626 containerd[1689]: 2026-01-23 01:07:45.703 [INFO][4760] ipam/ipam_plugin.go 
275: Auto assigning IP ContainerID="b26bccf246699a3d09a6f4c13dcdab93c473e39696ae1624228d2828f4de1dcb" HandleID="k8s-pod-network.b26bccf246699a3d09a6f4c13dcdab93c473e39696ae1624228d2828f4de1dcb" Workload="ci--4459.2.2--n--059e17308a-k8s-coredns--66bc5c9577--gl4bv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024efe0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.2.2-n-059e17308a", "pod":"coredns-66bc5c9577-gl4bv", "timestamp":"2026-01-23 01:07:45.703702342 +0000 UTC"}, Hostname:"ci-4459.2.2-n-059e17308a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 01:07:45.889626 containerd[1689]: 2026-01-23 01:07:45.703 [INFO][4760] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 01:07:45.889626 containerd[1689]: 2026-01-23 01:07:45.748 [INFO][4760] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 01:07:45.889626 containerd[1689]: 2026-01-23 01:07:45.748 [INFO][4760] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-n-059e17308a' Jan 23 01:07:45.889626 containerd[1689]: 2026-01-23 01:07:45.809 [INFO][4760] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b26bccf246699a3d09a6f4c13dcdab93c473e39696ae1624228d2828f4de1dcb" host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:45.889626 containerd[1689]: 2026-01-23 01:07:45.817 [INFO][4760] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:45.889626 containerd[1689]: 2026-01-23 01:07:45.823 [INFO][4760] ipam/ipam.go 511: Trying affinity for 192.168.42.128/26 host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:45.889626 containerd[1689]: 2026-01-23 01:07:45.826 [INFO][4760] ipam/ipam.go 158: Attempting to load block cidr=192.168.42.128/26 host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:45.889626 containerd[1689]: 2026-01-23 01:07:45.828 [INFO][4760] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.42.128/26 host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:45.889827 containerd[1689]: 2026-01-23 01:07:45.828 [INFO][4760] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.42.128/26 handle="k8s-pod-network.b26bccf246699a3d09a6f4c13dcdab93c473e39696ae1624228d2828f4de1dcb" host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:45.889827 containerd[1689]: 2026-01-23 01:07:45.830 [INFO][4760] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b26bccf246699a3d09a6f4c13dcdab93c473e39696ae1624228d2828f4de1dcb Jan 23 01:07:45.889827 containerd[1689]: 2026-01-23 01:07:45.842 [INFO][4760] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.42.128/26 handle="k8s-pod-network.b26bccf246699a3d09a6f4c13dcdab93c473e39696ae1624228d2828f4de1dcb" host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:45.889827 containerd[1689]: 2026-01-23 01:07:45.855 [INFO][4760] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.42.132/26] block=192.168.42.128/26 handle="k8s-pod-network.b26bccf246699a3d09a6f4c13dcdab93c473e39696ae1624228d2828f4de1dcb" host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:45.889827 containerd[1689]: 2026-01-23 01:07:45.855 [INFO][4760] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.42.132/26] handle="k8s-pod-network.b26bccf246699a3d09a6f4c13dcdab93c473e39696ae1624228d2828f4de1dcb" host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:45.889827 containerd[1689]: 2026-01-23 01:07:45.856 [INFO][4760] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 01:07:45.889827 containerd[1689]: 2026-01-23 01:07:45.856 [INFO][4760] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.42.132/26] IPv6=[] ContainerID="b26bccf246699a3d09a6f4c13dcdab93c473e39696ae1624228d2828f4de1dcb" HandleID="k8s-pod-network.b26bccf246699a3d09a6f4c13dcdab93c473e39696ae1624228d2828f4de1dcb" Workload="ci--4459.2.2--n--059e17308a-k8s-coredns--66bc5c9577--gl4bv-eth0" Jan 23 01:07:45.889967 containerd[1689]: 2026-01-23 01:07:45.860 [INFO][4740] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b26bccf246699a3d09a6f4c13dcdab93c473e39696ae1624228d2828f4de1dcb" Namespace="kube-system" Pod="coredns-66bc5c9577-gl4bv" WorkloadEndpoint="ci--4459.2.2--n--059e17308a-k8s-coredns--66bc5c9577--gl4bv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--n--059e17308a-k8s-coredns--66bc5c9577--gl4bv-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"d445968f-7574-4f43-9e96-ac6fb7bf12f4", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 7, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-n-059e17308a", ContainerID:"", Pod:"coredns-66bc5c9577-gl4bv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.42.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic70f5558dbf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:07:45.889967 containerd[1689]: 2026-01-23 01:07:45.861 [INFO][4740] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.42.132/32] ContainerID="b26bccf246699a3d09a6f4c13dcdab93c473e39696ae1624228d2828f4de1dcb" Namespace="kube-system" Pod="coredns-66bc5c9577-gl4bv" WorkloadEndpoint="ci--4459.2.2--n--059e17308a-k8s-coredns--66bc5c9577--gl4bv-eth0" Jan 23 01:07:45.889967 containerd[1689]: 2026-01-23 01:07:45.861 [INFO][4740] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic70f5558dbf 
ContainerID="b26bccf246699a3d09a6f4c13dcdab93c473e39696ae1624228d2828f4de1dcb" Namespace="kube-system" Pod="coredns-66bc5c9577-gl4bv" WorkloadEndpoint="ci--4459.2.2--n--059e17308a-k8s-coredns--66bc5c9577--gl4bv-eth0" Jan 23 01:07:45.889967 containerd[1689]: 2026-01-23 01:07:45.876 [INFO][4740] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b26bccf246699a3d09a6f4c13dcdab93c473e39696ae1624228d2828f4de1dcb" Namespace="kube-system" Pod="coredns-66bc5c9577-gl4bv" WorkloadEndpoint="ci--4459.2.2--n--059e17308a-k8s-coredns--66bc5c9577--gl4bv-eth0" Jan 23 01:07:45.889967 containerd[1689]: 2026-01-23 01:07:45.876 [INFO][4740] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b26bccf246699a3d09a6f4c13dcdab93c473e39696ae1624228d2828f4de1dcb" Namespace="kube-system" Pod="coredns-66bc5c9577-gl4bv" WorkloadEndpoint="ci--4459.2.2--n--059e17308a-k8s-coredns--66bc5c9577--gl4bv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--n--059e17308a-k8s-coredns--66bc5c9577--gl4bv-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"d445968f-7574-4f43-9e96-ac6fb7bf12f4", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 7, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-n-059e17308a", ContainerID:"b26bccf246699a3d09a6f4c13dcdab93c473e39696ae1624228d2828f4de1dcb", 
Pod:"coredns-66bc5c9577-gl4bv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.42.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic70f5558dbf", MAC:"96:98:16:1f:ea:3b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:07:45.891238 containerd[1689]: 2026-01-23 01:07:45.888 [INFO][4740] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b26bccf246699a3d09a6f4c13dcdab93c473e39696ae1624228d2828f4de1dcb" Namespace="kube-system" Pod="coredns-66bc5c9577-gl4bv" WorkloadEndpoint="ci--4459.2.2--n--059e17308a-k8s-coredns--66bc5c9577--gl4bv-eth0" Jan 23 01:07:45.947444 systemd-networkd[1477]: vxlan.calico: Link UP Jan 23 01:07:45.948359 systemd-networkd[1477]: vxlan.calico: Gained carrier Jan 23 01:07:45.956244 containerd[1689]: time="2026-01-23T01:07:45.956212790Z" level=info msg="connecting to shim b26bccf246699a3d09a6f4c13dcdab93c473e39696ae1624228d2828f4de1dcb" address="unix:///run/containerd/s/c92e497ee4b4797b79464cbfdf4f5ce5ea9313cf70b6a535009f2e5e9474fa96" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:07:45.962587 
containerd[1689]: time="2026-01-23T01:07:45.962556846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-tdsjl,Uid:c928b9b3-da34-4326-8f7b-130857d457b5,Namespace:calico-system,Attempt:0,} returns sandbox id \"8e2584b9587f1efe421553f47b9813a8809ed15737de2a3f0387e0bd50e739d3\"" Jan 23 01:07:45.968955 containerd[1689]: time="2026-01-23T01:07:45.968917821Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 01:07:46.007391 systemd[1]: Started cri-containerd-b26bccf246699a3d09a6f4c13dcdab93c473e39696ae1624228d2828f4de1dcb.scope - libcontainer container b26bccf246699a3d09a6f4c13dcdab93c473e39696ae1624228d2828f4de1dcb. Jan 23 01:07:46.072870 containerd[1689]: time="2026-01-23T01:07:46.072836133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-gl4bv,Uid:d445968f-7574-4f43-9e96-ac6fb7bf12f4,Namespace:kube-system,Attempt:0,} returns sandbox id \"b26bccf246699a3d09a6f4c13dcdab93c473e39696ae1624228d2828f4de1dcb\"" Jan 23 01:07:46.079864 containerd[1689]: time="2026-01-23T01:07:46.079841825Z" level=info msg="CreateContainer within sandbox \"b26bccf246699a3d09a6f4c13dcdab93c473e39696ae1624228d2828f4de1dcb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 01:07:46.097515 containerd[1689]: time="2026-01-23T01:07:46.097492095Z" level=info msg="Container 9f9f9673c4ac446a377d86bd1f626b72bfeaf00b5cdf58192d770ef493b9dd84: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:07:46.117140 containerd[1689]: time="2026-01-23T01:07:46.117098694Z" level=info msg="CreateContainer within sandbox \"b26bccf246699a3d09a6f4c13dcdab93c473e39696ae1624228d2828f4de1dcb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9f9f9673c4ac446a377d86bd1f626b72bfeaf00b5cdf58192d770ef493b9dd84\"" Jan 23 01:07:46.117880 containerd[1689]: time="2026-01-23T01:07:46.117840707Z" level=info msg="StartContainer for \"9f9f9673c4ac446a377d86bd1f626b72bfeaf00b5cdf58192d770ef493b9dd84\"" Jan 23 
01:07:46.119292 containerd[1689]: time="2026-01-23T01:07:46.119264909Z" level=info msg="connecting to shim 9f9f9673c4ac446a377d86bd1f626b72bfeaf00b5cdf58192d770ef493b9dd84" address="unix:///run/containerd/s/c92e497ee4b4797b79464cbfdf4f5ce5ea9313cf70b6a535009f2e5e9474fa96" protocol=ttrpc version=3 Jan 23 01:07:46.137251 systemd[1]: Started cri-containerd-9f9f9673c4ac446a377d86bd1f626b72bfeaf00b5cdf58192d770ef493b9dd84.scope - libcontainer container 9f9f9673c4ac446a377d86bd1f626b72bfeaf00b5cdf58192d770ef493b9dd84. Jan 23 01:07:46.169865 containerd[1689]: time="2026-01-23T01:07:46.169464246Z" level=info msg="StartContainer for \"9f9f9673c4ac446a377d86bd1f626b72bfeaf00b5cdf58192d770ef493b9dd84\" returns successfully" Jan 23 01:07:46.228222 containerd[1689]: time="2026-01-23T01:07:46.228204104Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:07:46.230514 containerd[1689]: time="2026-01-23T01:07:46.230495623Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 01:07:46.230584 containerd[1689]: time="2026-01-23T01:07:46.230567557Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 01:07:46.230722 kubelet[3167]: E0123 01:07:46.230686 3167 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 01:07:46.230956 kubelet[3167]: E0123 01:07:46.230924 3167 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code 
= NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 01:07:46.231028 kubelet[3167]: E0123 01:07:46.231004 3167 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-tdsjl_calico-system(c928b9b3-da34-4326-8f7b-130857d457b5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 01:07:46.231055 kubelet[3167]: E0123 01:07:46.231038 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tdsjl" podUID="c928b9b3-da34-4326-8f7b-130857d457b5" Jan 23 01:07:46.417243 systemd-networkd[1477]: calicbbb5cd0176: Gained IPv6LL Jan 23 01:07:46.578055 containerd[1689]: time="2026-01-23T01:07:46.578008453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cb5db5c6d-685h6,Uid:9a0795c2-7ecf-4504-8071-c68e46a2784c,Namespace:calico-apiserver,Attempt:0,}" Jan 23 01:07:46.583296 containerd[1689]: time="2026-01-23T01:07:46.583265471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b9b5df79c-pfx6f,Uid:8938b6cd-2993-4177-a47a-bf7c96438cfc,Namespace:calico-apiserver,Attempt:0,}" Jan 23 01:07:46.589975 containerd[1689]: time="2026-01-23T01:07:46.589896541Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-66bc5c9577-kgxwh,Uid:5c478b45-6d16-4dea-9945-700ba45b5350,Namespace:kube-system,Attempt:0,}" Jan 23 01:07:46.726293 systemd-networkd[1477]: cali637883d08fa: Link UP Jan 23 01:07:46.727256 systemd-networkd[1477]: cali637883d08fa: Gained carrier Jan 23 01:07:46.743540 kubelet[3167]: E0123 01:07:46.743359 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5db5b8969f-b7ffs" podUID="464a8745-942a-406e-a6f7-99a7e252e57c" Jan 23 01:07:46.744298 containerd[1689]: 2026-01-23 01:07:46.647 [INFO][4994] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--n--059e17308a-k8s-calico--apiserver--6cb5db5c6d--685h6-eth0 calico-apiserver-6cb5db5c6d- calico-apiserver 9a0795c2-7ecf-4504-8071-c68e46a2784c 820 0 2026-01-23 01:07:17 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6cb5db5c6d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.2.2-n-059e17308a calico-apiserver-6cb5db5c6d-685h6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali637883d08fa [] [] }} ContainerID="c7933b16a76d0a5b116f16e5066ff83a0519728f99baba284c9fc650eeb40364" Namespace="calico-apiserver" Pod="calico-apiserver-6cb5db5c6d-685h6" 
WorkloadEndpoint="ci--4459.2.2--n--059e17308a-k8s-calico--apiserver--6cb5db5c6d--685h6-" Jan 23 01:07:46.744298 containerd[1689]: 2026-01-23 01:07:46.647 [INFO][4994] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c7933b16a76d0a5b116f16e5066ff83a0519728f99baba284c9fc650eeb40364" Namespace="calico-apiserver" Pod="calico-apiserver-6cb5db5c6d-685h6" WorkloadEndpoint="ci--4459.2.2--n--059e17308a-k8s-calico--apiserver--6cb5db5c6d--685h6-eth0" Jan 23 01:07:46.744298 containerd[1689]: 2026-01-23 01:07:46.684 [INFO][5034] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c7933b16a76d0a5b116f16e5066ff83a0519728f99baba284c9fc650eeb40364" HandleID="k8s-pod-network.c7933b16a76d0a5b116f16e5066ff83a0519728f99baba284c9fc650eeb40364" Workload="ci--4459.2.2--n--059e17308a-k8s-calico--apiserver--6cb5db5c6d--685h6-eth0" Jan 23 01:07:46.744298 containerd[1689]: 2026-01-23 01:07:46.685 [INFO][5034] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c7933b16a76d0a5b116f16e5066ff83a0519728f99baba284c9fc650eeb40364" HandleID="k8s-pod-network.c7933b16a76d0a5b116f16e5066ff83a0519728f99baba284c9fc650eeb40364" Workload="ci--4459.2.2--n--059e17308a-k8s-calico--apiserver--6cb5db5c6d--685h6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5030), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.2.2-n-059e17308a", "pod":"calico-apiserver-6cb5db5c6d-685h6", "timestamp":"2026-01-23 01:07:46.684849651 +0000 UTC"}, Hostname:"ci-4459.2.2-n-059e17308a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 01:07:46.744298 containerd[1689]: 2026-01-23 01:07:46.685 [INFO][5034] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 23 01:07:46.744298 containerd[1689]: 2026-01-23 01:07:46.685 [INFO][5034] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 01:07:46.744298 containerd[1689]: 2026-01-23 01:07:46.685 [INFO][5034] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-n-059e17308a' Jan 23 01:07:46.744298 containerd[1689]: 2026-01-23 01:07:46.691 [INFO][5034] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c7933b16a76d0a5b116f16e5066ff83a0519728f99baba284c9fc650eeb40364" host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:46.744298 containerd[1689]: 2026-01-23 01:07:46.695 [INFO][5034] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:46.744298 containerd[1689]: 2026-01-23 01:07:46.699 [INFO][5034] ipam/ipam.go 511: Trying affinity for 192.168.42.128/26 host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:46.744298 containerd[1689]: 2026-01-23 01:07:46.701 [INFO][5034] ipam/ipam.go 158: Attempting to load block cidr=192.168.42.128/26 host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:46.744298 containerd[1689]: 2026-01-23 01:07:46.703 [INFO][5034] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.42.128/26 host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:46.744298 containerd[1689]: 2026-01-23 01:07:46.703 [INFO][5034] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.42.128/26 handle="k8s-pod-network.c7933b16a76d0a5b116f16e5066ff83a0519728f99baba284c9fc650eeb40364" host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:46.744298 containerd[1689]: 2026-01-23 01:07:46.705 [INFO][5034] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c7933b16a76d0a5b116f16e5066ff83a0519728f99baba284c9fc650eeb40364 Jan 23 01:07:46.744298 containerd[1689]: 2026-01-23 01:07:46.709 [INFO][5034] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.42.128/26 handle="k8s-pod-network.c7933b16a76d0a5b116f16e5066ff83a0519728f99baba284c9fc650eeb40364" 
host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:46.744298 containerd[1689]: 2026-01-23 01:07:46.717 [INFO][5034] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.42.133/26] block=192.168.42.128/26 handle="k8s-pod-network.c7933b16a76d0a5b116f16e5066ff83a0519728f99baba284c9fc650eeb40364" host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:46.744298 containerd[1689]: 2026-01-23 01:07:46.717 [INFO][5034] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.42.133/26] handle="k8s-pod-network.c7933b16a76d0a5b116f16e5066ff83a0519728f99baba284c9fc650eeb40364" host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:46.744298 containerd[1689]: 2026-01-23 01:07:46.717 [INFO][5034] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 01:07:46.744298 containerd[1689]: 2026-01-23 01:07:46.717 [INFO][5034] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.42.133/26] IPv6=[] ContainerID="c7933b16a76d0a5b116f16e5066ff83a0519728f99baba284c9fc650eeb40364" HandleID="k8s-pod-network.c7933b16a76d0a5b116f16e5066ff83a0519728f99baba284c9fc650eeb40364" Workload="ci--4459.2.2--n--059e17308a-k8s-calico--apiserver--6cb5db5c6d--685h6-eth0" Jan 23 01:07:46.744809 containerd[1689]: 2026-01-23 01:07:46.719 [INFO][4994] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c7933b16a76d0a5b116f16e5066ff83a0519728f99baba284c9fc650eeb40364" Namespace="calico-apiserver" Pod="calico-apiserver-6cb5db5c6d-685h6" WorkloadEndpoint="ci--4459.2.2--n--059e17308a-k8s-calico--apiserver--6cb5db5c6d--685h6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--n--059e17308a-k8s-calico--apiserver--6cb5db5c6d--685h6-eth0", GenerateName:"calico-apiserver-6cb5db5c6d-", Namespace:"calico-apiserver", SelfLink:"", UID:"9a0795c2-7ecf-4504-8071-c68e46a2784c", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 7, 17, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cb5db5c6d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-n-059e17308a", ContainerID:"", Pod:"calico-apiserver-6cb5db5c6d-685h6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.42.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali637883d08fa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:07:46.744809 containerd[1689]: 2026-01-23 01:07:46.720 [INFO][4994] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.42.133/32] ContainerID="c7933b16a76d0a5b116f16e5066ff83a0519728f99baba284c9fc650eeb40364" Namespace="calico-apiserver" Pod="calico-apiserver-6cb5db5c6d-685h6" WorkloadEndpoint="ci--4459.2.2--n--059e17308a-k8s-calico--apiserver--6cb5db5c6d--685h6-eth0" Jan 23 01:07:46.744809 containerd[1689]: 2026-01-23 01:07:46.720 [INFO][4994] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali637883d08fa ContainerID="c7933b16a76d0a5b116f16e5066ff83a0519728f99baba284c9fc650eeb40364" Namespace="calico-apiserver" Pod="calico-apiserver-6cb5db5c6d-685h6" WorkloadEndpoint="ci--4459.2.2--n--059e17308a-k8s-calico--apiserver--6cb5db5c6d--685h6-eth0" Jan 23 01:07:46.744809 containerd[1689]: 2026-01-23 01:07:46.727 [INFO][4994] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="c7933b16a76d0a5b116f16e5066ff83a0519728f99baba284c9fc650eeb40364" Namespace="calico-apiserver" Pod="calico-apiserver-6cb5db5c6d-685h6" WorkloadEndpoint="ci--4459.2.2--n--059e17308a-k8s-calico--apiserver--6cb5db5c6d--685h6-eth0" Jan 23 01:07:46.744809 containerd[1689]: 2026-01-23 01:07:46.727 [INFO][4994] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c7933b16a76d0a5b116f16e5066ff83a0519728f99baba284c9fc650eeb40364" Namespace="calico-apiserver" Pod="calico-apiserver-6cb5db5c6d-685h6" WorkloadEndpoint="ci--4459.2.2--n--059e17308a-k8s-calico--apiserver--6cb5db5c6d--685h6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--n--059e17308a-k8s-calico--apiserver--6cb5db5c6d--685h6-eth0", GenerateName:"calico-apiserver-6cb5db5c6d-", Namespace:"calico-apiserver", SelfLink:"", UID:"9a0795c2-7ecf-4504-8071-c68e46a2784c", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 7, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cb5db5c6d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-n-059e17308a", ContainerID:"c7933b16a76d0a5b116f16e5066ff83a0519728f99baba284c9fc650eeb40364", Pod:"calico-apiserver-6cb5db5c6d-685h6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.42.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali637883d08fa", MAC:"3a:8a:d8:29:69:e9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:07:46.744809 containerd[1689]: 2026-01-23 01:07:46.740 [INFO][4994] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c7933b16a76d0a5b116f16e5066ff83a0519728f99baba284c9fc650eeb40364" Namespace="calico-apiserver" Pod="calico-apiserver-6cb5db5c6d-685h6" WorkloadEndpoint="ci--4459.2.2--n--059e17308a-k8s-calico--apiserver--6cb5db5c6d--685h6-eth0" Jan 23 01:07:46.746613 kubelet[3167]: E0123 01:07:46.743359 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tdsjl" podUID="c928b9b3-da34-4326-8f7b-130857d457b5" Jan 23 01:07:46.790464 kubelet[3167]: I0123 01:07:46.790189 3167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-gl4bv" podStartSLOduration=38.790170889 podStartE2EDuration="38.790170889s" podCreationTimestamp="2026-01-23 01:07:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:07:46.759002655 +0000 UTC m=+45.278460726" watchObservedRunningTime="2026-01-23 01:07:46.790170889 +0000 UTC m=+45.309628965" Jan 23 01:07:46.795511 containerd[1689]: time="2026-01-23T01:07:46.795438854Z" level=info msg="connecting to shim c7933b16a76d0a5b116f16e5066ff83a0519728f99baba284c9fc650eeb40364" 
address="unix:///run/containerd/s/003023235d81df546aa5def2ade9a65e2401e0d7e5b6306293855071bf3c9268" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:07:46.838439 systemd[1]: Started cri-containerd-c7933b16a76d0a5b116f16e5066ff83a0519728f99baba284c9fc650eeb40364.scope - libcontainer container c7933b16a76d0a5b116f16e5066ff83a0519728f99baba284c9fc650eeb40364. Jan 23 01:07:46.872441 systemd-networkd[1477]: calied0da8dbf58: Link UP Jan 23 01:07:46.873387 systemd-networkd[1477]: calied0da8dbf58: Gained carrier Jan 23 01:07:46.892907 containerd[1689]: 2026-01-23 01:07:46.649 [INFO][5002] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--n--059e17308a-k8s-calico--apiserver--5b9b5df79c--pfx6f-eth0 calico-apiserver-5b9b5df79c- calico-apiserver 8938b6cd-2993-4177-a47a-bf7c96438cfc 825 0 2026-01-23 01:07:17 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5b9b5df79c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.2.2-n-059e17308a calico-apiserver-5b9b5df79c-pfx6f eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calied0da8dbf58 [] [] }} ContainerID="4980a6f6e852e40eff8c5a1341af1e8c83ce0439e33c16792f3b64e7f183949c" Namespace="calico-apiserver" Pod="calico-apiserver-5b9b5df79c-pfx6f" WorkloadEndpoint="ci--4459.2.2--n--059e17308a-k8s-calico--apiserver--5b9b5df79c--pfx6f-" Jan 23 01:07:46.892907 containerd[1689]: 2026-01-23 01:07:46.649 [INFO][5002] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4980a6f6e852e40eff8c5a1341af1e8c83ce0439e33c16792f3b64e7f183949c" Namespace="calico-apiserver" Pod="calico-apiserver-5b9b5df79c-pfx6f" WorkloadEndpoint="ci--4459.2.2--n--059e17308a-k8s-calico--apiserver--5b9b5df79c--pfx6f-eth0" Jan 23 01:07:46.892907 containerd[1689]: 2026-01-23 
01:07:46.693 [INFO][5032] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4980a6f6e852e40eff8c5a1341af1e8c83ce0439e33c16792f3b64e7f183949c" HandleID="k8s-pod-network.4980a6f6e852e40eff8c5a1341af1e8c83ce0439e33c16792f3b64e7f183949c" Workload="ci--4459.2.2--n--059e17308a-k8s-calico--apiserver--5b9b5df79c--pfx6f-eth0" Jan 23 01:07:46.892907 containerd[1689]: 2026-01-23 01:07:46.693 [INFO][5032] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4980a6f6e852e40eff8c5a1341af1e8c83ce0439e33c16792f3b64e7f183949c" HandleID="k8s-pod-network.4980a6f6e852e40eff8c5a1341af1e8c83ce0439e33c16792f3b64e7f183949c" Workload="ci--4459.2.2--n--059e17308a-k8s-calico--apiserver--5b9b5df79c--pfx6f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d51e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.2.2-n-059e17308a", "pod":"calico-apiserver-5b9b5df79c-pfx6f", "timestamp":"2026-01-23 01:07:46.69373653 +0000 UTC"}, Hostname:"ci-4459.2.2-n-059e17308a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 01:07:46.892907 containerd[1689]: 2026-01-23 01:07:46.694 [INFO][5032] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 01:07:46.892907 containerd[1689]: 2026-01-23 01:07:46.717 [INFO][5032] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 01:07:46.892907 containerd[1689]: 2026-01-23 01:07:46.717 [INFO][5032] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-n-059e17308a' Jan 23 01:07:46.892907 containerd[1689]: 2026-01-23 01:07:46.794 [INFO][5032] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4980a6f6e852e40eff8c5a1341af1e8c83ce0439e33c16792f3b64e7f183949c" host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:46.892907 containerd[1689]: 2026-01-23 01:07:46.828 [INFO][5032] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:46.892907 containerd[1689]: 2026-01-23 01:07:46.842 [INFO][5032] ipam/ipam.go 511: Trying affinity for 192.168.42.128/26 host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:46.892907 containerd[1689]: 2026-01-23 01:07:46.846 [INFO][5032] ipam/ipam.go 158: Attempting to load block cidr=192.168.42.128/26 host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:46.892907 containerd[1689]: 2026-01-23 01:07:46.849 [INFO][5032] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.42.128/26 host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:46.892907 containerd[1689]: 2026-01-23 01:07:46.849 [INFO][5032] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.42.128/26 handle="k8s-pod-network.4980a6f6e852e40eff8c5a1341af1e8c83ce0439e33c16792f3b64e7f183949c" host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:46.892907 containerd[1689]: 2026-01-23 01:07:46.852 [INFO][5032] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4980a6f6e852e40eff8c5a1341af1e8c83ce0439e33c16792f3b64e7f183949c Jan 23 01:07:46.892907 containerd[1689]: 2026-01-23 01:07:46.857 [INFO][5032] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.42.128/26 handle="k8s-pod-network.4980a6f6e852e40eff8c5a1341af1e8c83ce0439e33c16792f3b64e7f183949c" host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:46.892907 containerd[1689]: 2026-01-23 01:07:46.866 [INFO][5032] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.42.134/26] block=192.168.42.128/26 handle="k8s-pod-network.4980a6f6e852e40eff8c5a1341af1e8c83ce0439e33c16792f3b64e7f183949c" host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:46.892907 containerd[1689]: 2026-01-23 01:07:46.866 [INFO][5032] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.42.134/26] handle="k8s-pod-network.4980a6f6e852e40eff8c5a1341af1e8c83ce0439e33c16792f3b64e7f183949c" host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:46.892907 containerd[1689]: 2026-01-23 01:07:46.866 [INFO][5032] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 01:07:46.892907 containerd[1689]: 2026-01-23 01:07:46.866 [INFO][5032] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.42.134/26] IPv6=[] ContainerID="4980a6f6e852e40eff8c5a1341af1e8c83ce0439e33c16792f3b64e7f183949c" HandleID="k8s-pod-network.4980a6f6e852e40eff8c5a1341af1e8c83ce0439e33c16792f3b64e7f183949c" Workload="ci--4459.2.2--n--059e17308a-k8s-calico--apiserver--5b9b5df79c--pfx6f-eth0" Jan 23 01:07:46.893500 containerd[1689]: 2026-01-23 01:07:46.869 [INFO][5002] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4980a6f6e852e40eff8c5a1341af1e8c83ce0439e33c16792f3b64e7f183949c" Namespace="calico-apiserver" Pod="calico-apiserver-5b9b5df79c-pfx6f" WorkloadEndpoint="ci--4459.2.2--n--059e17308a-k8s-calico--apiserver--5b9b5df79c--pfx6f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--n--059e17308a-k8s-calico--apiserver--5b9b5df79c--pfx6f-eth0", GenerateName:"calico-apiserver-5b9b5df79c-", Namespace:"calico-apiserver", SelfLink:"", UID:"8938b6cd-2993-4177-a47a-bf7c96438cfc", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 7, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b9b5df79c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-n-059e17308a", ContainerID:"", Pod:"calico-apiserver-5b9b5df79c-pfx6f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.42.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calied0da8dbf58", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:07:46.893500 containerd[1689]: 2026-01-23 01:07:46.869 [INFO][5002] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.42.134/32] ContainerID="4980a6f6e852e40eff8c5a1341af1e8c83ce0439e33c16792f3b64e7f183949c" Namespace="calico-apiserver" Pod="calico-apiserver-5b9b5df79c-pfx6f" WorkloadEndpoint="ci--4459.2.2--n--059e17308a-k8s-calico--apiserver--5b9b5df79c--pfx6f-eth0" Jan 23 01:07:46.893500 containerd[1689]: 2026-01-23 01:07:46.869 [INFO][5002] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calied0da8dbf58 ContainerID="4980a6f6e852e40eff8c5a1341af1e8c83ce0439e33c16792f3b64e7f183949c" Namespace="calico-apiserver" Pod="calico-apiserver-5b9b5df79c-pfx6f" WorkloadEndpoint="ci--4459.2.2--n--059e17308a-k8s-calico--apiserver--5b9b5df79c--pfx6f-eth0" Jan 23 01:07:46.893500 containerd[1689]: 2026-01-23 01:07:46.873 [INFO][5002] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4980a6f6e852e40eff8c5a1341af1e8c83ce0439e33c16792f3b64e7f183949c" Namespace="calico-apiserver" 
Pod="calico-apiserver-5b9b5df79c-pfx6f" WorkloadEndpoint="ci--4459.2.2--n--059e17308a-k8s-calico--apiserver--5b9b5df79c--pfx6f-eth0" Jan 23 01:07:46.893500 containerd[1689]: 2026-01-23 01:07:46.877 [INFO][5002] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4980a6f6e852e40eff8c5a1341af1e8c83ce0439e33c16792f3b64e7f183949c" Namespace="calico-apiserver" Pod="calico-apiserver-5b9b5df79c-pfx6f" WorkloadEndpoint="ci--4459.2.2--n--059e17308a-k8s-calico--apiserver--5b9b5df79c--pfx6f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--n--059e17308a-k8s-calico--apiserver--5b9b5df79c--pfx6f-eth0", GenerateName:"calico-apiserver-5b9b5df79c-", Namespace:"calico-apiserver", SelfLink:"", UID:"8938b6cd-2993-4177-a47a-bf7c96438cfc", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 7, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b9b5df79c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-n-059e17308a", ContainerID:"4980a6f6e852e40eff8c5a1341af1e8c83ce0439e33c16792f3b64e7f183949c", Pod:"calico-apiserver-5b9b5df79c-pfx6f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.42.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, 
InterfaceName:"calied0da8dbf58", MAC:"22:8a:a8:c7:49:fd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:07:46.893500 containerd[1689]: 2026-01-23 01:07:46.888 [INFO][5002] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4980a6f6e852e40eff8c5a1341af1e8c83ce0439e33c16792f3b64e7f183949c" Namespace="calico-apiserver" Pod="calico-apiserver-5b9b5df79c-pfx6f" WorkloadEndpoint="ci--4459.2.2--n--059e17308a-k8s-calico--apiserver--5b9b5df79c--pfx6f-eth0" Jan 23 01:07:46.904556 containerd[1689]: time="2026-01-23T01:07:46.904526185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cb5db5c6d-685h6,Uid:9a0795c2-7ecf-4504-8071-c68e46a2784c,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"c7933b16a76d0a5b116f16e5066ff83a0519728f99baba284c9fc650eeb40364\"" Jan 23 01:07:46.905857 containerd[1689]: time="2026-01-23T01:07:46.905595250Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 01:07:46.935146 containerd[1689]: time="2026-01-23T01:07:46.934951101Z" level=info msg="connecting to shim 4980a6f6e852e40eff8c5a1341af1e8c83ce0439e33c16792f3b64e7f183949c" address="unix:///run/containerd/s/4355a0a23c81b980b07d388563db924bfe529b10005f835c1d439ad4322a4ae5" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:07:46.958289 systemd[1]: Started cri-containerd-4980a6f6e852e40eff8c5a1341af1e8c83ce0439e33c16792f3b64e7f183949c.scope - libcontainer container 4980a6f6e852e40eff8c5a1341af1e8c83ce0439e33c16792f3b64e7f183949c. 
Jan 23 01:07:46.965496 systemd-networkd[1477]: caliee361305e95: Link UP Jan 23 01:07:46.966742 systemd-networkd[1477]: caliee361305e95: Gained carrier Jan 23 01:07:46.985337 containerd[1689]: 2026-01-23 01:07:46.664 [INFO][5009] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--n--059e17308a-k8s-coredns--66bc5c9577--kgxwh-eth0 coredns-66bc5c9577- kube-system 5c478b45-6d16-4dea-9945-700ba45b5350 822 0 2026-01-23 01:07:08 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.2.2-n-059e17308a coredns-66bc5c9577-kgxwh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliee361305e95 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="237bc810b995d09807fe6973c815da942c101dae14b5a21ccd0c0993a9a4f051" Namespace="kube-system" Pod="coredns-66bc5c9577-kgxwh" WorkloadEndpoint="ci--4459.2.2--n--059e17308a-k8s-coredns--66bc5c9577--kgxwh-" Jan 23 01:07:46.985337 containerd[1689]: 2026-01-23 01:07:46.664 [INFO][5009] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="237bc810b995d09807fe6973c815da942c101dae14b5a21ccd0c0993a9a4f051" Namespace="kube-system" Pod="coredns-66bc5c9577-kgxwh" WorkloadEndpoint="ci--4459.2.2--n--059e17308a-k8s-coredns--66bc5c9577--kgxwh-eth0" Jan 23 01:07:46.985337 containerd[1689]: 2026-01-23 01:07:46.707 [INFO][5044] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="237bc810b995d09807fe6973c815da942c101dae14b5a21ccd0c0993a9a4f051" HandleID="k8s-pod-network.237bc810b995d09807fe6973c815da942c101dae14b5a21ccd0c0993a9a4f051" Workload="ci--4459.2.2--n--059e17308a-k8s-coredns--66bc5c9577--kgxwh-eth0" Jan 23 01:07:46.985337 containerd[1689]: 2026-01-23 01:07:46.707 [INFO][5044] ipam/ipam_plugin.go 
275: Auto assigning IP ContainerID="237bc810b995d09807fe6973c815da942c101dae14b5a21ccd0c0993a9a4f051" HandleID="k8s-pod-network.237bc810b995d09807fe6973c815da942c101dae14b5a21ccd0c0993a9a4f051" Workload="ci--4459.2.2--n--059e17308a-k8s-coredns--66bc5c9577--kgxwh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5060), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.2.2-n-059e17308a", "pod":"coredns-66bc5c9577-kgxwh", "timestamp":"2026-01-23 01:07:46.707162368 +0000 UTC"}, Hostname:"ci-4459.2.2-n-059e17308a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 01:07:46.985337 containerd[1689]: 2026-01-23 01:07:46.707 [INFO][5044] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 01:07:46.985337 containerd[1689]: 2026-01-23 01:07:46.866 [INFO][5044] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 01:07:46.985337 containerd[1689]: 2026-01-23 01:07:46.866 [INFO][5044] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-n-059e17308a' Jan 23 01:07:46.985337 containerd[1689]: 2026-01-23 01:07:46.895 [INFO][5044] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.237bc810b995d09807fe6973c815da942c101dae14b5a21ccd0c0993a9a4f051" host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:46.985337 containerd[1689]: 2026-01-23 01:07:46.923 [INFO][5044] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:46.985337 containerd[1689]: 2026-01-23 01:07:46.939 [INFO][5044] ipam/ipam.go 511: Trying affinity for 192.168.42.128/26 host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:46.985337 containerd[1689]: 2026-01-23 01:07:46.941 [INFO][5044] ipam/ipam.go 158: Attempting to load block cidr=192.168.42.128/26 host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:46.985337 containerd[1689]: 2026-01-23 01:07:46.943 [INFO][5044] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.42.128/26 host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:46.985337 containerd[1689]: 2026-01-23 01:07:46.943 [INFO][5044] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.42.128/26 handle="k8s-pod-network.237bc810b995d09807fe6973c815da942c101dae14b5a21ccd0c0993a9a4f051" host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:46.985337 containerd[1689]: 2026-01-23 01:07:46.944 [INFO][5044] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.237bc810b995d09807fe6973c815da942c101dae14b5a21ccd0c0993a9a4f051 Jan 23 01:07:46.985337 containerd[1689]: 2026-01-23 01:07:46.949 [INFO][5044] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.42.128/26 handle="k8s-pod-network.237bc810b995d09807fe6973c815da942c101dae14b5a21ccd0c0993a9a4f051" host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:46.985337 containerd[1689]: 2026-01-23 01:07:46.958 [INFO][5044] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.42.135/26] block=192.168.42.128/26 handle="k8s-pod-network.237bc810b995d09807fe6973c815da942c101dae14b5a21ccd0c0993a9a4f051" host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:46.985337 containerd[1689]: 2026-01-23 01:07:46.958 [INFO][5044] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.42.135/26] handle="k8s-pod-network.237bc810b995d09807fe6973c815da942c101dae14b5a21ccd0c0993a9a4f051" host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:46.985337 containerd[1689]: 2026-01-23 01:07:46.958 [INFO][5044] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 01:07:46.985337 containerd[1689]: 2026-01-23 01:07:46.958 [INFO][5044] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.42.135/26] IPv6=[] ContainerID="237bc810b995d09807fe6973c815da942c101dae14b5a21ccd0c0993a9a4f051" HandleID="k8s-pod-network.237bc810b995d09807fe6973c815da942c101dae14b5a21ccd0c0993a9a4f051" Workload="ci--4459.2.2--n--059e17308a-k8s-coredns--66bc5c9577--kgxwh-eth0" Jan 23 01:07:46.985919 containerd[1689]: 2026-01-23 01:07:46.961 [INFO][5009] cni-plugin/k8s.go 418: Populated endpoint ContainerID="237bc810b995d09807fe6973c815da942c101dae14b5a21ccd0c0993a9a4f051" Namespace="kube-system" Pod="coredns-66bc5c9577-kgxwh" WorkloadEndpoint="ci--4459.2.2--n--059e17308a-k8s-coredns--66bc5c9577--kgxwh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--n--059e17308a-k8s-coredns--66bc5c9577--kgxwh-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"5c478b45-6d16-4dea-9945-700ba45b5350", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 7, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-n-059e17308a", ContainerID:"", Pod:"coredns-66bc5c9577-kgxwh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.42.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliee361305e95", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:07:46.985919 containerd[1689]: 2026-01-23 01:07:46.961 [INFO][5009] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.42.135/32] ContainerID="237bc810b995d09807fe6973c815da942c101dae14b5a21ccd0c0993a9a4f051" Namespace="kube-system" Pod="coredns-66bc5c9577-kgxwh" WorkloadEndpoint="ci--4459.2.2--n--059e17308a-k8s-coredns--66bc5c9577--kgxwh-eth0" Jan 23 01:07:46.985919 containerd[1689]: 2026-01-23 01:07:46.961 [INFO][5009] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliee361305e95 
ContainerID="237bc810b995d09807fe6973c815da942c101dae14b5a21ccd0c0993a9a4f051" Namespace="kube-system" Pod="coredns-66bc5c9577-kgxwh" WorkloadEndpoint="ci--4459.2.2--n--059e17308a-k8s-coredns--66bc5c9577--kgxwh-eth0" Jan 23 01:07:46.985919 containerd[1689]: 2026-01-23 01:07:46.967 [INFO][5009] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="237bc810b995d09807fe6973c815da942c101dae14b5a21ccd0c0993a9a4f051" Namespace="kube-system" Pod="coredns-66bc5c9577-kgxwh" WorkloadEndpoint="ci--4459.2.2--n--059e17308a-k8s-coredns--66bc5c9577--kgxwh-eth0" Jan 23 01:07:46.985919 containerd[1689]: 2026-01-23 01:07:46.967 [INFO][5009] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="237bc810b995d09807fe6973c815da942c101dae14b5a21ccd0c0993a9a4f051" Namespace="kube-system" Pod="coredns-66bc5c9577-kgxwh" WorkloadEndpoint="ci--4459.2.2--n--059e17308a-k8s-coredns--66bc5c9577--kgxwh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--n--059e17308a-k8s-coredns--66bc5c9577--kgxwh-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"5c478b45-6d16-4dea-9945-700ba45b5350", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 7, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-n-059e17308a", ContainerID:"237bc810b995d09807fe6973c815da942c101dae14b5a21ccd0c0993a9a4f051", 
Pod:"coredns-66bc5c9577-kgxwh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.42.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliee361305e95", MAC:"d2:cd:97:e4:90:ae", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:07:46.986560 containerd[1689]: 2026-01-23 01:07:46.983 [INFO][5009] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="237bc810b995d09807fe6973c815da942c101dae14b5a21ccd0c0993a9a4f051" Namespace="kube-system" Pod="coredns-66bc5c9577-kgxwh" WorkloadEndpoint="ci--4459.2.2--n--059e17308a-k8s-coredns--66bc5c9577--kgxwh-eth0" Jan 23 01:07:47.023931 containerd[1689]: time="2026-01-23T01:07:47.023840447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b9b5df79c-pfx6f,Uid:8938b6cd-2993-4177-a47a-bf7c96438cfc,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"4980a6f6e852e40eff8c5a1341af1e8c83ce0439e33c16792f3b64e7f183949c\"" Jan 23 01:07:47.031463 containerd[1689]: time="2026-01-23T01:07:47.031405540Z" level=info msg="connecting to shim 
237bc810b995d09807fe6973c815da942c101dae14b5a21ccd0c0993a9a4f051" address="unix:///run/containerd/s/120ed465025a94da316c36b8fa1b9724cde6b3894a9f0ef94f6f2826e4ba6c39" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:07:47.050268 systemd[1]: Started cri-containerd-237bc810b995d09807fe6973c815da942c101dae14b5a21ccd0c0993a9a4f051.scope - libcontainer container 237bc810b995d09807fe6973c815da942c101dae14b5a21ccd0c0993a9a4f051. Jan 23 01:07:47.085411 containerd[1689]: time="2026-01-23T01:07:47.085385228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-kgxwh,Uid:5c478b45-6d16-4dea-9945-700ba45b5350,Namespace:kube-system,Attempt:0,} returns sandbox id \"237bc810b995d09807fe6973c815da942c101dae14b5a21ccd0c0993a9a4f051\"" Jan 23 01:07:47.091957 containerd[1689]: time="2026-01-23T01:07:47.091931876Z" level=info msg="CreateContainer within sandbox \"237bc810b995d09807fe6973c815da942c101dae14b5a21ccd0c0993a9a4f051\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 01:07:47.105630 containerd[1689]: time="2026-01-23T01:07:47.105606362Z" level=info msg="Container 396bb69d255d349824702a37940737dc5c38781455fd4cda0a64532ee2bb3676: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:07:47.116720 containerd[1689]: time="2026-01-23T01:07:47.116696117Z" level=info msg="CreateContainer within sandbox \"237bc810b995d09807fe6973c815da942c101dae14b5a21ccd0c0993a9a4f051\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"396bb69d255d349824702a37940737dc5c38781455fd4cda0a64532ee2bb3676\"" Jan 23 01:07:47.117230 containerd[1689]: time="2026-01-23T01:07:47.117113041Z" level=info msg="StartContainer for \"396bb69d255d349824702a37940737dc5c38781455fd4cda0a64532ee2bb3676\"" Jan 23 01:07:47.118036 containerd[1689]: time="2026-01-23T01:07:47.118009857Z" level=info msg="connecting to shim 396bb69d255d349824702a37940737dc5c38781455fd4cda0a64532ee2bb3676" 
address="unix:///run/containerd/s/120ed465025a94da316c36b8fa1b9724cde6b3894a9f0ef94f6f2826e4ba6c39" protocol=ttrpc version=3 Jan 23 01:07:47.121434 systemd-networkd[1477]: calic70f5558dbf: Gained IPv6LL Jan 23 01:07:47.135265 systemd[1]: Started cri-containerd-396bb69d255d349824702a37940737dc5c38781455fd4cda0a64532ee2bb3676.scope - libcontainer container 396bb69d255d349824702a37940737dc5c38781455fd4cda0a64532ee2bb3676. Jan 23 01:07:47.147809 containerd[1689]: time="2026-01-23T01:07:47.147022839Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:07:47.149623 containerd[1689]: time="2026-01-23T01:07:47.149527169Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 01:07:47.149623 containerd[1689]: time="2026-01-23T01:07:47.149602767Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 01:07:47.150086 kubelet[3167]: E0123 01:07:47.150056 3167 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:07:47.150227 kubelet[3167]: E0123 01:07:47.150095 3167 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 
01:07:47.150254 kubelet[3167]: E0123 01:07:47.150232 3167 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6cb5db5c6d-685h6_calico-apiserver(9a0795c2-7ecf-4504-8071-c68e46a2784c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 01:07:47.150291 kubelet[3167]: E0123 01:07:47.150265 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5db5c6d-685h6" podUID="9a0795c2-7ecf-4504-8071-c68e46a2784c" Jan 23 01:07:47.150685 containerd[1689]: time="2026-01-23T01:07:47.150582202Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 01:07:47.158735 containerd[1689]: time="2026-01-23T01:07:47.158712462Z" level=info msg="StartContainer for \"396bb69d255d349824702a37940737dc5c38781455fd4cda0a64532ee2bb3676\" returns successfully" Jan 23 01:07:47.388344 containerd[1689]: time="2026-01-23T01:07:47.388221614Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:07:47.390708 containerd[1689]: time="2026-01-23T01:07:47.390679561Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 
01:07:47.390815 containerd[1689]: time="2026-01-23T01:07:47.390741683Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 01:07:47.390872 kubelet[3167]: E0123 01:07:47.390828 3167 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:07:47.391373 kubelet[3167]: E0123 01:07:47.390879 3167 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:07:47.391373 kubelet[3167]: E0123 01:07:47.390954 3167 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5b9b5df79c-pfx6f_calico-apiserver(8938b6cd-2993-4177-a47a-bf7c96438cfc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 01:07:47.391373 kubelet[3167]: E0123 01:07:47.390985 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-5b9b5df79c-pfx6f" podUID="8938b6cd-2993-4177-a47a-bf7c96438cfc" Jan 23 01:07:47.441317 systemd-networkd[1477]: cali78dd343cbcc: Gained IPv6LL Jan 23 01:07:47.441920 systemd-networkd[1477]: vxlan.calico: Gained IPv6LL Jan 23 01:07:47.577543 containerd[1689]: time="2026-01-23T01:07:47.577516387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c5k7j,Uid:757abb7b-5fcc-4c56-ba6f-f09ed789238a,Namespace:calico-system,Attempt:0,}" Jan 23 01:07:47.581078 containerd[1689]: time="2026-01-23T01:07:47.581018069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cb5db5c6d-qkg5z,Uid:4dd5a4fb-a52c-4429-a2cb-1aa7fea80b6f,Namespace:calico-apiserver,Attempt:0,}" Jan 23 01:07:47.699893 systemd-networkd[1477]: caliaadbe8ebda7: Link UP Jan 23 01:07:47.700095 systemd-networkd[1477]: caliaadbe8ebda7: Gained carrier Jan 23 01:07:47.720681 containerd[1689]: 2026-01-23 01:07:47.628 [INFO][5256] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--n--059e17308a-k8s-csi--node--driver--c5k7j-eth0 csi-node-driver- calico-system 757abb7b-5fcc-4c56-ba6f-f09ed789238a 714 0 2026-01-23 01:07:20 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4459.2.2-n-059e17308a csi-node-driver-c5k7j eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliaadbe8ebda7 [] [] }} ContainerID="a2188a3b5cb60e7dec460a9835910bd357d9e8dad39bd2125729055c97f46452" Namespace="calico-system" Pod="csi-node-driver-c5k7j" WorkloadEndpoint="ci--4459.2.2--n--059e17308a-k8s-csi--node--driver--c5k7j-" Jan 23 01:07:47.720681 containerd[1689]: 2026-01-23 01:07:47.628 [INFO][5256] 
cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a2188a3b5cb60e7dec460a9835910bd357d9e8dad39bd2125729055c97f46452" Namespace="calico-system" Pod="csi-node-driver-c5k7j" WorkloadEndpoint="ci--4459.2.2--n--059e17308a-k8s-csi--node--driver--c5k7j-eth0" Jan 23 01:07:47.720681 containerd[1689]: 2026-01-23 01:07:47.657 [INFO][5280] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a2188a3b5cb60e7dec460a9835910bd357d9e8dad39bd2125729055c97f46452" HandleID="k8s-pod-network.a2188a3b5cb60e7dec460a9835910bd357d9e8dad39bd2125729055c97f46452" Workload="ci--4459.2.2--n--059e17308a-k8s-csi--node--driver--c5k7j-eth0" Jan 23 01:07:47.720681 containerd[1689]: 2026-01-23 01:07:47.657 [INFO][5280] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a2188a3b5cb60e7dec460a9835910bd357d9e8dad39bd2125729055c97f46452" HandleID="k8s-pod-network.a2188a3b5cb60e7dec460a9835910bd357d9e8dad39bd2125729055c97f46452" Workload="ci--4459.2.2--n--059e17308a-k8s-csi--node--driver--c5k7j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d56d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.2-n-059e17308a", "pod":"csi-node-driver-c5k7j", "timestamp":"2026-01-23 01:07:47.657606262 +0000 UTC"}, Hostname:"ci-4459.2.2-n-059e17308a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 01:07:47.720681 containerd[1689]: 2026-01-23 01:07:47.657 [INFO][5280] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 01:07:47.720681 containerd[1689]: 2026-01-23 01:07:47.657 [INFO][5280] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 01:07:47.720681 containerd[1689]: 2026-01-23 01:07:47.657 [INFO][5280] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-n-059e17308a' Jan 23 01:07:47.720681 containerd[1689]: 2026-01-23 01:07:47.663 [INFO][5280] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a2188a3b5cb60e7dec460a9835910bd357d9e8dad39bd2125729055c97f46452" host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:47.720681 containerd[1689]: 2026-01-23 01:07:47.667 [INFO][5280] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:47.720681 containerd[1689]: 2026-01-23 01:07:47.672 [INFO][5280] ipam/ipam.go 511: Trying affinity for 192.168.42.128/26 host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:47.720681 containerd[1689]: 2026-01-23 01:07:47.674 [INFO][5280] ipam/ipam.go 158: Attempting to load block cidr=192.168.42.128/26 host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:47.720681 containerd[1689]: 2026-01-23 01:07:47.676 [INFO][5280] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.42.128/26 host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:47.720681 containerd[1689]: 2026-01-23 01:07:47.676 [INFO][5280] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.42.128/26 handle="k8s-pod-network.a2188a3b5cb60e7dec460a9835910bd357d9e8dad39bd2125729055c97f46452" host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:47.720681 containerd[1689]: 2026-01-23 01:07:47.677 [INFO][5280] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a2188a3b5cb60e7dec460a9835910bd357d9e8dad39bd2125729055c97f46452 Jan 23 01:07:47.720681 containerd[1689]: 2026-01-23 01:07:47.681 [INFO][5280] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.42.128/26 handle="k8s-pod-network.a2188a3b5cb60e7dec460a9835910bd357d9e8dad39bd2125729055c97f46452" host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:47.720681 containerd[1689]: 2026-01-23 01:07:47.692 [INFO][5280] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.42.136/26] block=192.168.42.128/26 handle="k8s-pod-network.a2188a3b5cb60e7dec460a9835910bd357d9e8dad39bd2125729055c97f46452" host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:47.720681 containerd[1689]: 2026-01-23 01:07:47.692 [INFO][5280] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.42.136/26] handle="k8s-pod-network.a2188a3b5cb60e7dec460a9835910bd357d9e8dad39bd2125729055c97f46452" host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:47.720681 containerd[1689]: 2026-01-23 01:07:47.692 [INFO][5280] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 01:07:47.720681 containerd[1689]: 2026-01-23 01:07:47.692 [INFO][5280] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.42.136/26] IPv6=[] ContainerID="a2188a3b5cb60e7dec460a9835910bd357d9e8dad39bd2125729055c97f46452" HandleID="k8s-pod-network.a2188a3b5cb60e7dec460a9835910bd357d9e8dad39bd2125729055c97f46452" Workload="ci--4459.2.2--n--059e17308a-k8s-csi--node--driver--c5k7j-eth0" Jan 23 01:07:47.721354 containerd[1689]: 2026-01-23 01:07:47.695 [INFO][5256] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a2188a3b5cb60e7dec460a9835910bd357d9e8dad39bd2125729055c97f46452" Namespace="calico-system" Pod="csi-node-driver-c5k7j" WorkloadEndpoint="ci--4459.2.2--n--059e17308a-k8s-csi--node--driver--c5k7j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--n--059e17308a-k8s-csi--node--driver--c5k7j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"757abb7b-5fcc-4c56-ba6f-f09ed789238a", ResourceVersion:"714", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 7, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-n-059e17308a", ContainerID:"", Pod:"csi-node-driver-c5k7j", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.42.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliaadbe8ebda7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:07:47.721354 containerd[1689]: 2026-01-23 01:07:47.695 [INFO][5256] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.42.136/32] ContainerID="a2188a3b5cb60e7dec460a9835910bd357d9e8dad39bd2125729055c97f46452" Namespace="calico-system" Pod="csi-node-driver-c5k7j" WorkloadEndpoint="ci--4459.2.2--n--059e17308a-k8s-csi--node--driver--c5k7j-eth0" Jan 23 01:07:47.721354 containerd[1689]: 2026-01-23 01:07:47.695 [INFO][5256] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaadbe8ebda7 ContainerID="a2188a3b5cb60e7dec460a9835910bd357d9e8dad39bd2125729055c97f46452" Namespace="calico-system" Pod="csi-node-driver-c5k7j" WorkloadEndpoint="ci--4459.2.2--n--059e17308a-k8s-csi--node--driver--c5k7j-eth0" Jan 23 01:07:47.721354 containerd[1689]: 2026-01-23 01:07:47.701 [INFO][5256] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a2188a3b5cb60e7dec460a9835910bd357d9e8dad39bd2125729055c97f46452" Namespace="calico-system" Pod="csi-node-driver-c5k7j" WorkloadEndpoint="ci--4459.2.2--n--059e17308a-k8s-csi--node--driver--c5k7j-eth0" Jan 23 01:07:47.721354 
containerd[1689]: 2026-01-23 01:07:47.701 [INFO][5256] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a2188a3b5cb60e7dec460a9835910bd357d9e8dad39bd2125729055c97f46452" Namespace="calico-system" Pod="csi-node-driver-c5k7j" WorkloadEndpoint="ci--4459.2.2--n--059e17308a-k8s-csi--node--driver--c5k7j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--n--059e17308a-k8s-csi--node--driver--c5k7j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"757abb7b-5fcc-4c56-ba6f-f09ed789238a", ResourceVersion:"714", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 7, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-n-059e17308a", ContainerID:"a2188a3b5cb60e7dec460a9835910bd357d9e8dad39bd2125729055c97f46452", Pod:"csi-node-driver-c5k7j", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.42.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliaadbe8ebda7", MAC:"e2:f4:c2:21:5a:09", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:07:47.721354 containerd[1689]: 
2026-01-23 01:07:47.717 [INFO][5256] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a2188a3b5cb60e7dec460a9835910bd357d9e8dad39bd2125729055c97f46452" Namespace="calico-system" Pod="csi-node-driver-c5k7j" WorkloadEndpoint="ci--4459.2.2--n--059e17308a-k8s-csi--node--driver--c5k7j-eth0" Jan 23 01:07:47.745704 kubelet[3167]: E0123 01:07:47.745649 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b9b5df79c-pfx6f" podUID="8938b6cd-2993-4177-a47a-bf7c96438cfc" Jan 23 01:07:47.749454 kubelet[3167]: E0123 01:07:47.749407 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tdsjl" podUID="c928b9b3-da34-4326-8f7b-130857d457b5" Jan 23 01:07:47.750122 kubelet[3167]: E0123 01:07:47.750091 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5db5c6d-685h6" podUID="9a0795c2-7ecf-4504-8071-c68e46a2784c" Jan 23 01:07:47.772283 containerd[1689]: time="2026-01-23T01:07:47.772251572Z" level=info msg="connecting to shim a2188a3b5cb60e7dec460a9835910bd357d9e8dad39bd2125729055c97f46452" address="unix:///run/containerd/s/02368b1eab1d1e5ea800e973b8c8e6c9ab43c64d83fa25bca5a4cb39df6068d7" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:07:47.801507 kubelet[3167]: I0123 01:07:47.800963 3167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-kgxwh" podStartSLOduration=39.800949587 podStartE2EDuration="39.800949587s" podCreationTimestamp="2026-01-23 01:07:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:07:47.768556971 +0000 UTC m=+46.288015037" watchObservedRunningTime="2026-01-23 01:07:47.800949587 +0000 UTC m=+46.320407654" Jan 23 01:07:47.811399 systemd[1]: Started cri-containerd-a2188a3b5cb60e7dec460a9835910bd357d9e8dad39bd2125729055c97f46452.scope - libcontainer container a2188a3b5cb60e7dec460a9835910bd357d9e8dad39bd2125729055c97f46452. 
Jan 23 01:07:47.880358 containerd[1689]: time="2026-01-23T01:07:47.880330319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c5k7j,Uid:757abb7b-5fcc-4c56-ba6f-f09ed789238a,Namespace:calico-system,Attempt:0,} returns sandbox id \"a2188a3b5cb60e7dec460a9835910bd357d9e8dad39bd2125729055c97f46452\"" Jan 23 01:07:47.884953 containerd[1689]: time="2026-01-23T01:07:47.883640726Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 01:07:47.897317 systemd-networkd[1477]: calib9c6e6d02eb: Link UP Jan 23 01:07:47.898360 systemd-networkd[1477]: calib9c6e6d02eb: Gained carrier Jan 23 01:07:47.919558 containerd[1689]: 2026-01-23 01:07:47.633 [INFO][5266] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--n--059e17308a-k8s-calico--apiserver--6cb5db5c6d--qkg5z-eth0 calico-apiserver-6cb5db5c6d- calico-apiserver 4dd5a4fb-a52c-4429-a2cb-1aa7fea80b6f 824 0 2026-01-23 01:07:17 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6cb5db5c6d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.2.2-n-059e17308a calico-apiserver-6cb5db5c6d-qkg5z eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib9c6e6d02eb [] [] }} ContainerID="2636cbd11bdd1fdf6b5ad1db6e69ec55344bcb0eb9a5c4f73a997d301cd1794f" Namespace="calico-apiserver" Pod="calico-apiserver-6cb5db5c6d-qkg5z" WorkloadEndpoint="ci--4459.2.2--n--059e17308a-k8s-calico--apiserver--6cb5db5c6d--qkg5z-" Jan 23 01:07:47.919558 containerd[1689]: 2026-01-23 01:07:47.633 [INFO][5266] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2636cbd11bdd1fdf6b5ad1db6e69ec55344bcb0eb9a5c4f73a997d301cd1794f" Namespace="calico-apiserver" Pod="calico-apiserver-6cb5db5c6d-qkg5z" 
WorkloadEndpoint="ci--4459.2.2--n--059e17308a-k8s-calico--apiserver--6cb5db5c6d--qkg5z-eth0" Jan 23 01:07:47.919558 containerd[1689]: 2026-01-23 01:07:47.658 [INFO][5282] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2636cbd11bdd1fdf6b5ad1db6e69ec55344bcb0eb9a5c4f73a997d301cd1794f" HandleID="k8s-pod-network.2636cbd11bdd1fdf6b5ad1db6e69ec55344bcb0eb9a5c4f73a997d301cd1794f" Workload="ci--4459.2.2--n--059e17308a-k8s-calico--apiserver--6cb5db5c6d--qkg5z-eth0" Jan 23 01:07:47.919558 containerd[1689]: 2026-01-23 01:07:47.658 [INFO][5282] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2636cbd11bdd1fdf6b5ad1db6e69ec55344bcb0eb9a5c4f73a997d301cd1794f" HandleID="k8s-pod-network.2636cbd11bdd1fdf6b5ad1db6e69ec55344bcb0eb9a5c4f73a997d301cd1794f" Workload="ci--4459.2.2--n--059e17308a-k8s-calico--apiserver--6cb5db5c6d--qkg5z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cb010), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.2.2-n-059e17308a", "pod":"calico-apiserver-6cb5db5c6d-qkg5z", "timestamp":"2026-01-23 01:07:47.658647357 +0000 UTC"}, Hostname:"ci-4459.2.2-n-059e17308a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 01:07:47.919558 containerd[1689]: 2026-01-23 01:07:47.658 [INFO][5282] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 01:07:47.919558 containerd[1689]: 2026-01-23 01:07:47.692 [INFO][5282] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 01:07:47.919558 containerd[1689]: 2026-01-23 01:07:47.693 [INFO][5282] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-n-059e17308a' Jan 23 01:07:47.919558 containerd[1689]: 2026-01-23 01:07:47.771 [INFO][5282] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2636cbd11bdd1fdf6b5ad1db6e69ec55344bcb0eb9a5c4f73a997d301cd1794f" host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:47.919558 containerd[1689]: 2026-01-23 01:07:47.792 [INFO][5282] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:47.919558 containerd[1689]: 2026-01-23 01:07:47.827 [INFO][5282] ipam/ipam.go 511: Trying affinity for 192.168.42.128/26 host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:47.919558 containerd[1689]: 2026-01-23 01:07:47.832 [INFO][5282] ipam/ipam.go 158: Attempting to load block cidr=192.168.42.128/26 host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:47.919558 containerd[1689]: 2026-01-23 01:07:47.847 [INFO][5282] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.42.128/26 host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:47.919558 containerd[1689]: 2026-01-23 01:07:47.848 [INFO][5282] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.42.128/26 handle="k8s-pod-network.2636cbd11bdd1fdf6b5ad1db6e69ec55344bcb0eb9a5c4f73a997d301cd1794f" host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:47.919558 containerd[1689]: 2026-01-23 01:07:47.855 [INFO][5282] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2636cbd11bdd1fdf6b5ad1db6e69ec55344bcb0eb9a5c4f73a997d301cd1794f Jan 23 01:07:47.919558 containerd[1689]: 2026-01-23 01:07:47.870 [INFO][5282] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.42.128/26 handle="k8s-pod-network.2636cbd11bdd1fdf6b5ad1db6e69ec55344bcb0eb9a5c4f73a997d301cd1794f" host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:47.919558 containerd[1689]: 2026-01-23 01:07:47.888 [INFO][5282] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.42.137/26] block=192.168.42.128/26 handle="k8s-pod-network.2636cbd11bdd1fdf6b5ad1db6e69ec55344bcb0eb9a5c4f73a997d301cd1794f" host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:47.919558 containerd[1689]: 2026-01-23 01:07:47.888 [INFO][5282] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.42.137/26] handle="k8s-pod-network.2636cbd11bdd1fdf6b5ad1db6e69ec55344bcb0eb9a5c4f73a997d301cd1794f" host="ci-4459.2.2-n-059e17308a" Jan 23 01:07:47.919558 containerd[1689]: 2026-01-23 01:07:47.888 [INFO][5282] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 01:07:47.919558 containerd[1689]: 2026-01-23 01:07:47.888 [INFO][5282] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.42.137/26] IPv6=[] ContainerID="2636cbd11bdd1fdf6b5ad1db6e69ec55344bcb0eb9a5c4f73a997d301cd1794f" HandleID="k8s-pod-network.2636cbd11bdd1fdf6b5ad1db6e69ec55344bcb0eb9a5c4f73a997d301cd1794f" Workload="ci--4459.2.2--n--059e17308a-k8s-calico--apiserver--6cb5db5c6d--qkg5z-eth0" Jan 23 01:07:47.920062 containerd[1689]: 2026-01-23 01:07:47.890 [INFO][5266] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2636cbd11bdd1fdf6b5ad1db6e69ec55344bcb0eb9a5c4f73a997d301cd1794f" Namespace="calico-apiserver" Pod="calico-apiserver-6cb5db5c6d-qkg5z" WorkloadEndpoint="ci--4459.2.2--n--059e17308a-k8s-calico--apiserver--6cb5db5c6d--qkg5z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--n--059e17308a-k8s-calico--apiserver--6cb5db5c6d--qkg5z-eth0", GenerateName:"calico-apiserver-6cb5db5c6d-", Namespace:"calico-apiserver", SelfLink:"", UID:"4dd5a4fb-a52c-4429-a2cb-1aa7fea80b6f", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 7, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cb5db5c6d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-n-059e17308a", ContainerID:"", Pod:"calico-apiserver-6cb5db5c6d-qkg5z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.42.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib9c6e6d02eb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:07:47.920062 containerd[1689]: 2026-01-23 01:07:47.890 [INFO][5266] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.42.137/32] ContainerID="2636cbd11bdd1fdf6b5ad1db6e69ec55344bcb0eb9a5c4f73a997d301cd1794f" Namespace="calico-apiserver" Pod="calico-apiserver-6cb5db5c6d-qkg5z" WorkloadEndpoint="ci--4459.2.2--n--059e17308a-k8s-calico--apiserver--6cb5db5c6d--qkg5z-eth0" Jan 23 01:07:47.920062 containerd[1689]: 2026-01-23 01:07:47.891 [INFO][5266] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib9c6e6d02eb ContainerID="2636cbd11bdd1fdf6b5ad1db6e69ec55344bcb0eb9a5c4f73a997d301cd1794f" Namespace="calico-apiserver" Pod="calico-apiserver-6cb5db5c6d-qkg5z" WorkloadEndpoint="ci--4459.2.2--n--059e17308a-k8s-calico--apiserver--6cb5db5c6d--qkg5z-eth0" Jan 23 01:07:47.920062 containerd[1689]: 2026-01-23 01:07:47.899 [INFO][5266] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2636cbd11bdd1fdf6b5ad1db6e69ec55344bcb0eb9a5c4f73a997d301cd1794f" Namespace="calico-apiserver" 
Pod="calico-apiserver-6cb5db5c6d-qkg5z" WorkloadEndpoint="ci--4459.2.2--n--059e17308a-k8s-calico--apiserver--6cb5db5c6d--qkg5z-eth0" Jan 23 01:07:47.920062 containerd[1689]: 2026-01-23 01:07:47.900 [INFO][5266] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2636cbd11bdd1fdf6b5ad1db6e69ec55344bcb0eb9a5c4f73a997d301cd1794f" Namespace="calico-apiserver" Pod="calico-apiserver-6cb5db5c6d-qkg5z" WorkloadEndpoint="ci--4459.2.2--n--059e17308a-k8s-calico--apiserver--6cb5db5c6d--qkg5z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--n--059e17308a-k8s-calico--apiserver--6cb5db5c6d--qkg5z-eth0", GenerateName:"calico-apiserver-6cb5db5c6d-", Namespace:"calico-apiserver", SelfLink:"", UID:"4dd5a4fb-a52c-4429-a2cb-1aa7fea80b6f", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 7, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cb5db5c6d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-n-059e17308a", ContainerID:"2636cbd11bdd1fdf6b5ad1db6e69ec55344bcb0eb9a5c4f73a997d301cd1794f", Pod:"calico-apiserver-6cb5db5c6d-qkg5z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.42.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, 
InterfaceName:"calib9c6e6d02eb", MAC:"66:44:f1:26:75:37", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:07:47.920062 containerd[1689]: 2026-01-23 01:07:47.913 [INFO][5266] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2636cbd11bdd1fdf6b5ad1db6e69ec55344bcb0eb9a5c4f73a997d301cd1794f" Namespace="calico-apiserver" Pod="calico-apiserver-6cb5db5c6d-qkg5z" WorkloadEndpoint="ci--4459.2.2--n--059e17308a-k8s-calico--apiserver--6cb5db5c6d--qkg5z-eth0" Jan 23 01:07:47.960199 containerd[1689]: time="2026-01-23T01:07:47.960098503Z" level=info msg="connecting to shim 2636cbd11bdd1fdf6b5ad1db6e69ec55344bcb0eb9a5c4f73a997d301cd1794f" address="unix:///run/containerd/s/b5fc828f42a6e0ce4d474838142cfbcbbae279a04cdbe18dd111b64a10903900" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:07:47.986280 systemd[1]: Started cri-containerd-2636cbd11bdd1fdf6b5ad1db6e69ec55344bcb0eb9a5c4f73a997d301cd1794f.scope - libcontainer container 2636cbd11bdd1fdf6b5ad1db6e69ec55344bcb0eb9a5c4f73a997d301cd1794f. 
Jan 23 01:07:48.026500 containerd[1689]: time="2026-01-23T01:07:48.026473441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cb5db5c6d-qkg5z,Uid:4dd5a4fb-a52c-4429-a2cb-1aa7fea80b6f,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"2636cbd11bdd1fdf6b5ad1db6e69ec55344bcb0eb9a5c4f73a997d301cd1794f\"" Jan 23 01:07:48.129258 containerd[1689]: time="2026-01-23T01:07:48.129226472Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:07:48.131993 containerd[1689]: time="2026-01-23T01:07:48.131970403Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 01:07:48.132047 containerd[1689]: time="2026-01-23T01:07:48.132030359Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 01:07:48.132200 kubelet[3167]: E0123 01:07:48.132168 3167 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 01:07:48.132276 kubelet[3167]: E0123 01:07:48.132206 3167 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 01:07:48.132472 kubelet[3167]: E0123 01:07:48.132415 3167 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod 
csi-node-driver-c5k7j_calico-system(757abb7b-5fcc-4c56-ba6f-f09ed789238a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 01:07:48.132515 containerd[1689]: time="2026-01-23T01:07:48.132432065Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 01:07:48.367625 containerd[1689]: time="2026-01-23T01:07:48.367581193Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:07:48.372254 containerd[1689]: time="2026-01-23T01:07:48.372214243Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 01:07:48.372356 containerd[1689]: time="2026-01-23T01:07:48.372290223Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 01:07:48.372475 kubelet[3167]: E0123 01:07:48.372444 3167 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:07:48.372527 kubelet[3167]: E0123 01:07:48.372488 3167 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:07:48.372753 kubelet[3167]: E0123 01:07:48.372692 3167 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6cb5db5c6d-qkg5z_calico-apiserver(4dd5a4fb-a52c-4429-a2cb-1aa7fea80b6f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 01:07:48.372753 kubelet[3167]: E0123 01:07:48.372732 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5db5c6d-qkg5z" podUID="4dd5a4fb-a52c-4429-a2cb-1aa7fea80b6f" Jan 23 01:07:48.373046 containerd[1689]: time="2026-01-23T01:07:48.373023720Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 01:07:48.401354 systemd-networkd[1477]: cali637883d08fa: Gained IPv6LL Jan 23 01:07:48.465248 systemd-networkd[1477]: caliee361305e95: Gained IPv6LL Jan 23 01:07:48.635760 containerd[1689]: time="2026-01-23T01:07:48.635612798Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:07:48.637965 containerd[1689]: time="2026-01-23T01:07:48.637934976Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 01:07:48.638097 containerd[1689]: time="2026-01-23T01:07:48.638006204Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 01:07:48.638194 kubelet[3167]: E0123 01:07:48.638153 3167 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 01:07:48.638437 kubelet[3167]: E0123 01:07:48.638234 3167 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 01:07:48.638437 kubelet[3167]: E0123 01:07:48.638401 3167 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-c5k7j_calico-system(757abb7b-5fcc-4c56-ba6f-f09ed789238a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 01:07:48.638570 kubelet[3167]: E0123 01:07:48.638532 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-c5k7j" podUID="757abb7b-5fcc-4c56-ba6f-f09ed789238a" Jan 23 01:07:48.751252 kubelet[3167]: E0123 01:07:48.751201 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5db5c6d-qkg5z" podUID="4dd5a4fb-a52c-4429-a2cb-1aa7fea80b6f" Jan 23 01:07:48.753614 kubelet[3167]: E0123 01:07:48.753581 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to 
pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-c5k7j" podUID="757abb7b-5fcc-4c56-ba6f-f09ed789238a" Jan 23 01:07:48.753811 kubelet[3167]: E0123 01:07:48.753793 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5db5c6d-685h6" podUID="9a0795c2-7ecf-4504-8071-c68e46a2784c" Jan 23 01:07:48.754355 kubelet[3167]: E0123 01:07:48.754332 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b9b5df79c-pfx6f" podUID="8938b6cd-2993-4177-a47a-bf7c96438cfc" Jan 23 01:07:48.785254 systemd-networkd[1477]: calied0da8dbf58: Gained IPv6LL Jan 23 01:07:48.977292 systemd-networkd[1477]: caliaadbe8ebda7: Gained IPv6LL Jan 23 01:07:49.298282 systemd-networkd[1477]: calib9c6e6d02eb: Gained IPv6LL Jan 23 01:07:49.755502 kubelet[3167]: E0123 01:07:49.755279 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5db5c6d-qkg5z" podUID="4dd5a4fb-a52c-4429-a2cb-1aa7fea80b6f" Jan 23 01:07:49.758061 kubelet[3167]: E0123 01:07:49.758000 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-c5k7j" podUID="757abb7b-5fcc-4c56-ba6f-f09ed789238a" Jan 23 01:07:52.574622 containerd[1689]: time="2026-01-23T01:07:52.574541015Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 01:07:52.825290 containerd[1689]: time="2026-01-23T01:07:52.825091675Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:07:52.827853 containerd[1689]: time="2026-01-23T01:07:52.827818657Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 01:07:52.827961 containerd[1689]: time="2026-01-23T01:07:52.827837550Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 01:07:52.828017 kubelet[3167]: E0123 01:07:52.827975 3167 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 01:07:52.828307 kubelet[3167]: E0123 01:07:52.828023 3167 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 01:07:52.828307 kubelet[3167]: E0123 01:07:52.828087 3167 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-8f6bd7b4f-tth7c_calico-system(2cc70e7d-4b4a-4947-a27a-3aa84d2bff8f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 01:07:52.829424 containerd[1689]: time="2026-01-23T01:07:52.829395896Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 01:07:53.080395 containerd[1689]: 
time="2026-01-23T01:07:53.080313214Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:07:53.082770 containerd[1689]: time="2026-01-23T01:07:53.082742472Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 01:07:53.082869 containerd[1689]: time="2026-01-23T01:07:53.082798920Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 01:07:53.082934 kubelet[3167]: E0123 01:07:53.082889 3167 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 01:07:53.082934 kubelet[3167]: E0123 01:07:53.082924 3167 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 01:07:53.083000 kubelet[3167]: E0123 01:07:53.082984 3167 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-8f6bd7b4f-tth7c_calico-system(2cc70e7d-4b4a-4947-a27a-3aa84d2bff8f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 01:07:53.083053 kubelet[3167]: E0123 01:07:53.083026 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8f6bd7b4f-tth7c" podUID="2cc70e7d-4b4a-4947-a27a-3aa84d2bff8f" Jan 23 01:07:58.574002 containerd[1689]: time="2026-01-23T01:07:58.573598033Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 01:07:58.814461 containerd[1689]: time="2026-01-23T01:07:58.814319953Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:07:58.817541 containerd[1689]: time="2026-01-23T01:07:58.817446278Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 01:07:58.817541 containerd[1689]: time="2026-01-23T01:07:58.817484999Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 01:07:58.817700 kubelet[3167]: E0123 01:07:58.817671 
3167 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 01:07:58.817951 kubelet[3167]: E0123 01:07:58.817708 3167 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 01:07:58.817951 kubelet[3167]: E0123 01:07:58.817891 3167 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-5db5b8969f-b7ffs_calico-system(464a8745-942a-406e-a6f7-99a7e252e57c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 01:07:58.817951 kubelet[3167]: E0123 01:07:58.817925 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5db5b8969f-b7ffs" podUID="464a8745-942a-406e-a6f7-99a7e252e57c" Jan 23 01:07:58.818481 containerd[1689]: 
time="2026-01-23T01:07:58.818305145Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 01:07:59.068643 containerd[1689]: time="2026-01-23T01:07:59.068588501Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:07:59.071282 containerd[1689]: time="2026-01-23T01:07:59.071165955Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 01:07:59.071282 containerd[1689]: time="2026-01-23T01:07:59.071256807Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 01:07:59.071609 kubelet[3167]: E0123 01:07:59.071573 3167 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 01:07:59.071678 kubelet[3167]: E0123 01:07:59.071621 3167 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 01:07:59.071794 kubelet[3167]: E0123 01:07:59.071696 3167 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-tdsjl_calico-system(c928b9b3-da34-4326-8f7b-130857d457b5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 01:07:59.071794 kubelet[3167]: E0123 01:07:59.071730 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tdsjl" podUID="c928b9b3-da34-4326-8f7b-130857d457b5" Jan 23 01:08:00.573967 containerd[1689]: time="2026-01-23T01:08:00.573915632Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 01:08:00.831236 containerd[1689]: time="2026-01-23T01:08:00.830982764Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:08:00.833394 containerd[1689]: time="2026-01-23T01:08:00.833360329Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 01:08:00.833457 containerd[1689]: time="2026-01-23T01:08:00.833425178Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 01:08:00.833580 kubelet[3167]: E0123 01:08:00.833547 3167 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:08:00.834244 kubelet[3167]: E0123 01:08:00.833603 3167 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:08:00.834244 kubelet[3167]: E0123 01:08:00.833727 3167 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6cb5db5c6d-685h6_calico-apiserver(9a0795c2-7ecf-4504-8071-c68e46a2784c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 01:08:00.834244 kubelet[3167]: E0123 01:08:00.833855 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5db5c6d-685h6" podUID="9a0795c2-7ecf-4504-8071-c68e46a2784c" Jan 23 01:08:00.834401 containerd[1689]: time="2026-01-23T01:08:00.833935743Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 01:08:01.073980 containerd[1689]: time="2026-01-23T01:08:01.073944845Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:08:01.076560 containerd[1689]: time="2026-01-23T01:08:01.076525612Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" 
error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 01:08:01.076672 containerd[1689]: time="2026-01-23T01:08:01.076530921Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 01:08:01.076845 kubelet[3167]: E0123 01:08:01.076792 3167 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:08:01.076907 kubelet[3167]: E0123 01:08:01.076853 3167 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:08:01.076945 kubelet[3167]: E0123 01:08:01.076930 3167 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6cb5db5c6d-qkg5z_calico-apiserver(4dd5a4fb-a52c-4429-a2cb-1aa7fea80b6f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 01:08:01.077112 kubelet[3167]: E0123 01:08:01.076967 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5db5c6d-qkg5z" podUID="4dd5a4fb-a52c-4429-a2cb-1aa7fea80b6f" Jan 23 01:08:01.573816 containerd[1689]: time="2026-01-23T01:08:01.573765773Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 01:08:01.809722 containerd[1689]: time="2026-01-23T01:08:01.809672067Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:08:01.812037 containerd[1689]: time="2026-01-23T01:08:01.812013452Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 01:08:01.812101 containerd[1689]: time="2026-01-23T01:08:01.812065679Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 01:08:01.812200 kubelet[3167]: E0123 01:08:01.812172 3167 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:08:01.812266 kubelet[3167]: E0123 01:08:01.812205 3167 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:08:01.812354 kubelet[3167]: E0123 01:08:01.812337 3167 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5b9b5df79c-pfx6f_calico-apiserver(8938b6cd-2993-4177-a47a-bf7c96438cfc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 01:08:01.812390 kubelet[3167]: E0123 01:08:01.812369 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b9b5df79c-pfx6f" podUID="8938b6cd-2993-4177-a47a-bf7c96438cfc" Jan 23 01:08:01.812764 containerd[1689]: time="2026-01-23T01:08:01.812742451Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 01:08:02.053914 containerd[1689]: time="2026-01-23T01:08:02.053864861Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:08:02.056416 containerd[1689]: time="2026-01-23T01:08:02.056383316Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 01:08:02.056486 containerd[1689]: time="2026-01-23T01:08:02.056390975Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 
01:08:02.056606 kubelet[3167]: E0123 01:08:02.056576 3167 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 01:08:02.056857 kubelet[3167]: E0123 01:08:02.056611 3167 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 01:08:02.056857 kubelet[3167]: E0123 01:08:02.056686 3167 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-c5k7j_calico-system(757abb7b-5fcc-4c56-ba6f-f09ed789238a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 01:08:02.057680 containerd[1689]: time="2026-01-23T01:08:02.057640264Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 01:08:02.297123 containerd[1689]: time="2026-01-23T01:08:02.297076556Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:08:02.299565 containerd[1689]: time="2026-01-23T01:08:02.299534404Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 01:08:02.299612 containerd[1689]: time="2026-01-23T01:08:02.299587829Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 01:08:02.299725 kubelet[3167]: E0123 01:08:02.299681 3167 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 01:08:02.299725 kubelet[3167]: E0123 01:08:02.299718 3167 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 01:08:02.299803 kubelet[3167]: E0123 01:08:02.299784 3167 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-c5k7j_calico-system(757abb7b-5fcc-4c56-ba6f-f09ed789238a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 01:08:02.299886 kubelet[3167]: E0123 01:08:02.299825 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-c5k7j" podUID="757abb7b-5fcc-4c56-ba6f-f09ed789238a" Jan 23 01:08:04.088665 waagent[1878]: 2026-01-23T01:08:04.088622Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 2] Jan 23 01:08:04.096416 waagent[1878]: 2026-01-23T01:08:04.096372Z INFO ExtHandler Jan 23 01:08:04.096519 waagent[1878]: 2026-01-23T01:08:04.096471Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 9b1c3ca0-bb90-4e45-af10-363e8a879dd9 eTag: 4160900562602875968 source: Fabric] Jan 23 01:08:04.096758 waagent[1878]: 2026-01-23T01:08:04.096728Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jan 23 01:08:04.097168 waagent[1878]: 2026-01-23T01:08:04.097118Z INFO ExtHandler Jan 23 01:08:04.097229 waagent[1878]: 2026-01-23T01:08:04.097185Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 2] Jan 23 01:08:04.102673 waagent[1878]: 2026-01-23T01:08:04.102642Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 23 01:08:04.160969 waagent[1878]: 2026-01-23T01:08:04.160920Z INFO ExtHandler Downloaded certificate {'thumbprint': 'A6D2033A87649556DDA588F7BB91E40CE37D9388', 'hasPrivateKey': True} Jan 23 01:08:04.161345 waagent[1878]: 2026-01-23T01:08:04.161314Z INFO ExtHandler Fetch goal state completed Jan 23 01:08:04.161595 waagent[1878]: 2026-01-23T01:08:04.161571Z INFO ExtHandler ExtHandler Jan 23 01:08:04.161633 waagent[1878]: 2026-01-23T01:08:04.161619Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_2 channel: WireServer source: Fabric activity: 03574a97-36ca-4aad-b5a5-07793f08381a correlation 2b066935-e308-4336-901e-af1739723e29 created: 2026-01-23T01:07:57.656113Z] Jan 23 01:08:04.161817 waagent[1878]: 2026-01-23T01:08:04.161795Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Jan 23 01:08:04.162203 waagent[1878]: 2026-01-23T01:08:04.162179Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_2 0 ms] Jan 23 01:08:04.574435 kubelet[3167]: E0123 01:08:04.574358 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8f6bd7b4f-tth7c" podUID="2cc70e7d-4b4a-4947-a27a-3aa84d2bff8f" Jan 23 01:08:10.573572 kubelet[3167]: E0123 01:08:10.573262 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5db5b8969f-b7ffs" podUID="464a8745-942a-406e-a6f7-99a7e252e57c" Jan 23 01:08:11.575629 kubelet[3167]: E0123 01:08:11.575563 3167 pod_workers.go:1324] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5db5c6d-685h6" podUID="9a0795c2-7ecf-4504-8071-c68e46a2784c" Jan 23 01:08:12.573859 kubelet[3167]: E0123 01:08:12.573814 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tdsjl" podUID="c928b9b3-da34-4326-8f7b-130857d457b5" Jan 23 01:08:13.574624 kubelet[3167]: E0123 01:08:13.574502 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b9b5df79c-pfx6f" podUID="8938b6cd-2993-4177-a47a-bf7c96438cfc" Jan 23 01:08:14.574649 kubelet[3167]: E0123 01:08:14.574568 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-c5k7j" podUID="757abb7b-5fcc-4c56-ba6f-f09ed789238a" Jan 23 01:08:15.576285 kubelet[3167]: E0123 01:08:15.576169 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5db5c6d-qkg5z" podUID="4dd5a4fb-a52c-4429-a2cb-1aa7fea80b6f" Jan 23 01:08:15.578105 containerd[1689]: time="2026-01-23T01:08:15.576403582Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 01:08:15.828700 containerd[1689]: time="2026-01-23T01:08:15.828552969Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:08:15.832232 containerd[1689]: time="2026-01-23T01:08:15.832188752Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to 
pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 01:08:15.832337 containerd[1689]: time="2026-01-23T01:08:15.832273983Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 01:08:15.832506 kubelet[3167]: E0123 01:08:15.832443 3167 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 01:08:15.832555 kubelet[3167]: E0123 01:08:15.832517 3167 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 01:08:15.832602 kubelet[3167]: E0123 01:08:15.832587 3167 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-8f6bd7b4f-tth7c_calico-system(2cc70e7d-4b4a-4947-a27a-3aa84d2bff8f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 01:08:15.834335 containerd[1689]: time="2026-01-23T01:08:15.834287270Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 01:08:16.082350 containerd[1689]: time="2026-01-23T01:08:16.082094963Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 
01:08:16.084900 containerd[1689]: time="2026-01-23T01:08:16.084819073Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 01:08:16.084900 containerd[1689]: time="2026-01-23T01:08:16.084844751Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 01:08:16.085059 kubelet[3167]: E0123 01:08:16.085024 3167 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 01:08:16.085105 kubelet[3167]: E0123 01:08:16.085069 3167 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 01:08:16.085180 kubelet[3167]: E0123 01:08:16.085160 3167 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-8f6bd7b4f-tth7c_calico-system(2cc70e7d-4b4a-4947-a27a-3aa84d2bff8f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" 
logger="UnhandledError" Jan 23 01:08:16.085231 kubelet[3167]: E0123 01:08:16.085205 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8f6bd7b4f-tth7c" podUID="2cc70e7d-4b4a-4947-a27a-3aa84d2bff8f" Jan 23 01:08:23.575641 containerd[1689]: time="2026-01-23T01:08:23.575595086Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 01:08:23.822223 containerd[1689]: time="2026-01-23T01:08:23.822183497Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:08:23.824709 containerd[1689]: time="2026-01-23T01:08:23.824647868Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 01:08:23.824810 containerd[1689]: time="2026-01-23T01:08:23.824736652Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 01:08:23.824906 kubelet[3167]: E0123 01:08:23.824863 3167 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 01:08:23.825197 kubelet[3167]: E0123 01:08:23.824915 3167 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 01:08:23.825197 kubelet[3167]: E0123 01:08:23.825007 3167 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-tdsjl_calico-system(c928b9b3-da34-4326-8f7b-130857d457b5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 01:08:23.825197 kubelet[3167]: E0123 01:08:23.825153 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tdsjl" podUID="c928b9b3-da34-4326-8f7b-130857d457b5" Jan 23 01:08:25.576036 containerd[1689]: time="2026-01-23T01:08:25.575780047Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 01:08:25.856330 containerd[1689]: time="2026-01-23T01:08:25.856179168Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:08:25.859183 
containerd[1689]: time="2026-01-23T01:08:25.859150167Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 01:08:25.859251 containerd[1689]: time="2026-01-23T01:08:25.859215529Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 01:08:25.859410 kubelet[3167]: E0123 01:08:25.859340 3167 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 01:08:25.860056 kubelet[3167]: E0123 01:08:25.859418 3167 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 01:08:25.860056 kubelet[3167]: E0123 01:08:25.859715 3167 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-5db5b8969f-b7ffs_calico-system(464a8745-942a-406e-a6f7-99a7e252e57c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 01:08:25.860056 kubelet[3167]: E0123 01:08:25.859749 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5db5b8969f-b7ffs" podUID="464a8745-942a-406e-a6f7-99a7e252e57c" Jan 23 01:08:25.860243 containerd[1689]: time="2026-01-23T01:08:25.859652719Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 01:08:26.102100 containerd[1689]: time="2026-01-23T01:08:26.101827234Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:08:26.105139 containerd[1689]: time="2026-01-23T01:08:26.105096540Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 01:08:26.105324 containerd[1689]: time="2026-01-23T01:08:26.105152793Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 01:08:26.105455 kubelet[3167]: E0123 01:08:26.105427 3167 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 01:08:26.105500 kubelet[3167]: E0123 01:08:26.105467 3167 kuberuntime_image.go:43] 
"Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 01:08:26.105688 kubelet[3167]: E0123 01:08:26.105647 3167 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-c5k7j_calico-system(757abb7b-5fcc-4c56-ba6f-f09ed789238a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 01:08:26.105788 containerd[1689]: time="2026-01-23T01:08:26.105772764Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 01:08:26.352388 containerd[1689]: time="2026-01-23T01:08:26.352350450Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:08:26.354672 containerd[1689]: time="2026-01-23T01:08:26.354641031Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 01:08:26.354752 containerd[1689]: time="2026-01-23T01:08:26.354708563Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 01:08:26.356460 kubelet[3167]: E0123 01:08:26.356252 3167 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:08:26.356460 kubelet[3167]: E0123 01:08:26.356293 3167 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:08:26.357543 containerd[1689]: time="2026-01-23T01:08:26.356567661Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 01:08:26.357710 kubelet[3167]: E0123 01:08:26.357692 3167 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6cb5db5c6d-685h6_calico-apiserver(9a0795c2-7ecf-4504-8071-c68e46a2784c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 01:08:26.357795 kubelet[3167]: E0123 01:08:26.357780 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5db5c6d-685h6" podUID="9a0795c2-7ecf-4504-8071-c68e46a2784c" Jan 23 01:08:26.602157 containerd[1689]: time="2026-01-23T01:08:26.602007431Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:08:26.604710 containerd[1689]: time="2026-01-23T01:08:26.604562328Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 01:08:26.605228 containerd[1689]: time="2026-01-23T01:08:26.605086406Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 01:08:26.605783 kubelet[3167]: E0123 01:08:26.605454 3167 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:08:26.605783 kubelet[3167]: E0123 01:08:26.605489 3167 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:08:26.605783 kubelet[3167]: E0123 01:08:26.605639 3167 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5b9b5df79c-pfx6f_calico-apiserver(8938b6cd-2993-4177-a47a-bf7c96438cfc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 01:08:26.605783 kubelet[3167]: E0123 01:08:26.605673 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b9b5df79c-pfx6f" podUID="8938b6cd-2993-4177-a47a-bf7c96438cfc" Jan 23 01:08:26.606716 containerd[1689]: time="2026-01-23T01:08:26.606397730Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 01:08:26.865008 containerd[1689]: time="2026-01-23T01:08:26.864872237Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:08:26.867773 containerd[1689]: time="2026-01-23T01:08:26.867586124Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 01:08:26.867773 containerd[1689]: time="2026-01-23T01:08:26.867680114Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 01:08:26.868449 kubelet[3167]: E0123 01:08:26.868408 3167 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 01:08:26.868763 kubelet[3167]: E0123 01:08:26.868461 3167 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 01:08:26.868763 kubelet[3167]: E0123 01:08:26.868641 3167 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-c5k7j_calico-system(757abb7b-5fcc-4c56-ba6f-f09ed789238a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 01:08:26.868763 kubelet[3167]: E0123 01:08:26.868681 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-c5k7j" podUID="757abb7b-5fcc-4c56-ba6f-f09ed789238a" Jan 23 01:08:26.869174 containerd[1689]: time="2026-01-23T01:08:26.869150448Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 01:08:27.116061 containerd[1689]: time="2026-01-23T01:08:27.115646622Z" level=info msg="fetch failed after status: 404 Not 
Found" host=ghcr.io Jan 23 01:08:27.118463 containerd[1689]: time="2026-01-23T01:08:27.118400268Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 01:08:27.118463 containerd[1689]: time="2026-01-23T01:08:27.118446250Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 01:08:27.118636 kubelet[3167]: E0123 01:08:27.118599 3167 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:08:27.118677 kubelet[3167]: E0123 01:08:27.118648 3167 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:08:27.118739 kubelet[3167]: E0123 01:08:27.118722 3167 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6cb5db5c6d-qkg5z_calico-apiserver(4dd5a4fb-a52c-4429-a2cb-1aa7fea80b6f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 01:08:27.118791 kubelet[3167]: 
E0123 01:08:27.118760 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5db5c6d-qkg5z" podUID="4dd5a4fb-a52c-4429-a2cb-1aa7fea80b6f" Jan 23 01:08:27.577529 kubelet[3167]: E0123 01:08:27.577418 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8f6bd7b4f-tth7c" podUID="2cc70e7d-4b4a-4947-a27a-3aa84d2bff8f" Jan 23 01:08:36.574921 kubelet[3167]: E0123 01:08:36.574862 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tdsjl" podUID="c928b9b3-da34-4326-8f7b-130857d457b5" Jan 23 01:08:37.576149 kubelet[3167]: E0123 01:08:37.576060 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5db5c6d-685h6" podUID="9a0795c2-7ecf-4504-8071-c68e46a2784c" Jan 23 01:08:37.576988 kubelet[3167]: E0123 01:08:37.576689 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5db5b8969f-b7ffs" podUID="464a8745-942a-406e-a6f7-99a7e252e57c" Jan 23 01:08:37.576988 kubelet[3167]: E0123 01:08:37.576691 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-c5k7j" podUID="757abb7b-5fcc-4c56-ba6f-f09ed789238a" Jan 23 01:08:38.574175 kubelet[3167]: E0123 01:08:38.574096 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5db5c6d-qkg5z" podUID="4dd5a4fb-a52c-4429-a2cb-1aa7fea80b6f" Jan 23 01:08:40.573584 kubelet[3167]: E0123 01:08:40.573529 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b9b5df79c-pfx6f" podUID="8938b6cd-2993-4177-a47a-bf7c96438cfc" Jan 23 01:08:41.577310 kubelet[3167]: E0123 01:08:41.577222 3167 pod_workers.go:1324] "Error syncing pod, 
skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8f6bd7b4f-tth7c" podUID="2cc70e7d-4b4a-4947-a27a-3aa84d2bff8f" Jan 23 01:08:47.577785 kubelet[3167]: E0123 01:08:47.576341 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tdsjl" podUID="c928b9b3-da34-4326-8f7b-130857d457b5" Jan 23 01:08:48.573967 kubelet[3167]: E0123 01:08:48.573919 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5db5c6d-685h6" podUID="9a0795c2-7ecf-4504-8071-c68e46a2784c" Jan 23 01:08:48.575152 kubelet[3167]: E0123 01:08:48.574321 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5db5b8969f-b7ffs" podUID="464a8745-942a-406e-a6f7-99a7e252e57c" Jan 23 01:08:49.575503 kubelet[3167]: E0123 01:08:49.574968 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5db5c6d-qkg5z" podUID="4dd5a4fb-a52c-4429-a2cb-1aa7fea80b6f" Jan 23 01:08:50.575835 kubelet[3167]: E0123 01:08:50.575111 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-c5k7j" podUID="757abb7b-5fcc-4c56-ba6f-f09ed789238a" Jan 23 01:08:54.573428 kubelet[3167]: E0123 01:08:54.573381 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b9b5df79c-pfx6f" podUID="8938b6cd-2993-4177-a47a-bf7c96438cfc" Jan 23 01:08:56.574439 containerd[1689]: time="2026-01-23T01:08:56.574395720Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 01:08:56.870959 containerd[1689]: time="2026-01-23T01:08:56.870846564Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:08:56.873378 containerd[1689]: time="2026-01-23T01:08:56.873340941Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 01:08:56.873471 containerd[1689]: 
time="2026-01-23T01:08:56.873353197Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 01:08:56.873603 kubelet[3167]: E0123 01:08:56.873540 3167 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 01:08:56.873857 kubelet[3167]: E0123 01:08:56.873611 3167 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 01:08:56.874146 kubelet[3167]: E0123 01:08:56.873703 3167 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-8f6bd7b4f-tth7c_calico-system(2cc70e7d-4b4a-4947-a27a-3aa84d2bff8f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 01:08:56.875005 containerd[1689]: time="2026-01-23T01:08:56.874936824Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 01:08:57.122864 containerd[1689]: time="2026-01-23T01:08:57.122502900Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:08:57.125050 containerd[1689]: time="2026-01-23T01:08:57.124998793Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 01:08:57.125302 containerd[1689]: time="2026-01-23T01:08:57.125189420Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 01:08:57.126211 kubelet[3167]: E0123 01:08:57.125444 3167 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 01:08:57.126211 kubelet[3167]: E0123 01:08:57.125487 3167 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 01:08:57.126211 kubelet[3167]: E0123 01:08:57.125562 3167 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-8f6bd7b4f-tth7c_calico-system(2cc70e7d-4b4a-4947-a27a-3aa84d2bff8f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 01:08:57.126336 kubelet[3167]: E0123 01:08:57.125600 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound 
desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8f6bd7b4f-tth7c" podUID="2cc70e7d-4b4a-4947-a27a-3aa84d2bff8f" Jan 23 01:09:00.574027 kubelet[3167]: E0123 01:09:00.573644 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5db5b8969f-b7ffs" podUID="464a8745-942a-406e-a6f7-99a7e252e57c" Jan 23 01:09:01.579796 kubelet[3167]: E0123 01:09:01.578745 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5db5c6d-685h6" podUID="9a0795c2-7ecf-4504-8071-c68e46a2784c" Jan 23 01:09:01.583870 
kubelet[3167]: E0123 01:09:01.583832 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tdsjl" podUID="c928b9b3-da34-4326-8f7b-130857d457b5" Jan 23 01:09:03.578206 kubelet[3167]: E0123 01:09:03.577260 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5db5c6d-qkg5z" podUID="4dd5a4fb-a52c-4429-a2cb-1aa7fea80b6f" Jan 23 01:09:04.574650 kubelet[3167]: E0123 01:09:04.574597 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-c5k7j" podUID="757abb7b-5fcc-4c56-ba6f-f09ed789238a" Jan 23 01:09:07.575789 containerd[1689]: time="2026-01-23T01:09:07.575471348Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 01:09:07.576506 kubelet[3167]: E0123 01:09:07.576426 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8f6bd7b4f-tth7c" podUID="2cc70e7d-4b4a-4947-a27a-3aa84d2bff8f" Jan 23 01:09:07.819552 containerd[1689]: time="2026-01-23T01:09:07.819424546Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:09:08.176184 containerd[1689]: time="2026-01-23T01:09:08.176100946Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 01:09:08.177272 containerd[1689]: time="2026-01-23T01:09:08.176172487Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 01:09:08.177443 kubelet[3167]: E0123 01:09:08.177415 3167 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:09:08.177911 kubelet[3167]: E0123 01:09:08.177531 3167 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:09:08.177911 kubelet[3167]: E0123 01:09:08.177601 3167 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5b9b5df79c-pfx6f_calico-apiserver(8938b6cd-2993-4177-a47a-bf7c96438cfc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 01:09:08.177911 kubelet[3167]: E0123 01:09:08.177634 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b9b5df79c-pfx6f" podUID="8938b6cd-2993-4177-a47a-bf7c96438cfc" Jan 23 01:09:13.575900 containerd[1689]: time="2026-01-23T01:09:13.575824128Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 01:09:13.836754 containerd[1689]: time="2026-01-23T01:09:13.836624785Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:09:13.839504 containerd[1689]: time="2026-01-23T01:09:13.839442519Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 01:09:13.839504 containerd[1689]: time="2026-01-23T01:09:13.839478076Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 01:09:13.839644 kubelet[3167]: E0123 01:09:13.839606 3167 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:09:13.839881 kubelet[3167]: E0123 01:09:13.839641 3167 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:09:13.839881 kubelet[3167]: E0123 01:09:13.839708 3167 
kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6cb5db5c6d-685h6_calico-apiserver(9a0795c2-7ecf-4504-8071-c68e46a2784c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 01:09:13.839881 kubelet[3167]: E0123 01:09:13.839740 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5db5c6d-685h6" podUID="9a0795c2-7ecf-4504-8071-c68e46a2784c" Jan 23 01:09:15.574982 containerd[1689]: time="2026-01-23T01:09:15.574879931Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 01:09:15.832153 containerd[1689]: time="2026-01-23T01:09:15.831922830Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:09:15.834269 containerd[1689]: time="2026-01-23T01:09:15.834235842Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 01:09:15.834349 containerd[1689]: time="2026-01-23T01:09:15.834300446Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 01:09:15.834475 kubelet[3167]: E0123 01:09:15.834423 3167 log.go:32] "PullImage from 
image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 01:09:15.834729 kubelet[3167]: E0123 01:09:15.834484 3167 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 01:09:15.834729 kubelet[3167]: E0123 01:09:15.834662 3167 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-tdsjl_calico-system(c928b9b3-da34-4326-8f7b-130857d457b5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 01:09:15.834729 kubelet[3167]: E0123 01:09:15.834693 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tdsjl" podUID="c928b9b3-da34-4326-8f7b-130857d457b5" Jan 23 01:09:15.835221 containerd[1689]: time="2026-01-23T01:09:15.834923683Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 01:09:16.085736 containerd[1689]: time="2026-01-23T01:09:16.085625911Z" level=info 
msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:09:16.088201 containerd[1689]: time="2026-01-23T01:09:16.088168306Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 01:09:16.088269 containerd[1689]: time="2026-01-23T01:09:16.088243235Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 01:09:16.088394 kubelet[3167]: E0123 01:09:16.088350 3167 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 01:09:16.088434 kubelet[3167]: E0123 01:09:16.088388 3167 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 01:09:16.088484 kubelet[3167]: E0123 01:09:16.088460 3167 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-5db5b8969f-b7ffs_calico-system(464a8745-942a-406e-a6f7-99a7e252e57c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 01:09:16.088570 kubelet[3167]: E0123 01:09:16.088497 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5db5b8969f-b7ffs" podUID="464a8745-942a-406e-a6f7-99a7e252e57c" Jan 23 01:09:18.573729 containerd[1689]: time="2026-01-23T01:09:18.573505944Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 01:09:18.826058 containerd[1689]: time="2026-01-23T01:09:18.825948535Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:09:18.828643 containerd[1689]: time="2026-01-23T01:09:18.828612264Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 01:09:18.828713 containerd[1689]: time="2026-01-23T01:09:18.828686923Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 01:09:18.828870 kubelet[3167]: E0123 01:09:18.828836 3167 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:09:18.829170 kubelet[3167]: E0123 01:09:18.828879 3167 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:09:18.829170 kubelet[3167]: E0123 01:09:18.828953 3167 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6cb5db5c6d-qkg5z_calico-apiserver(4dd5a4fb-a52c-4429-a2cb-1aa7fea80b6f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 01:09:18.829170 kubelet[3167]: E0123 01:09:18.828987 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5db5c6d-qkg5z" podUID="4dd5a4fb-a52c-4429-a2cb-1aa7fea80b6f" Jan 23 01:09:19.418657 systemd[1]: Started sshd@7-10.200.8.21:22-10.200.16.10:38742.service - OpenSSH per-connection server daemon (10.200.16.10:38742). 
Jan 23 01:09:19.575550 containerd[1689]: time="2026-01-23T01:09:19.575250550Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 01:09:19.820037 containerd[1689]: time="2026-01-23T01:09:19.819906004Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:09:19.822814 containerd[1689]: time="2026-01-23T01:09:19.822716846Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 01:09:19.822814 containerd[1689]: time="2026-01-23T01:09:19.822793893Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 01:09:19.823142 kubelet[3167]: E0123 01:09:19.823051 3167 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 01:09:19.823594 kubelet[3167]: E0123 01:09:19.823107 3167 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 01:09:19.823594 kubelet[3167]: E0123 01:09:19.823278 3167 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-c5k7j_calico-system(757abb7b-5fcc-4c56-ba6f-f09ed789238a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 01:09:19.824499 containerd[1689]: time="2026-01-23T01:09:19.824480303Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 01:09:20.054963 containerd[1689]: time="2026-01-23T01:09:20.054931436Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:09:20.057480 containerd[1689]: time="2026-01-23T01:09:20.057441836Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 01:09:20.057584 containerd[1689]: time="2026-01-23T01:09:20.057515542Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 01:09:20.058017 kubelet[3167]: E0123 01:09:20.057977 3167 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 01:09:20.059075 kubelet[3167]: E0123 01:09:20.058335 3167 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" 
Jan 23 01:09:20.059075 kubelet[3167]: E0123 01:09:20.058441 3167 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-c5k7j_calico-system(757abb7b-5fcc-4c56-ba6f-f09ed789238a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 01:09:20.059313 kubelet[3167]: E0123 01:09:20.059281 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-c5k7j" podUID="757abb7b-5fcc-4c56-ba6f-f09ed789238a" Jan 23 01:09:20.120283 sshd[5562]: Accepted publickey for core from 10.200.16.10 port 38742 ssh2: RSA SHA256:ZNqmZSbvXn2N0HVppthi7f6PjojzkUQ0aRQFpEvxwIk Jan 23 01:09:20.121620 sshd-session[5562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:09:20.125772 systemd-logind[1676]: New session 10 of user core. Jan 23 01:09:20.131282 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 23 01:09:20.574695 kubelet[3167]: E0123 01:09:20.574060 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8f6bd7b4f-tth7c" podUID="2cc70e7d-4b4a-4947-a27a-3aa84d2bff8f" Jan 23 01:09:20.659907 sshd[5565]: Connection closed by 10.200.16.10 port 38742 Jan 23 01:09:20.660448 sshd-session[5562]: pam_unix(sshd:session): session closed for user core Jan 23 01:09:20.668656 systemd-logind[1676]: Session 10 logged out. Waiting for processes to exit. Jan 23 01:09:20.669397 systemd[1]: sshd@7-10.200.8.21:22-10.200.16.10:38742.service: Deactivated successfully. Jan 23 01:09:20.672666 systemd[1]: session-10.scope: Deactivated successfully. Jan 23 01:09:20.676537 systemd-logind[1676]: Removed session 10. 
Jan 23 01:09:22.573977 kubelet[3167]: E0123 01:09:22.573932 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b9b5df79c-pfx6f" podUID="8938b6cd-2993-4177-a47a-bf7c96438cfc" Jan 23 01:09:25.782766 systemd[1]: Started sshd@8-10.200.8.21:22-10.200.16.10:60686.service - OpenSSH per-connection server daemon (10.200.16.10:60686). Jan 23 01:09:26.465985 sshd[5584]: Accepted publickey for core from 10.200.16.10 port 60686 ssh2: RSA SHA256:ZNqmZSbvXn2N0HVppthi7f6PjojzkUQ0aRQFpEvxwIk Jan 23 01:09:26.467100 sshd-session[5584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:09:26.470723 systemd-logind[1676]: New session 11 of user core. Jan 23 01:09:26.479255 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 23 01:09:26.574379 kubelet[3167]: E0123 01:09:26.574316 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5db5c6d-685h6" podUID="9a0795c2-7ecf-4504-8071-c68e46a2784c" Jan 23 01:09:27.031527 sshd[5587]: Connection closed by 10.200.16.10 port 60686 Jan 23 01:09:27.032193 sshd-session[5584]: pam_unix(sshd:session): session closed for user core Jan 23 01:09:27.035755 systemd[1]: sshd@8-10.200.8.21:22-10.200.16.10:60686.service: Deactivated successfully. Jan 23 01:09:27.037489 systemd[1]: session-11.scope: Deactivated successfully. Jan 23 01:09:27.038637 systemd-logind[1676]: Session 11 logged out. Waiting for processes to exit. Jan 23 01:09:27.039380 systemd-logind[1676]: Removed session 11. 
Jan 23 01:09:28.574026 kubelet[3167]: E0123 01:09:28.573984 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5db5b8969f-b7ffs" podUID="464a8745-942a-406e-a6f7-99a7e252e57c" Jan 23 01:09:31.574659 kubelet[3167]: E0123 01:09:31.574326 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5db5c6d-qkg5z" podUID="4dd5a4fb-a52c-4429-a2cb-1aa7fea80b6f" Jan 23 01:09:31.575711 kubelet[3167]: E0123 01:09:31.574709 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-c5k7j" podUID="757abb7b-5fcc-4c56-ba6f-f09ed789238a" Jan 23 01:09:31.576017 kubelet[3167]: E0123 01:09:31.575939 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tdsjl" podUID="c928b9b3-da34-4326-8f7b-130857d457b5" Jan 23 01:09:32.152415 systemd[1]: Started sshd@9-10.200.8.21:22-10.200.16.10:47608.service - OpenSSH per-connection server daemon (10.200.16.10:47608). 
Jan 23 01:09:32.574230 kubelet[3167]: E0123 01:09:32.574184 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8f6bd7b4f-tth7c" podUID="2cc70e7d-4b4a-4947-a27a-3aa84d2bff8f" Jan 23 01:09:32.840566 sshd[5599]: Accepted publickey for core from 10.200.16.10 port 47608 ssh2: RSA SHA256:ZNqmZSbvXn2N0HVppthi7f6PjojzkUQ0aRQFpEvxwIk Jan 23 01:09:32.841463 sshd-session[5599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:09:32.848190 systemd-logind[1676]: New session 12 of user core. Jan 23 01:09:32.854311 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 23 01:09:33.360378 sshd[5602]: Connection closed by 10.200.16.10 port 47608 Jan 23 01:09:33.360974 sshd-session[5599]: pam_unix(sshd:session): session closed for user core Jan 23 01:09:33.364119 systemd[1]: sshd@9-10.200.8.21:22-10.200.16.10:47608.service: Deactivated successfully. Jan 23 01:09:33.366060 systemd[1]: session-12.scope: Deactivated successfully. Jan 23 01:09:33.367167 systemd-logind[1676]: Session 12 logged out. Waiting for processes to exit. 
Jan 23 01:09:33.368050 systemd-logind[1676]: Removed session 12. Jan 23 01:09:33.484108 systemd[1]: Started sshd@10-10.200.8.21:22-10.200.16.10:47618.service - OpenSSH per-connection server daemon (10.200.16.10:47618). Jan 23 01:09:34.171571 sshd[5615]: Accepted publickey for core from 10.200.16.10 port 47618 ssh2: RSA SHA256:ZNqmZSbvXn2N0HVppthi7f6PjojzkUQ0aRQFpEvxwIk Jan 23 01:09:34.172819 sshd-session[5615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:09:34.177191 systemd-logind[1676]: New session 13 of user core. Jan 23 01:09:34.182419 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 23 01:09:34.717361 sshd[5618]: Connection closed by 10.200.16.10 port 47618 Jan 23 01:09:34.720143 sshd-session[5615]: pam_unix(sshd:session): session closed for user core Jan 23 01:09:34.723075 systemd[1]: sshd@10-10.200.8.21:22-10.200.16.10:47618.service: Deactivated successfully. Jan 23 01:09:34.724825 systemd[1]: session-13.scope: Deactivated successfully. Jan 23 01:09:34.726281 systemd-logind[1676]: Session 13 logged out. Waiting for processes to exit. Jan 23 01:09:34.727102 systemd-logind[1676]: Removed session 13. Jan 23 01:09:34.837687 systemd[1]: Started sshd@11-10.200.8.21:22-10.200.16.10:47634.service - OpenSSH per-connection server daemon (10.200.16.10:47634). Jan 23 01:09:35.513749 sshd[5628]: Accepted publickey for core from 10.200.16.10 port 47634 ssh2: RSA SHA256:ZNqmZSbvXn2N0HVppthi7f6PjojzkUQ0aRQFpEvxwIk Jan 23 01:09:35.515090 sshd-session[5628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:09:35.519363 systemd-logind[1676]: New session 14 of user core. Jan 23 01:09:35.526278 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 23 01:09:36.051909 sshd[5636]: Connection closed by 10.200.16.10 port 47634 Jan 23 01:09:36.053281 sshd-session[5628]: pam_unix(sshd:session): session closed for user core Jan 23 01:09:36.056241 systemd[1]: sshd@11-10.200.8.21:22-10.200.16.10:47634.service: Deactivated successfully. Jan 23 01:09:36.058263 systemd[1]: session-14.scope: Deactivated successfully. Jan 23 01:09:36.059116 systemd-logind[1676]: Session 14 logged out. Waiting for processes to exit. Jan 23 01:09:36.060061 systemd-logind[1676]: Removed session 14. Jan 23 01:09:37.574147 kubelet[3167]: E0123 01:09:37.573977 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b9b5df79c-pfx6f" podUID="8938b6cd-2993-4177-a47a-bf7c96438cfc" Jan 23 01:09:39.574361 kubelet[3167]: E0123 01:09:39.574251 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5db5b8969f-b7ffs" podUID="464a8745-942a-406e-a6f7-99a7e252e57c" Jan 23 01:09:40.574585 kubelet[3167]: E0123 01:09:40.574509 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5db5c6d-685h6" podUID="9a0795c2-7ecf-4504-8071-c68e46a2784c" Jan 23 01:09:41.177300 systemd[1]: Started sshd@12-10.200.8.21:22-10.200.16.10:33764.service - OpenSSH per-connection server daemon (10.200.16.10:33764). Jan 23 01:09:41.853628 sshd[5672]: Accepted publickey for core from 10.200.16.10 port 33764 ssh2: RSA SHA256:ZNqmZSbvXn2N0HVppthi7f6PjojzkUQ0aRQFpEvxwIk Jan 23 01:09:41.854618 sshd-session[5672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:09:41.860361 systemd-logind[1676]: New session 15 of user core. Jan 23 01:09:41.864283 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 23 01:09:42.399759 sshd[5675]: Connection closed by 10.200.16.10 port 33764 Jan 23 01:09:42.400310 sshd-session[5672]: pam_unix(sshd:session): session closed for user core Jan 23 01:09:42.403278 systemd[1]: sshd@12-10.200.8.21:22-10.200.16.10:33764.service: Deactivated successfully. Jan 23 01:09:42.404983 systemd[1]: session-15.scope: Deactivated successfully. Jan 23 01:09:42.405724 systemd-logind[1676]: Session 15 logged out. Waiting for processes to exit. Jan 23 01:09:42.406850 systemd-logind[1676]: Removed session 15. 
Jan 23 01:09:44.574271 kubelet[3167]: E0123 01:09:44.573964 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5db5c6d-qkg5z" podUID="4dd5a4fb-a52c-4429-a2cb-1aa7fea80b6f" Jan 23 01:09:44.575543 kubelet[3167]: E0123 01:09:44.575470 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8f6bd7b4f-tth7c" podUID="2cc70e7d-4b4a-4947-a27a-3aa84d2bff8f" Jan 23 01:09:45.576420 kubelet[3167]: E0123 01:09:45.575992 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tdsjl" podUID="c928b9b3-da34-4326-8f7b-130857d457b5" Jan 23 01:09:46.575225 kubelet[3167]: E0123 01:09:46.575177 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-c5k7j" podUID="757abb7b-5fcc-4c56-ba6f-f09ed789238a" Jan 23 01:09:47.518289 systemd[1]: Started sshd@13-10.200.8.21:22-10.200.16.10:33778.service - OpenSSH per-connection server daemon (10.200.16.10:33778). Jan 23 01:09:48.193242 sshd[5688]: Accepted publickey for core from 10.200.16.10 port 33778 ssh2: RSA SHA256:ZNqmZSbvXn2N0HVppthi7f6PjojzkUQ0aRQFpEvxwIk Jan 23 01:09:48.194955 sshd-session[5688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:09:48.201664 systemd-logind[1676]: New session 16 of user core. Jan 23 01:09:48.207288 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 23 01:09:48.732540 sshd[5691]: Connection closed by 10.200.16.10 port 33778 Jan 23 01:09:48.733853 sshd-session[5688]: pam_unix(sshd:session): session closed for user core Jan 23 01:09:48.737385 systemd[1]: sshd@13-10.200.8.21:22-10.200.16.10:33778.service: Deactivated successfully. Jan 23 01:09:48.739877 systemd[1]: session-16.scope: Deactivated successfully. Jan 23 01:09:48.741631 systemd-logind[1676]: Session 16 logged out. Waiting for processes to exit. Jan 23 01:09:48.742505 systemd-logind[1676]: Removed session 16. Jan 23 01:09:49.575320 kubelet[3167]: E0123 01:09:49.575269 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b9b5df79c-pfx6f" podUID="8938b6cd-2993-4177-a47a-bf7c96438cfc" Jan 23 01:09:52.574899 kubelet[3167]: E0123 01:09:52.574588 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5db5b8969f-b7ffs" podUID="464a8745-942a-406e-a6f7-99a7e252e57c" Jan 23 01:09:53.853939 systemd[1]: Started sshd@14-10.200.8.21:22-10.200.16.10:56210.service - OpenSSH per-connection server daemon 
(10.200.16.10:56210). Jan 23 01:09:54.537513 sshd[5704]: Accepted publickey for core from 10.200.16.10 port 56210 ssh2: RSA SHA256:ZNqmZSbvXn2N0HVppthi7f6PjojzkUQ0aRQFpEvxwIk Jan 23 01:09:54.538518 sshd-session[5704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:09:54.542178 systemd-logind[1676]: New session 17 of user core. Jan 23 01:09:54.549516 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 23 01:09:55.087918 sshd[5707]: Connection closed by 10.200.16.10 port 56210 Jan 23 01:09:55.090148 sshd-session[5704]: pam_unix(sshd:session): session closed for user core Jan 23 01:09:55.095755 systemd-logind[1676]: Session 17 logged out. Waiting for processes to exit. Jan 23 01:09:55.096923 systemd[1]: sshd@14-10.200.8.21:22-10.200.16.10:56210.service: Deactivated successfully. Jan 23 01:09:55.099071 systemd[1]: session-17.scope: Deactivated successfully. Jan 23 01:09:55.102248 systemd-logind[1676]: Removed session 17. Jan 23 01:09:55.206777 systemd[1]: Started sshd@15-10.200.8.21:22-10.200.16.10:56222.service - OpenSSH per-connection server daemon (10.200.16.10:56222). 
Jan 23 01:09:55.575144 kubelet[3167]: E0123 01:09:55.575094 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5db5c6d-685h6" podUID="9a0795c2-7ecf-4504-8071-c68e46a2784c" Jan 23 01:09:55.897058 sshd[5719]: Accepted publickey for core from 10.200.16.10 port 56222 ssh2: RSA SHA256:ZNqmZSbvXn2N0HVppthi7f6PjojzkUQ0aRQFpEvxwIk Jan 23 01:09:55.898978 sshd-session[5719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:09:55.903963 systemd-logind[1676]: New session 18 of user core. Jan 23 01:09:55.911310 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 23 01:09:56.478153 sshd[5722]: Connection closed by 10.200.16.10 port 56222 Jan 23 01:09:56.478904 sshd-session[5719]: pam_unix(sshd:session): session closed for user core Jan 23 01:09:56.484095 systemd-logind[1676]: Session 18 logged out. Waiting for processes to exit. Jan 23 01:09:56.484629 systemd[1]: sshd@15-10.200.8.21:22-10.200.16.10:56222.service: Deactivated successfully. Jan 23 01:09:56.488962 systemd[1]: session-18.scope: Deactivated successfully. Jan 23 01:09:56.492350 systemd-logind[1676]: Removed session 18. Jan 23 01:09:56.605242 systemd[1]: Started sshd@16-10.200.8.21:22-10.200.16.10:56224.service - OpenSSH per-connection server daemon (10.200.16.10:56224). 
Jan 23 01:09:57.288079 sshd[5732]: Accepted publickey for core from 10.200.16.10 port 56224 ssh2: RSA SHA256:ZNqmZSbvXn2N0HVppthi7f6PjojzkUQ0aRQFpEvxwIk Jan 23 01:09:57.288967 sshd-session[5732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:09:57.294537 systemd-logind[1676]: New session 19 of user core. Jan 23 01:09:57.301292 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 23 01:09:58.573929 kubelet[3167]: E0123 01:09:58.573726 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5db5c6d-qkg5z" podUID="4dd5a4fb-a52c-4429-a2cb-1aa7fea80b6f" Jan 23 01:09:58.573929 kubelet[3167]: E0123 01:09:58.573784 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tdsjl" podUID="c928b9b3-da34-4326-8f7b-130857d457b5" Jan 23 01:09:58.582580 sshd[5735]: Connection closed by 10.200.16.10 port 56224 Jan 23 01:09:58.582876 sshd-session[5732]: pam_unix(sshd:session): session closed for user core Jan 23 01:09:58.587729 systemd[1]: sshd@16-10.200.8.21:22-10.200.16.10:56224.service: Deactivated successfully. 
Jan 23 01:09:58.589509 systemd[1]: session-19.scope: Deactivated successfully. Jan 23 01:09:58.590366 systemd-logind[1676]: Session 19 logged out. Waiting for processes to exit. Jan 23 01:09:58.591449 systemd-logind[1676]: Removed session 19. Jan 23 01:09:58.707059 systemd[1]: Started sshd@17-10.200.8.21:22-10.200.16.10:56226.service - OpenSSH per-connection server daemon (10.200.16.10:56226). Jan 23 01:09:59.400154 sshd[5749]: Accepted publickey for core from 10.200.16.10 port 56226 ssh2: RSA SHA256:ZNqmZSbvXn2N0HVppthi7f6PjojzkUQ0aRQFpEvxwIk Jan 23 01:09:59.401367 sshd-session[5749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:09:59.409248 systemd-logind[1676]: New session 20 of user core. Jan 23 01:09:59.414284 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 23 01:09:59.579777 kubelet[3167]: E0123 01:09:59.579735 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8f6bd7b4f-tth7c" podUID="2cc70e7d-4b4a-4947-a27a-3aa84d2bff8f" Jan 23 01:10:00.062541 sshd[5752]: Connection closed by 10.200.16.10 port 56226 Jan 23 01:10:00.063304 
sshd-session[5749]: pam_unix(sshd:session): session closed for user core Jan 23 01:10:00.066154 systemd[1]: sshd@17-10.200.8.21:22-10.200.16.10:56226.service: Deactivated successfully. Jan 23 01:10:00.067945 systemd[1]: session-20.scope: Deactivated successfully. Jan 23 01:10:00.069980 systemd-logind[1676]: Session 20 logged out. Waiting for processes to exit. Jan 23 01:10:00.070865 systemd-logind[1676]: Removed session 20. Jan 23 01:10:00.183922 systemd[1]: Started sshd@18-10.200.8.21:22-10.200.16.10:48704.service - OpenSSH per-connection server daemon (10.200.16.10:48704). Jan 23 01:10:00.574632 kubelet[3167]: E0123 01:10:00.574264 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-c5k7j" podUID="757abb7b-5fcc-4c56-ba6f-f09ed789238a" Jan 23 01:10:00.867953 sshd[5764]: Accepted publickey for core from 10.200.16.10 port 48704 ssh2: RSA SHA256:ZNqmZSbvXn2N0HVppthi7f6PjojzkUQ0aRQFpEvxwIk Jan 23 01:10:00.869176 sshd-session[5764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:10:00.873188 systemd-logind[1676]: New session 21 of user core. 
Jan 23 01:10:00.878274 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 23 01:10:01.469735 sshd[5767]: Connection closed by 10.200.16.10 port 48704 Jan 23 01:10:01.470322 sshd-session[5764]: pam_unix(sshd:session): session closed for user core Jan 23 01:10:01.475536 systemd[1]: sshd@18-10.200.8.21:22-10.200.16.10:48704.service: Deactivated successfully. Jan 23 01:10:01.475985 systemd-logind[1676]: Session 21 logged out. Waiting for processes to exit. Jan 23 01:10:01.478988 systemd[1]: session-21.scope: Deactivated successfully. Jan 23 01:10:01.482795 systemd-logind[1676]: Removed session 21. Jan 23 01:10:03.574228 kubelet[3167]: E0123 01:10:03.574174 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b9b5df79c-pfx6f" podUID="8938b6cd-2993-4177-a47a-bf7c96438cfc" Jan 23 01:10:06.591327 systemd[1]: Started sshd@19-10.200.8.21:22-10.200.16.10:48718.service - OpenSSH per-connection server daemon (10.200.16.10:48718). Jan 23 01:10:07.271372 sshd[5783]: Accepted publickey for core from 10.200.16.10 port 48718 ssh2: RSA SHA256:ZNqmZSbvXn2N0HVppthi7f6PjojzkUQ0aRQFpEvxwIk Jan 23 01:10:07.272571 sshd-session[5783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:10:07.277095 systemd-logind[1676]: New session 22 of user core. Jan 23 01:10:07.280283 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 23 01:10:07.574544 kubelet[3167]: E0123 01:10:07.574329 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5db5c6d-685h6" podUID="9a0795c2-7ecf-4504-8071-c68e46a2784c" Jan 23 01:10:07.576742 kubelet[3167]: E0123 01:10:07.576710 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5db5b8969f-b7ffs" podUID="464a8745-942a-406e-a6f7-99a7e252e57c" Jan 23 01:10:07.807868 sshd[5786]: Connection closed by 10.200.16.10 port 48718 Jan 23 01:10:07.811297 sshd-session[5783]: pam_unix(sshd:session): session closed for user core Jan 23 01:10:07.814439 systemd-logind[1676]: Session 22 logged out. Waiting for processes to exit. Jan 23 01:10:07.816534 systemd[1]: sshd@19-10.200.8.21:22-10.200.16.10:48718.service: Deactivated successfully. Jan 23 01:10:07.820385 systemd[1]: session-22.scope: Deactivated successfully. Jan 23 01:10:07.825870 systemd-logind[1676]: Removed session 22. 
Jan 23 01:10:10.573621 kubelet[3167]: E0123 01:10:10.573570 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tdsjl" podUID="c928b9b3-da34-4326-8f7b-130857d457b5" Jan 23 01:10:11.573969 kubelet[3167]: E0123 01:10:11.573925 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5db5c6d-qkg5z" podUID="4dd5a4fb-a52c-4429-a2cb-1aa7fea80b6f" Jan 23 01:10:12.573842 kubelet[3167]: E0123 01:10:12.573766 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull 
and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-c5k7j" podUID="757abb7b-5fcc-4c56-ba6f-f09ed789238a" Jan 23 01:10:12.926916 systemd[1]: Started sshd@20-10.200.8.21:22-10.200.16.10:37744.service - OpenSSH per-connection server daemon (10.200.16.10:37744). Jan 23 01:10:13.574888 kubelet[3167]: E0123 01:10:13.574805 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8f6bd7b4f-tth7c" podUID="2cc70e7d-4b4a-4947-a27a-3aa84d2bff8f" Jan 23 01:10:13.605440 sshd[5822]: Accepted publickey for core from 10.200.16.10 port 37744 ssh2: RSA SHA256:ZNqmZSbvXn2N0HVppthi7f6PjojzkUQ0aRQFpEvxwIk Jan 23 01:10:13.605600 sshd-session[5822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:10:13.610466 systemd-logind[1676]: New session 23 of user core. Jan 23 01:10:13.619267 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 23 01:10:14.144452 sshd[5825]: Connection closed by 10.200.16.10 port 37744 Jan 23 01:10:14.146319 sshd-session[5822]: pam_unix(sshd:session): session closed for user core Jan 23 01:10:14.151461 systemd[1]: sshd@20-10.200.8.21:22-10.200.16.10:37744.service: Deactivated successfully. Jan 23 01:10:14.154887 systemd[1]: session-23.scope: Deactivated successfully. Jan 23 01:10:14.157072 systemd-logind[1676]: Session 23 logged out. Waiting for processes to exit. Jan 23 01:10:14.159779 systemd-logind[1676]: Removed session 23. Jan 23 01:10:15.574801 kubelet[3167]: E0123 01:10:15.574757 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b9b5df79c-pfx6f" podUID="8938b6cd-2993-4177-a47a-bf7c96438cfc" Jan 23 01:10:19.260966 systemd[1]: Started sshd@21-10.200.8.21:22-10.200.16.10:37752.service - OpenSSH per-connection server daemon (10.200.16.10:37752). 
Jan 23 01:10:19.575057 kubelet[3167]: E0123 01:10:19.574578 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cb5db5c6d-685h6" podUID="9a0795c2-7ecf-4504-8071-c68e46a2784c" Jan 23 01:10:19.938888 sshd[5837]: Accepted publickey for core from 10.200.16.10 port 37752 ssh2: RSA SHA256:ZNqmZSbvXn2N0HVppthi7f6PjojzkUQ0aRQFpEvxwIk Jan 23 01:10:19.939488 sshd-session[5837]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:10:19.945287 systemd-logind[1676]: New session 24 of user core. Jan 23 01:10:19.951285 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 23 01:10:20.475827 sshd[5840]: Connection closed by 10.200.16.10 port 37752 Jan 23 01:10:20.476532 sshd-session[5837]: pam_unix(sshd:session): session closed for user core Jan 23 01:10:20.480883 systemd-logind[1676]: Session 24 logged out. Waiting for processes to exit. Jan 23 01:10:20.481738 systemd[1]: sshd@21-10.200.8.21:22-10.200.16.10:37752.service: Deactivated successfully. Jan 23 01:10:20.484412 systemd[1]: session-24.scope: Deactivated successfully. Jan 23 01:10:20.489878 systemd-logind[1676]: Removed session 24. 
Jan 23 01:10:22.573955 kubelet[3167]: E0123 01:10:22.573901 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5db5b8969f-b7ffs" podUID="464a8745-942a-406e-a6f7-99a7e252e57c" Jan 23 01:10:25.578816 containerd[1689]: time="2026-01-23T01:10:25.577726684Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 01:10:25.579250 kubelet[3167]: E0123 01:10:25.578105 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tdsjl" podUID="c928b9b3-da34-4326-8f7b-130857d457b5" Jan 23 01:10:25.579250 kubelet[3167]: E0123 01:10:25.578901 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-6cb5db5c6d-qkg5z" podUID="4dd5a4fb-a52c-4429-a2cb-1aa7fea80b6f" Jan 23 01:10:25.605455 systemd[1]: Started sshd@22-10.200.8.21:22-10.200.16.10:40306.service - OpenSSH per-connection server daemon (10.200.16.10:40306). Jan 23 01:10:25.828701 containerd[1689]: time="2026-01-23T01:10:25.828660186Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:10:25.831272 containerd[1689]: time="2026-01-23T01:10:25.831057616Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 01:10:25.831272 containerd[1689]: time="2026-01-23T01:10:25.831084072Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 01:10:25.831562 kubelet[3167]: E0123 01:10:25.831230 3167 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 01:10:25.831562 kubelet[3167]: E0123 01:10:25.831264 3167 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 01:10:25.831562 kubelet[3167]: E0123 01:10:25.831359 3167 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod 
whisker-8f6bd7b4f-tth7c_calico-system(2cc70e7d-4b4a-4947-a27a-3aa84d2bff8f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 01:10:25.832343 containerd[1689]: time="2026-01-23T01:10:25.832304309Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 01:10:26.071840 containerd[1689]: time="2026-01-23T01:10:26.071795062Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:10:26.074220 containerd[1689]: time="2026-01-23T01:10:26.074175100Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 01:10:26.074319 containerd[1689]: time="2026-01-23T01:10:26.074244777Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 01:10:26.074416 kubelet[3167]: E0123 01:10:26.074369 3167 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 01:10:26.074462 kubelet[3167]: E0123 01:10:26.074425 3167 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 01:10:26.074522 kubelet[3167]: E0123 01:10:26.074505 3167 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-8f6bd7b4f-tth7c_calico-system(2cc70e7d-4b4a-4947-a27a-3aa84d2bff8f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 01:10:26.074653 kubelet[3167]: E0123 01:10:26.074559 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8f6bd7b4f-tth7c" podUID="2cc70e7d-4b4a-4947-a27a-3aa84d2bff8f" Jan 23 01:10:26.287231 sshd[5852]: Accepted publickey for core from 10.200.16.10 port 40306 ssh2: RSA SHA256:ZNqmZSbvXn2N0HVppthi7f6PjojzkUQ0aRQFpEvxwIk Jan 23 01:10:26.288298 sshd-session[5852]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:10:26.291995 systemd-logind[1676]: New session 25 of user core. Jan 23 01:10:26.297320 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 23 01:10:26.575473 kubelet[3167]: E0123 01:10:26.575357 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-c5k7j" podUID="757abb7b-5fcc-4c56-ba6f-f09ed789238a" Jan 23 01:10:26.856932 sshd[5855]: Connection closed by 10.200.16.10 port 40306 Jan 23 01:10:26.857471 sshd-session[5852]: pam_unix(sshd:session): session closed for user core Jan 23 01:10:26.861627 systemd-logind[1676]: Session 25 logged out. Waiting for processes to exit. Jan 23 01:10:26.862060 systemd[1]: sshd@22-10.200.8.21:22-10.200.16.10:40306.service: Deactivated successfully. Jan 23 01:10:26.865499 systemd[1]: session-25.scope: Deactivated successfully. Jan 23 01:10:26.869057 systemd-logind[1676]: Removed session 25. 
Jan 23 01:10:28.574779 containerd[1689]: time="2026-01-23T01:10:28.574663472Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 01:10:28.813947 containerd[1689]: time="2026-01-23T01:10:28.813828590Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:10:28.816362 containerd[1689]: time="2026-01-23T01:10:28.816321633Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 01:10:28.816423 containerd[1689]: time="2026-01-23T01:10:28.816399923Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 01:10:28.816609 kubelet[3167]: E0123 01:10:28.816575 3167 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:10:28.816876 kubelet[3167]: E0123 01:10:28.816613 3167 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:10:28.816876 kubelet[3167]: E0123 01:10:28.816726 3167 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5b9b5df79c-pfx6f_calico-apiserver(8938b6cd-2993-4177-a47a-bf7c96438cfc): ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 01:10:28.816876 kubelet[3167]: E0123 01:10:28.816758 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b9b5df79c-pfx6f" podUID="8938b6cd-2993-4177-a47a-bf7c96438cfc"