Nov 6 00:24:42.987927 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed Nov 5 22:12:28 -00 2025
Nov 6 00:24:42.987955 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=59ca0b9e28689480cec05e5a7a50ffb2fd81e743a9e2986eb3bceb3b87f6702e
Nov 6 00:24:42.988005 kernel: BIOS-provided physical RAM map:
Nov 6 00:24:42.988013 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Nov 6 00:24:42.988020 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Nov 6 00:24:42.988027 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000044fdfff] usable
Nov 6 00:24:42.988035 kernel: BIOS-e820: [mem 0x00000000044fe000-0x00000000048fdfff] reserved
Nov 6 00:24:42.988042 kernel: BIOS-e820: [mem 0x00000000048fe000-0x000000003ff1efff] usable
Nov 6 00:24:42.988049 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ffc8fff] reserved
Nov 6 00:24:42.988057 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Nov 6 00:24:42.988064 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Nov 6 00:24:42.988071 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Nov 6 00:24:42.988077 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Nov 6 00:24:42.988084 kernel: printk: legacy bootconsole [earlyser0] enabled
Nov 6 00:24:42.988093 kernel: NX (Execute Disable) protection: active
Nov 6 00:24:42.988102 kernel: APIC: Static calls initialized
Nov 6 00:24:42.988109 kernel: efi: EFI v2.7 by Microsoft
Nov 6 00:24:42.988116 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3ead5718 RNG=0x3ffd2018
Nov 6 00:24:42.988123 kernel: random: crng init done
Nov 6 00:24:42.988131 kernel: secureboot: Secure boot disabled
Nov 6 00:24:42.988138 kernel: SMBIOS 3.1.0 present.
Nov 6 00:24:42.988145 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 01/28/2025 Nov 6 00:24:42.988153 kernel: DMI: Memory slots populated: 2/2 Nov 6 00:24:42.988160 kernel: Hypervisor detected: Microsoft Hyper-V Nov 6 00:24:42.988167 kernel: Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0x9e4e24, misc 0xe0bed7b2 Nov 6 00:24:42.988175 kernel: Hyper-V: Nested features: 0x3e0101 Nov 6 00:24:42.988183 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Nov 6 00:24:42.988190 kernel: Hyper-V: Using hypercall for remote TLB flush Nov 6 00:24:42.988198 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Nov 6 00:24:42.988205 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Nov 6 00:24:42.988212 kernel: tsc: Detected 2299.999 MHz processor Nov 6 00:24:42.988220 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Nov 6 00:24:42.988228 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Nov 6 00:24:42.988236 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x10000000000 Nov 6 00:24:42.988244 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Nov 6 00:24:42.988253 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Nov 6 00:24:42.988261 kernel: e820: update [mem 0x48000000-0xffffffff] usable ==> reserved Nov 6 00:24:42.988268 kernel: last_pfn = 0x40000 max_arch_pfn = 0x10000000000 Nov 6 00:24:42.988275 kernel: Using GB pages for direct mapping Nov 6 00:24:42.988283 kernel: ACPI: Early table checksum verification disabled Nov 6 00:24:42.988293 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Nov 6 00:24:42.988301 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 6 00:24:42.988311 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 6 00:24:42.988319 kernel: ACPI: DSDT 0x000000003FFD6000 01E27A (v02 MSFTVM DSDT01 00000001 INTL 20230628) Nov 6 00:24:42.988326 kernel: ACPI: FACS 0x000000003FFFE000 000040 Nov 6 00:24:42.988335 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 6 00:24:42.988342 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 6 00:24:42.988350 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 6 00:24:42.988358 kernel: ACPI: APIC 0x000000003FFD5000 000052 (v05 HVLITE HVLITETB 00000000 MSHV 00000000) Nov 6 00:24:42.988367 kernel: ACPI: SRAT 0x000000003FFD4000 0000A0 (v03 HVLITE HVLITETB 00000000 MSHV 00000000) Nov 6 00:24:42.988375 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 6 00:24:42.988383 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Nov 6 00:24:42.988391 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4279] Nov 6 00:24:42.988399 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Nov 6 00:24:42.988407 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Nov 6 00:24:42.988415 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Nov 6 00:24:42.988423 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Nov 6 00:24:42.988430 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5051] Nov 6 00:24:42.988440 
kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd409f] Nov 6 00:24:42.988447 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Nov 6 00:24:42.988455 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Nov 6 00:24:42.988463 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] Nov 6 00:24:42.988471 kernel: NUMA: Node 0 [mem 0x00001000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00001000-0x2bfffffff] Nov 6 00:24:42.988479 kernel: NODE_DATA(0) allocated [mem 0x2bfff8dc0-0x2bfffffff] Nov 6 00:24:42.988487 kernel: Zone ranges: Nov 6 00:24:42.988495 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 6 00:24:42.988503 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Nov 6 00:24:42.988512 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Nov 6 00:24:42.988519 kernel: Device empty Nov 6 00:24:42.988527 kernel: Movable zone start for each node Nov 6 00:24:42.988535 kernel: Early memory node ranges Nov 6 00:24:42.988543 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Nov 6 00:24:42.988550 kernel: node 0: [mem 0x0000000000100000-0x00000000044fdfff] Nov 6 00:24:42.988558 kernel: node 0: [mem 0x00000000048fe000-0x000000003ff1efff] Nov 6 00:24:42.988566 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Nov 6 00:24:42.988574 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Nov 6 00:24:42.988583 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Nov 6 00:24:42.988591 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 6 00:24:42.988598 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Nov 6 00:24:42.988606 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Nov 6 00:24:42.988614 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges Nov 6 00:24:42.988622 kernel: ACPI: PM-Timer IO Port: 0x408 Nov 6 00:24:42.988630 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Nov 6 00:24:42.988637 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 6 00:24:42.988645 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 6 00:24:42.988654 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Nov 6 00:24:42.988662 kernel: TSC deadline timer available Nov 6 00:24:42.988670 kernel: CPU topo: Max. logical packages: 1 Nov 6 00:24:42.988678 kernel: CPU topo: Max. logical dies: 1 Nov 6 00:24:42.988686 kernel: CPU topo: Max. dies per package: 1 Nov 6 00:24:42.988694 kernel: CPU topo: Max. threads per core: 2 Nov 6 00:24:42.988701 kernel: CPU topo: Num. cores per package: 1 Nov 6 00:24:42.988709 kernel: CPU topo: Num. 
threads per package: 2 Nov 6 00:24:42.988717 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Nov 6 00:24:42.988726 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Nov 6 00:24:42.988734 kernel: Booting paravirtualized kernel on Hyper-V Nov 6 00:24:42.988742 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 6 00:24:42.988750 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Nov 6 00:24:42.988758 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Nov 6 00:24:42.988766 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Nov 6 00:24:42.988774 kernel: pcpu-alloc: [0] 0 1 Nov 6 00:24:42.988781 kernel: Hyper-V: PV spinlocks enabled Nov 6 00:24:42.988789 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Nov 6 00:24:42.988799 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=59ca0b9e28689480cec05e5a7a50ffb2fd81e743a9e2986eb3bceb3b87f6702e Nov 6 00:24:42.988808 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Nov 6 00:24:42.988816 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Nov 6 00:24:42.988823 kernel: Fallback order for Node 0: 0 Nov 6 00:24:42.988831 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2095807 Nov 6 00:24:42.988839 kernel: Policy zone: Normal Nov 6 00:24:42.988847 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 6 00:24:42.988855 kernel: software IO TLB: area num 2. Nov 6 00:24:42.988864 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Nov 6 00:24:42.988872 kernel: ftrace: allocating 40021 entries in 157 pages Nov 6 00:24:42.988880 kernel: ftrace: allocated 157 pages with 5 groups Nov 6 00:24:42.988887 kernel: Dynamic Preempt: voluntary Nov 6 00:24:42.988895 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 6 00:24:42.988904 kernel: rcu: RCU event tracing is enabled. Nov 6 00:24:42.988913 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Nov 6 00:24:42.988928 kernel: Trampoline variant of Tasks RCU enabled. Nov 6 00:24:42.988936 kernel: Rude variant of Tasks RCU enabled. Nov 6 00:24:42.988945 kernel: Tracing variant of Tasks RCU enabled. Nov 6 00:24:42.988953 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 6 00:24:42.988962 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Nov 6 00:24:42.988985 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 6 00:24:42.988994 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 6 00:24:42.989002 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 6 00:24:42.989011 kernel: Using NULL legacy PIC Nov 6 00:24:42.989019 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Nov 6 00:24:42.989030 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Nov 6 00:24:42.989038 kernel: Console: colour dummy device 80x25 Nov 6 00:24:42.989047 kernel: printk: legacy console [tty1] enabled Nov 6 00:24:42.989055 kernel: printk: legacy console [ttyS0] enabled Nov 6 00:24:42.989064 kernel: printk: legacy bootconsole [earlyser0] disabled Nov 6 00:24:42.989073 kernel: ACPI: Core revision 20240827 Nov 6 00:24:42.989081 kernel: Failed to register legacy timer interrupt Nov 6 00:24:42.989090 kernel: APIC: Switch to symmetric I/O mode setup Nov 6 00:24:42.989098 kernel: x2apic enabled Nov 6 00:24:42.989108 kernel: APIC: Switched APIC routing to: physical x2apic Nov 6 00:24:42.989116 kernel: Hyper-V: Host Build 10.0.26100.1414-1-0 Nov 6 00:24:42.989125 kernel: Hyper-V: enabling crash_kexec_post_notifiers Nov 6 00:24:42.989133 kernel: Hyper-V: Disabling IBT because of Hyper-V bug Nov 6 00:24:42.989142 kernel: Hyper-V: Using IPI hypercalls Nov 6 00:24:42.989150 kernel: APIC: send_IPI() replaced with hv_send_ipi() Nov 6 00:24:42.989159 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Nov 6 00:24:42.989167 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Nov 6 00:24:42.989176 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Nov 6 00:24:42.989186 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Nov 6 00:24:42.989195 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Nov 6 00:24:42.989203 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2127345424d, max_idle_ns: 440795318347 ns Nov 6 00:24:42.989212 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 4599.99 BogoMIPS (lpj=2299999) Nov 6 00:24:42.989220 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Nov 6 00:24:42.989229 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Nov 6 00:24:42.989237 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Nov 6 00:24:42.989246 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 6 00:24:42.989254 kernel: Spectre V2 : Mitigation: Retpolines Nov 6 00:24:42.989263 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Nov 6 00:24:42.989272 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Nov 6 00:24:42.989281 kernel: RETBleed: Vulnerable Nov 6 00:24:42.989289 kernel: Speculative Store Bypass: Vulnerable Nov 6 00:24:42.989297 kernel: active return thunk: its_return_thunk Nov 6 00:24:42.989306 kernel: ITS: Mitigation: Aligned branch/return thunks Nov 6 00:24:42.989314 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 6 00:24:42.989322 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 6 00:24:42.989331 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 6 00:24:42.989339 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Nov 6 00:24:42.989347 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Nov 6 00:24:42.989357 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Nov 6 00:24:42.989366 kernel: x86/fpu: Supporting XSAVE feature 0x800: 'Control-flow User registers' Nov 6 00:24:42.989374 kernel: x86/fpu: Supporting XSAVE feature 0x20000: 'AMX Tile config' Nov 6 00:24:42.989382 kernel: x86/fpu: Supporting XSAVE feature 0x40000: 'AMX Tile data' Nov 6 00:24:42.989390 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 6 00:24:42.989399 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Nov 6 00:24:42.989407 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Nov 6 00:24:42.989415 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Nov 6 00:24:42.989424 kernel: x86/fpu: xstate_offset[11]: 2432, xstate_sizes[11]: 16 Nov 6 00:24:42.989432 kernel: x86/fpu: xstate_offset[17]: 2496, xstate_sizes[17]: 64 Nov 6 00:24:42.989441 kernel: x86/fpu: xstate_offset[18]: 2560, xstate_sizes[18]: 8192 Nov 6 00:24:42.989451 kernel: x86/fpu: Enabled xstate features 0x608e7, context size is 10752 bytes, using 'compacted' format. Nov 6 00:24:42.989460 kernel: Freeing SMP alternatives memory: 32K Nov 6 00:24:42.989468 kernel: pid_max: default: 32768 minimum: 301 Nov 6 00:24:42.989477 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Nov 6 00:24:42.989485 kernel: landlock: Up and running. Nov 6 00:24:42.989494 kernel: SELinux: Initializing. Nov 6 00:24:42.989503 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Nov 6 00:24:42.989511 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Nov 6 00:24:42.989520 kernel: smpboot: CPU0: Intel INTEL(R) XEON(R) PLATINUM 8573C (family: 0x6, model: 0xcf, stepping: 0x2) Nov 6 00:24:42.989529 kernel: Performance Events: unsupported p6 CPU model 207 no PMU driver, software events only. Nov 6 00:24:42.989537 kernel: signal: max sigframe size: 11952 Nov 6 00:24:42.989548 kernel: rcu: Hierarchical SRCU implementation. Nov 6 00:24:42.989557 kernel: rcu: Max phase no-delay instances is 400. Nov 6 00:24:42.989565 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Nov 6 00:24:42.989574 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Nov 6 00:24:42.989582 kernel: smp: Bringing up secondary CPUs ... Nov 6 00:24:42.989590 kernel: smpboot: x86: Booting SMP configuration: Nov 6 00:24:42.989597 kernel: .... 
node #0, CPUs: #1 Nov 6 00:24:42.989605 kernel: smp: Brought up 1 node, 2 CPUs Nov 6 00:24:42.989613 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Nov 6 00:24:42.989621 kernel: Memory: 8070876K/8383228K available (14336K kernel code, 2436K rwdata, 26048K rodata, 45548K init, 1180K bss, 306136K reserved, 0K cma-reserved) Nov 6 00:24:42.989630 kernel: devtmpfs: initialized Nov 6 00:24:42.989638 kernel: x86/mm: Memory block size: 128MB Nov 6 00:24:42.989645 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Nov 6 00:24:42.989653 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 6 00:24:42.989661 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Nov 6 00:24:42.989669 kernel: pinctrl core: initialized pinctrl subsystem Nov 6 00:24:42.989677 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 6 00:24:42.989684 kernel: audit: initializing netlink subsys (disabled) Nov 6 00:24:42.989694 kernel: audit: type=2000 audit(1762388680.029:1): state=initialized audit_enabled=0 res=1 Nov 6 00:24:42.989701 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 6 00:24:42.989709 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 6 00:24:42.989716 kernel: cpuidle: using governor menu Nov 6 00:24:42.989724 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 6 00:24:42.989732 kernel: dca service started, version 1.12.1 Nov 6 00:24:42.989740 kernel: e820: reserve RAM buffer [mem 0x044fe000-0x07ffffff] Nov 6 00:24:42.989747 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff] Nov 6 00:24:42.989755 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Nov 6 00:24:42.989764 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 6 00:24:42.989771 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Nov 6 00:24:42.989780 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 6 00:24:42.989788 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 6 00:24:42.989796 kernel: ACPI: Added _OSI(Module Device) Nov 6 00:24:42.989805 kernel: ACPI: Added _OSI(Processor Device) Nov 6 00:24:42.989813 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 6 00:24:42.989821 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 6 00:24:42.989830 kernel: ACPI: Interpreter enabled Nov 6 00:24:42.989839 kernel: ACPI: PM: (supports S0 S5) Nov 6 00:24:42.989848 kernel: ACPI: Using IOAPIC for interrupt routing Nov 6 00:24:42.989857 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 6 00:24:42.989864 kernel: PCI: Ignoring E820 reservations for host bridge windows Nov 6 00:24:42.989872 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Nov 6 00:24:42.989880 kernel: iommu: Default domain type: Translated Nov 6 00:24:42.989888 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 6 00:24:42.989895 kernel: efivars: Registered efivars operations Nov 6 00:24:42.989903 kernel: PCI: Using ACPI for IRQ routing Nov 6 00:24:42.989912 kernel: PCI: System does not support PCI Nov 6 00:24:42.989919 kernel: vgaarb: loaded Nov 6 00:24:42.989927 kernel: clocksource: Switched to clocksource tsc-early Nov 6 00:24:42.989935 kernel: VFS: Disk quotas dquot_6.6.0 Nov 6 00:24:42.989942 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 6 00:24:42.989950 kernel: pnp: PnP ACPI init Nov 6 00:24:42.989958 kernel: pnp: PnP ACPI: found 3 devices Nov 6 00:24:42.989965 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 6 00:24:42.992831 kernel: NET: Registered PF_INET protocol family Nov 6 00:24:42.992844 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Nov 6 00:24:42.992852 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Nov 6 00:24:42.992861 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 6 00:24:42.992868 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 6 00:24:42.992876 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Nov 6 00:24:42.992884 kernel: TCP: Hash tables configured (established 65536 bind 65536) Nov 6 00:24:42.992891 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Nov 6 00:24:42.992899 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Nov 6 00:24:42.992906 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 6 00:24:42.992916 kernel: NET: Registered PF_XDP protocol family Nov 6 00:24:42.992923 kernel: PCI: CLS 0 bytes, default 64 Nov 6 00:24:42.992930 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Nov 6 00:24:42.992938 kernel: software IO TLB: mapped [mem 0x000000003a9c3000-0x000000003e9c3000] (64MB) Nov 6 00:24:42.992945 kernel: RAPL PMU: API unit is 2^-32 Joules, 1 fixed counters, 10737418240 ms ovfl timer Nov 6 00:24:42.992953 kernel: RAPL PMU: hw unit of domain psys 2^-0 Joules Nov 6 00:24:42.992960 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2127345424d, max_idle_ns: 440795318347 ns Nov 6 
00:24:42.992983 kernel: clocksource: Switched to clocksource tsc Nov 6 00:24:42.992992 kernel: Initialise system trusted keyrings Nov 6 00:24:42.993001 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Nov 6 00:24:42.993008 kernel: Key type asymmetric registered Nov 6 00:24:42.993015 kernel: Asymmetric key parser 'x509' registered Nov 6 00:24:42.993023 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Nov 6 00:24:42.993030 kernel: io scheduler mq-deadline registered Nov 6 00:24:42.993037 kernel: io scheduler kyber registered Nov 6 00:24:42.993045 kernel: io scheduler bfq registered Nov 6 00:24:42.993052 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 6 00:24:42.993059 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 6 00:24:42.993068 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 6 00:24:42.993075 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Nov 6 00:24:42.993083 kernel: serial8250: ttyS2 at I/O 0x3e8 (irq = 4, base_baud = 115200) is a 16550A Nov 6 00:24:42.993090 kernel: i8042: PNP: No PS/2 controller found. Nov 6 00:24:42.993202 kernel: rtc_cmos 00:02: registered as rtc0 Nov 6 00:24:42.993266 kernel: rtc_cmos 00:02: setting system clock to 2025-11-06T00:24:42 UTC (1762388682) Nov 6 00:24:42.993326 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Nov 6 00:24:42.993336 kernel: intel_pstate: Intel P-state driver initializing Nov 6 00:24:42.993344 kernel: efifb: probing for efifb Nov 6 00:24:42.993351 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Nov 6 00:24:42.993358 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Nov 6 00:24:42.993366 kernel: efifb: scrolling: redraw Nov 6 00:24:42.993373 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Nov 6 00:24:42.993380 kernel: Console: switching to colour frame buffer device 128x48 Nov 6 00:24:42.993388 kernel: fb0: EFI VGA frame buffer device Nov 6 00:24:42.993395 kernel: pstore: Using crash dump compression: deflate Nov 6 00:24:42.993404 kernel: pstore: Registered efi_pstore as persistent store backend Nov 6 00:24:42.993411 kernel: NET: Registered PF_INET6 protocol family Nov 6 00:24:42.993418 kernel: Segment Routing with IPv6 Nov 6 00:24:42.993425 kernel: In-situ OAM (IOAM) with IPv6 Nov 6 00:24:42.993433 kernel: NET: Registered PF_PACKET protocol family Nov 6 00:24:42.993440 kernel: Key type dns_resolver registered Nov 6 00:24:42.993447 kernel: IPI shorthand broadcast: enabled Nov 6 00:24:42.993454 kernel: sched_clock: Marking stable (2782231548, 108168710)->(3195602117, -305201859) Nov 6 00:24:42.993462 kernel: registered taskstats version 1 Nov 6 00:24:42.993469 kernel: Loading compiled-in X.509 certificates Nov 6 00:24:42.993478 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: f906521ec29cbf079ae365554bad8eb8ed6ecb31' Nov 6 00:24:42.993485 kernel: Demotion targets for Node 0: null Nov 6 00:24:42.993492 kernel: Key type .fscrypt registered Nov 6 00:24:42.993499 kernel: Key type fscrypt-provisioning registered Nov 6 00:24:42.993507 kernel: ima: No TPM chip found, activating TPM-bypass! Nov 6 00:24:42.993514 kernel: ima: Allocated hash algorithm: sha1 Nov 6 00:24:42.993521 kernel: ima: No architecture policies found Nov 6 00:24:42.993528 kernel: clk: Disabling unused clocks Nov 6 00:24:42.993536 kernel: Warning: unable to open an initial console. 
Nov 6 00:24:42.993545 kernel: Freeing unused kernel image (initmem) memory: 45548K Nov 6 00:24:42.993552 kernel: Write protecting the kernel read-only data: 40960k Nov 6 00:24:42.993559 kernel: Freeing unused kernel image (rodata/data gap) memory: 576K Nov 6 00:24:42.993567 kernel: Run /init as init process Nov 6 00:24:42.993574 kernel: with arguments: Nov 6 00:24:42.993581 kernel: /init Nov 6 00:24:42.993588 kernel: with environment: Nov 6 00:24:42.993595 kernel: HOME=/ Nov 6 00:24:42.993602 kernel: TERM=linux Nov 6 00:24:42.993612 systemd[1]: Successfully made /usr/ read-only. Nov 6 00:24:42.993622 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 6 00:24:42.993631 systemd[1]: Detected virtualization microsoft. Nov 6 00:24:42.993638 systemd[1]: Detected architecture x86-64. Nov 6 00:24:42.993646 systemd[1]: Running in initrd. Nov 6 00:24:42.993653 systemd[1]: No hostname configured, using default hostname. Nov 6 00:24:42.993661 systemd[1]: Hostname set to . Nov 6 00:24:42.993670 systemd[1]: Initializing machine ID from random generator. Nov 6 00:24:42.993678 systemd[1]: Queued start job for default target initrd.target. Nov 6 00:24:42.993686 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 6 00:24:42.993694 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 6 00:24:42.993703 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 6 00:24:42.993711 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 6 00:24:42.993718 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 6 00:24:42.993728 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 6 00:24:42.993737 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 6 00:24:42.993745 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 6 00:24:42.993753 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 6 00:24:42.993760 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 6 00:24:42.993768 systemd[1]: Reached target paths.target - Path Units. Nov 6 00:24:42.993776 systemd[1]: Reached target slices.target - Slice Units. Nov 6 00:24:42.993784 systemd[1]: Reached target swap.target - Swaps. Nov 6 00:24:42.993793 systemd[1]: Reached target timers.target - Timer Units. Nov 6 00:24:42.993801 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 6 00:24:42.993809 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 6 00:24:42.993817 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 6 00:24:42.993825 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Nov 6 00:24:42.993832 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Nov 6 00:24:42.993840 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 6 00:24:42.993848 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 6 00:24:42.993856 systemd[1]: Reached target sockets.target - Socket Units. Nov 6 00:24:42.993865 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 6 00:24:42.993872 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 6 00:24:42.993880 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 6 00:24:42.993888 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Nov 6 00:24:42.993896 systemd[1]: Starting systemd-fsck-usr.service... Nov 6 00:24:42.993904 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 6 00:24:42.993911 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 6 00:24:42.993919 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 6 00:24:42.993936 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 6 00:24:42.993946 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 6 00:24:42.993982 systemd-journald[185]: Collecting audit messages is disabled. Nov 6 00:24:42.994003 systemd[1]: Finished systemd-fsck-usr.service. Nov 6 00:24:42.994011 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 00:24:42.994020 systemd-journald[185]: Journal started Nov 6 00:24:42.994042 systemd-journald[185]: Runtime Journal (/run/log/journal/9792407f8fcd4a3593b24e60c7bba04d) is 8M, max 158.6M, 150.6M free. Nov 6 00:24:42.980210 systemd-modules-load[187]: Inserted module 'overlay' Nov 6 00:24:43.001007 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 6 00:24:43.008934 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 6 00:24:43.008979 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 6 00:24:43.014028 systemd[1]: Started systemd-journald.service - Journal Service. Nov 6 00:24:43.014066 kernel: Bridge firewalling registered Nov 6 00:24:43.014462 systemd-modules-load[187]: Inserted module 'br_netfilter' Nov 6 00:24:43.015964 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 6 00:24:43.018807 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 6 00:24:43.021043 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 6 00:24:43.036221 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 6 00:24:43.041739 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 6 00:24:43.043357 systemd-tmpfiles[206]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Nov 6 00:24:43.047705 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 6 00:24:43.049784 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 6 00:24:43.051477 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Nov 6 00:24:43.060321 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 6 00:24:43.069014 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 6 00:24:43.072327 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 6 00:24:43.082590 dracut-cmdline[222]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=59ca0b9e28689480cec05e5a7a50ffb2fd81e743a9e2986eb3bceb3b87f6702e Nov 6 00:24:43.105342 systemd-resolved[227]: Positive Trust Anchors: Nov 6 00:24:43.106851 systemd-resolved[227]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 6 00:24:43.109942 systemd-resolved[227]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 6 00:24:43.112878 systemd-resolved[227]: Defaulting to hostname 'linux'. Nov 6 00:24:43.128032 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 6 00:24:43.132100 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 6 00:24:43.158988 kernel: SCSI subsystem initialized Nov 6 00:24:43.165983 kernel: Loading iSCSI transport class v2.0-870. Nov 6 00:24:43.174989 kernel: iscsi: registered transport (tcp) Nov 6 00:24:43.191077 kernel: iscsi: registered transport (qla4xxx) Nov 6 00:24:43.191113 kernel: QLogic iSCSI HBA Driver Nov 6 00:24:43.202720 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 6 00:24:43.215139 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 6 00:24:43.220462 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 6 00:24:43.247350 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 6 00:24:43.253011 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 6 00:24:43.296990 kernel: raid6: avx512x4 gen() 46164 MB/s Nov 6 00:24:43.314980 kernel: raid6: avx512x2 gen() 45212 MB/s Nov 6 00:24:43.332984 kernel: raid6: avx512x1 gen() 27275 MB/s Nov 6 00:24:43.350981 kernel: raid6: avx2x4 gen() 38520 MB/s Nov 6 00:24:43.367979 kernel: raid6: avx2x2 gen() 42021 MB/s Nov 6 00:24:43.385516 kernel: raid6: avx2x1 gen() 32532 MB/s Nov 6 00:24:43.385541 kernel: raid6: using algorithm avx512x4 gen() 46164 MB/s Nov 6 00:24:43.404173 kernel: raid6: .... 
xor() 7809 MB/s, rmw enabled Nov 6 00:24:43.404256 kernel: raid6: using avx512x2 recovery algorithm Nov 6 00:24:43.420987 kernel: xor: automatically using best checksumming function avx Nov 6 00:24:43.529986 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 6 00:24:43.534538 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 6 00:24:43.537092 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 6 00:24:43.560233 systemd-udevd[435]: Using default interface naming scheme 'v255'. Nov 6 00:24:43.564932 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 6 00:24:43.571213 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 6 00:24:43.593963 dracut-pre-trigger[444]: rd.md=0: removing MD RAID activation Nov 6 00:24:43.611242 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 6 00:24:43.614097 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 6 00:24:43.641988 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 6 00:24:43.648881 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 6 00:24:43.686000 kernel: cryptd: max_cpu_qlen set to 1000 Nov 6 00:24:43.708994 kernel: AES CTR mode by8 optimization enabled Nov 6 00:24:43.713990 kernel: hv_vmbus: Vmbus version:5.3 Nov 6 00:24:43.720498 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 6 00:24:43.720551 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 00:24:43.723266 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 6 00:24:43.733100 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 6 00:24:43.747537 kernel: pps_core: LinuxPPS API ver. 1 registered Nov 6 00:24:43.747575 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Nov 6 00:24:43.751989 kernel: hv_vmbus: registering driver hyperv_keyboard Nov 6 00:24:43.759126 kernel: hv_vmbus: registering driver hv_netvsc Nov 6 00:24:43.766314 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Nov 6 00:24:43.768993 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 6 00:24:43.786420 kernel: hv_vmbus: registering driver hv_pci Nov 6 00:24:43.786463 kernel: PTP clock support registered Nov 6 00:24:43.785147 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Nov 6 00:24:43.796142 kernel: hv_vmbus: registering driver hv_storvsc Nov 6 00:24:43.796161 kernel: hv_vmbus: registering driver hid_hyperv Nov 6 00:24:43.800215 kernel: scsi host0: storvsc_host_t Nov 6 00:24:43.800740 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Nov 6 00:24:43.804398 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Nov 6 00:24:43.815542 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI VMBus probing: Using version 0x10004 Nov 6 00:24:43.815878 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d72f09f (unnamed net_device) (uninitialized): VF slot 1 added Nov 6 00:24:43.817402 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Nov 6 00:24:43.817581 kernel: hv_utils: Registering HyperV Utility Driver Nov 6 00:24:43.818289 kernel: hv_vmbus: registering driver hv_utils Nov 6 00:24:43.825009 kernel: hv_utils: Heartbeat IC version 3.0 Nov 6 00:24:43.825154 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI host bridge to bus c05b:00 Nov 6 00:24:43.828284 kernel: hv_utils: TimeSync IC version 4.0 Nov 6 00:24:43.828372 kernel: pci_bus c05b:00: root bus resource [mem 0xfc0000000-0xfc007ffff window] Nov 6 00:24:43.828844 kernel: hv_utils: Shutdown IC version 3.2 Nov 6 00:24:43.828860 kernel: pci_bus c05b:00: No busn resource found for root bus, will use [bus 00-ff] Nov 6 00:24:43.886020 systemd-resolved[227]: Clock change detected. Flushing caches. Nov 6 00:24:43.898891 kernel: pci c05b:00:00.0: [1414:00a9] type 00 class 0x010802 PCIe Endpoint Nov 6 00:24:43.898933 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Nov 6 00:24:43.899048 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Nov 6 00:24:43.901145 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit] Nov 6 00:24:43.901894 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Nov 6 00:24:43.917568 kernel: pci_bus c05b:00: busn_res: [bus 00-ff] end is updated to 00 Nov 6 00:24:43.917742 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]: assigned Nov 6 00:24:43.932553 kernel: nvme nvme0: pci function c05b:00:00.0 Nov 6 00:24:43.932742 kernel: nvme c05b:00:00.0: enabling device (0000 -> 0002) Nov 6 00:24:43.937028 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#123 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Nov 6 00:24:43.954900 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#105 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Nov 6 00:24:44.095988 kernel: nvme nvme0: 2/0/0 default/read/poll queues Nov 6 00:24:44.109895 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Nov 6 00:24:44.378894 kernel: nvme nvme0: using unchecked data buffer Nov 6 00:24:44.582997 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - MSFT NVMe Accelerator v1.0 EFI-SYSTEM. Nov 6 00:24:44.598614 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - MSFT NVMe Accelerator v1.0 USR-A. Nov 6 00:24:44.599134 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - MSFT NVMe Accelerator v1.0 USR-A. Nov 6 00:24:44.608624 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - MSFT NVMe Accelerator v1.0 ROOT. Nov 6 00:24:44.612150 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 6 00:24:44.641769 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. 
Nov 6 00:24:44.788258 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 6 00:24:44.791586 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 6 00:24:44.792620 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 6 00:24:44.792641 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 6 00:24:44.793973 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 6 00:24:44.810065 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 6 00:24:44.885597 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI VMBus probing: Using version 0x10004 Nov 6 00:24:44.885761 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI host bridge to bus 7870:00 Nov 6 00:24:44.888491 kernel: pci_bus 7870:00: root bus resource [mem 0xfc2000000-0xfc4007fff window] Nov 6 00:24:44.889964 kernel: pci_bus 7870:00: No busn resource found for root bus, will use [bus 00-ff] Nov 6 00:24:44.895065 kernel: pci 7870:00:00.0: [1414:00ba] type 00 class 0x020000 PCIe Endpoint Nov 6 00:24:44.898907 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref] Nov 6 00:24:44.903987 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref] Nov 6 00:24:44.904015 kernel: pci 7870:00:00.0: enabling Extended Tags Nov 6 00:24:44.921079 kernel: pci_bus 7870:00: busn_res: [bus 00-ff] end is updated to 00 Nov 6 00:24:44.921225 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]: assigned Nov 6 00:24:44.924128 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]: assigned Nov 6 00:24:44.927735 kernel: mana 7870:00:00.0: enabling device (0000 -> 0002) Nov 6 00:24:44.937903 kernel: mana 7870:00:00.0: Microsoft Azure Network Adapter protocol version: 0.1.1 Nov 6 00:24:44.940617 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d72f09f eth0: VF registering: eth1 Nov 6 00:24:44.940761 kernel: mana 7870:00:00.0 eth1: joined to eth0 Nov 6 00:24:44.944904 kernel: mana 7870:00:00.0 enP30832s1: renamed from eth1 Nov 6 00:24:45.638031 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Nov 6 00:24:45.638628 disk-uuid[643]: The operation has completed successfully. Nov 6 00:24:45.710036 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 6 00:24:45.710119 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 6 00:24:45.726808 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 6 00:24:45.740003 sh[692]: Success Nov 6 00:24:45.769303 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 6 00:24:45.769341 kernel: device-mapper: uevent: version 1.0.3 Nov 6 00:24:45.770280 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Nov 6 00:24:45.777901 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Nov 6 00:24:46.040315 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 6 00:24:46.045971 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 6 00:24:46.057748 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Nov 6 00:24:46.071914 kernel: BTRFS: device fsid 85d805c5-984c-4a6a-aaeb-49fff3689175 devid 1 transid 38 /dev/mapper/usr (254:0) scanned by mount (705) Nov 6 00:24:46.073895 kernel: BTRFS info (device dm-0): first mount of filesystem 85d805c5-984c-4a6a-aaeb-49fff3689175 Nov 6 00:24:46.073924 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 6 00:24:46.453037 kernel: BTRFS info (device dm-0): enabling ssd optimizations Nov 6 00:24:46.453123 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 6 00:24:46.454241 kernel: BTRFS info (device dm-0): enabling free space tree Nov 6 00:24:46.486759 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 6 00:24:46.491291 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Nov 6 00:24:46.492622 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 6 00:24:46.493976 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 6 00:24:46.499563 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 6 00:24:46.519923 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:12) scanned by mount (728) Nov 6 00:24:46.524590 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem ca2bb832-66d5-4dca-a6d2-cbf7440d9381 Nov 6 00:24:46.524630 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Nov 6 00:24:46.570560 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 6 00:24:46.575986 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 6 00:24:46.580051 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Nov 6 00:24:46.580067 kernel: BTRFS info (device nvme0n1p6): turning on async discard Nov 6 00:24:46.580077 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Nov 6 00:24:46.583903 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem ca2bb832-66d5-4dca-a6d2-cbf7440d9381 Nov 6 00:24:46.584536 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 6 00:24:46.590275 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 6 00:24:46.609032 systemd-networkd[870]: lo: Link UP Nov 6 00:24:46.609039 systemd-networkd[870]: lo: Gained carrier Nov 6 00:24:46.610385 systemd-networkd[870]: Enumeration completed Nov 6 00:24:46.613949 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Nov 6 00:24:46.610954 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 6 00:24:46.612263 systemd-networkd[870]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 6 00:24:46.612267 systemd-networkd[870]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Nov 6 00:24:46.617895 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Nov 6 00:24:46.620894 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d72f09f eth0: Data path switched to VF: enP30832s1 Nov 6 00:24:46.621375 systemd-networkd[870]: enP30832s1: Link UP Nov 6 00:24:46.621441 systemd-networkd[870]: eth0: Link UP Nov 6 00:24:46.621574 systemd-networkd[870]: eth0: Gained carrier Nov 6 00:24:46.621585 systemd-networkd[870]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 6 00:24:46.624758 systemd[1]: Reached target network.target - Network. Nov 6 00:24:46.628188 systemd-networkd[870]: enP30832s1: Gained carrier Nov 6 00:24:46.640913 systemd-networkd[870]: eth0: DHCPv4 address 10.200.8.20/24, gateway 10.200.8.1 acquired from 168.63.129.16 Nov 6 00:24:47.635670 ignition[875]: Ignition 2.22.0 Nov 6 00:24:47.635683 ignition[875]: Stage: fetch-offline Nov 6 00:24:47.635795 ignition[875]: no configs at "/usr/lib/ignition/base.d" Nov 6 00:24:47.635802 ignition[875]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 6 00:24:47.635933 ignition[875]: parsed url from cmdline: "" Nov 6 00:24:47.635936 ignition[875]: no config URL provided Nov 6 00:24:47.635941 ignition[875]: reading system config file "/usr/lib/ignition/user.ign" Nov 6 00:24:47.640958 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 6 00:24:47.635946 ignition[875]: no config at "/usr/lib/ignition/user.ign" Nov 6 00:24:47.646805 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Nov 6 00:24:47.635951 ignition[875]: failed to fetch config: resource requires networking Nov 6 00:24:47.639111 ignition[875]: Ignition finished successfully Nov 6 00:24:47.678603 ignition[886]: Ignition 2.22.0 Nov 6 00:24:47.678613 ignition[886]: Stage: fetch Nov 6 00:24:47.678799 ignition[886]: no configs at "/usr/lib/ignition/base.d" Nov 6 00:24:47.678806 ignition[886]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 6 00:24:47.678875 ignition[886]: parsed url from cmdline: "" Nov 6 00:24:47.678878 ignition[886]: no config URL provided Nov 6 00:24:47.678897 ignition[886]: reading system config file "/usr/lib/ignition/user.ign" Nov 6 00:24:47.678902 ignition[886]: no config at "/usr/lib/ignition/user.ign" Nov 6 00:24:47.678920 ignition[886]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Nov 6 00:24:47.745056 ignition[886]: GET result: OK Nov 6 00:24:47.745116 ignition[886]: config has been read from IMDS userdata Nov 6 00:24:47.745141 ignition[886]: parsing config with SHA512: 5cd2b7c2122824ba3ca0068a26e26282ef9ca92bd13cb4773d75a4ba513b5bd2c77b822d13264513665c471e376821e57744fb14daf8e76f30ab02cfebf074c6 Nov 6 00:24:47.748424 unknown[886]: fetched base config from "system" Nov 6 00:24:47.748432 unknown[886]: fetched base config from "system" Nov 6 00:24:47.748742 ignition[886]: fetch: fetch complete Nov 6 00:24:47.748437 unknown[886]: fetched user config from "azure" Nov 6 00:24:47.748747 ignition[886]: fetch: fetch passed Nov 6 00:24:47.750662 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Nov 6 00:24:47.748780 ignition[886]: Ignition finished successfully Nov 6 00:24:47.760991 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Nov 6 00:24:47.792141 ignition[892]: Ignition 2.22.0 Nov 6 00:24:47.792150 ignition[892]: Stage: kargs Nov 6 00:24:47.792319 ignition[892]: no configs at "/usr/lib/ignition/base.d" Nov 6 00:24:47.792328 ignition[892]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 6 00:24:47.795409 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 6 00:24:47.793221 ignition[892]: kargs: kargs passed Nov 6 00:24:47.800210 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 6 00:24:47.793257 ignition[892]: Ignition finished successfully Nov 6 00:24:47.808964 systemd-networkd[870]: eth0: Gained IPv6LL Nov 6 00:24:47.821402 ignition[899]: Ignition 2.22.0 Nov 6 00:24:47.821412 ignition[899]: Stage: disks Nov 6 00:24:47.821605 ignition[899]: no configs at "/usr/lib/ignition/base.d" Nov 6 00:24:47.824474 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 6 00:24:47.821612 ignition[899]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 6 00:24:47.827627 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 6 00:24:47.822435 ignition[899]: disks: disks passed Nov 6 00:24:47.829458 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 6 00:24:47.822464 ignition[899]: Ignition finished successfully Nov 6 00:24:47.836118 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 6 00:24:47.840759 systemd[1]: Reached target sysinit.target - System Initialization. Nov 6 00:24:47.843164 systemd[1]: Reached target basic.target - Basic System. Nov 6 00:24:47.848323 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 6 00:24:47.916346 systemd-fsck[908]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks Nov 6 00:24:47.920311 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 6 00:24:47.924785 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 6 00:24:50.012897 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 25ee01aa-0270-4de7-b5da-d8936d968d16 r/w with ordered data mode. Quota mode: none. Nov 6 00:24:50.013436 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 6 00:24:50.015270 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 6 00:24:50.045795 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 6 00:24:50.064960 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 6 00:24:50.069770 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Nov 6 00:24:50.071419 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 6 00:24:50.071449 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 6 00:24:50.082451 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 6 00:24:50.084396 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Nov 6 00:24:50.094385 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:12) scanned by mount (917) Nov 6 00:24:50.094415 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem ca2bb832-66d5-4dca-a6d2-cbf7440d9381 Nov 6 00:24:50.094438 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Nov 6 00:24:50.100921 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Nov 6 00:24:50.100964 kernel: BTRFS info (device nvme0n1p6): turning on async discard Nov 6 00:24:50.102373 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Nov 6 00:24:50.103375 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 6 00:24:50.525429 coreos-metadata[919]: Nov 06 00:24:50.525 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Nov 6 00:24:50.530159 coreos-metadata[919]: Nov 06 00:24:50.530 INFO Fetch successful Nov 6 00:24:50.531384 coreos-metadata[919]: Nov 06 00:24:50.530 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Nov 6 00:24:50.539268 coreos-metadata[919]: Nov 06 00:24:50.539 INFO Fetch successful Nov 6 00:24:50.555288 coreos-metadata[919]: Nov 06 00:24:50.555 INFO wrote hostname ci-4459.1.0-n-3bced53249 to /sysroot/etc/hostname Nov 6 00:24:50.558610 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 6 00:24:50.827286 initrd-setup-root[947]: cut: /sysroot/etc/passwd: No such file or directory Nov 6 00:24:50.859685 initrd-setup-root[954]: cut: /sysroot/etc/group: No such file or directory Nov 6 00:24:50.890808 initrd-setup-root[961]: cut: /sysroot/etc/shadow: No such file or directory Nov 6 00:24:50.908527 initrd-setup-root[968]: cut: /sysroot/etc/gshadow: No such file or directory Nov 6 00:24:52.051969 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 6 00:24:52.056354 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 6 00:24:52.059976 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 6 00:24:52.071278 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 6 00:24:52.074910 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem ca2bb832-66d5-4dca-a6d2-cbf7440d9381 Nov 6 00:24:52.091782 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 6 00:24:52.103733 ignition[1037]: INFO : Ignition 2.22.0 Nov 6 00:24:52.105192 ignition[1037]: INFO : Stage: mount Nov 6 00:24:52.105192 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 6 00:24:52.105192 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 6 00:24:52.105192 ignition[1037]: INFO : mount: mount passed Nov 6 00:24:52.105192 ignition[1037]: INFO : Ignition finished successfully Nov 6 00:24:52.109482 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 6 00:24:52.113195 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 6 00:24:52.126962 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Nov 6 00:24:52.144892 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:12) scanned by mount (1047) Nov 6 00:24:52.147156 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem ca2bb832-66d5-4dca-a6d2-cbf7440d9381 Nov 6 00:24:52.147192 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Nov 6 00:24:52.152035 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Nov 6 00:24:52.152075 kernel: BTRFS info (device nvme0n1p6): turning on async discard Nov 6 00:24:52.153360 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Nov 6 00:24:52.154614 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 6 00:24:52.176935 ignition[1064]: INFO : Ignition 2.22.0 Nov 6 00:24:52.176935 ignition[1064]: INFO : Stage: files Nov 6 00:24:52.180935 ignition[1064]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 6 00:24:52.180935 ignition[1064]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 6 00:24:52.180935 ignition[1064]: DEBUG : files: compiled without relabeling support, skipping Nov 6 00:24:52.192299 ignition[1064]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 6 00:24:52.192299 ignition[1064]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 6 00:24:52.247026 ignition[1064]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 6 00:24:52.249181 ignition[1064]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 6 00:24:52.249181 ignition[1064]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 6 00:24:52.247339 unknown[1064]: wrote ssh authorized keys file for user: core Nov 6 00:24:52.330282 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 6 00:24:52.333949 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Nov 6 00:24:52.366429 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 6 00:24:52.459375 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 6 00:24:52.463986 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 6 00:24:52.463986 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 6 00:24:52.463986 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 6 00:24:52.463986 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 6 00:24:52.463986 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 6 00:24:52.463986 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 6 00:24:52.463986 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 6 00:24:52.463986 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file 
"/sysroot/home/core/nfs-pvc.yaml" Nov 6 00:24:52.486325 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 6 00:24:52.486325 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 6 00:24:52.486325 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 6 00:24:52.486325 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 6 00:24:52.486325 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 6 00:24:52.486325 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Nov 6 00:24:52.807219 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 6 00:24:53.835438 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 6 00:24:53.835438 ignition[1064]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 6 00:24:53.879803 ignition[1064]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 6 00:24:53.888578 ignition[1064]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 6 00:24:53.888578 ignition[1064]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 6 00:24:53.888578 ignition[1064]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Nov 6 00:24:53.901007 ignition[1064]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Nov 6 00:24:53.901007 ignition[1064]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 6 00:24:53.901007 ignition[1064]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 6 00:24:53.901007 ignition[1064]: INFO : files: files passed Nov 6 00:24:53.901007 ignition[1064]: INFO : Ignition finished successfully Nov 6 00:24:53.894662 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 6 00:24:53.899522 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 6 00:24:53.911447 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 6 00:24:53.919756 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 6 00:24:53.939957 initrd-setup-root-after-ignition[1093]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 6 00:24:53.939957 initrd-setup-root-after-ignition[1093]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 6 00:24:53.919963 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Nov 6 00:24:53.948993 initrd-setup-root-after-ignition[1098]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 6 00:24:53.932023 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 6 00:24:53.935271 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 6 00:24:53.939990 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 6 00:24:53.976326 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 6 00:24:53.976413 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 6 00:24:53.981063 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 6 00:24:53.983993 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 6 00:24:53.985464 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 6 00:24:53.986982 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 6 00:24:54.001116 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 6 00:24:54.005301 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 6 00:24:54.021713 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 6 00:24:54.022340 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 6 00:24:54.022486 systemd[1]: Stopped target timers.target - Timer Units. Nov 6 00:24:54.022750 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 6 00:24:54.022840 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 6 00:24:54.028422 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 6 00:24:54.033051 systemd[1]: Stopped target basic.target - Basic System. Nov 6 00:24:54.035096 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 6 00:24:54.039023 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 6 00:24:54.042036 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 6 00:24:54.046016 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Nov 6 00:24:54.050027 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 6 00:24:54.053505 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 6 00:24:54.058054 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 6 00:24:54.060589 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 6 00:24:54.065325 systemd[1]: Stopped target swap.target - Swaps. Nov 6 00:24:54.068711 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 6 00:24:54.070610 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 6 00:24:54.082217 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 6 00:24:54.085661 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 6 00:24:54.088801 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 6 00:24:54.089792 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 6 00:24:54.093002 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 6 00:24:54.093117 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Nov 6 00:24:54.097231 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 6 00:24:54.097368 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 6 00:24:54.111053 systemd[1]: ignition-files.service: Deactivated successfully. Nov 6 00:24:54.111167 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 6 00:24:54.115063 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Nov 6 00:24:54.115186 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 6 00:24:54.121771 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 6 00:24:54.125386 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 6 00:24:54.125540 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 6 00:24:54.135115 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 6 00:24:54.144949 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 6 00:24:54.145081 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 6 00:24:54.152396 ignition[1118]: INFO : Ignition 2.22.0 Nov 6 00:24:54.152396 ignition[1118]: INFO : Stage: umount Nov 6 00:24:54.152396 ignition[1118]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 6 00:24:54.152396 ignition[1118]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 6 00:24:54.163977 ignition[1118]: INFO : umount: umount passed Nov 6 00:24:54.163977 ignition[1118]: INFO : Ignition finished successfully Nov 6 00:24:54.155467 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 6 00:24:54.155564 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 6 00:24:54.167148 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 6 00:24:54.167234 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 6 00:24:54.171688 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 6 00:24:54.172909 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 6 00:24:54.178016 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 6 00:24:54.178057 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 6 00:24:54.202951 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 6 00:24:54.202992 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Nov 6 00:24:54.206940 systemd[1]: Stopped target network.target - Network. Nov 6 00:24:54.209662 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 6 00:24:54.209705 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 6 00:24:54.212944 systemd[1]: Stopped target paths.target - Path Units. Nov 6 00:24:54.216921 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 6 00:24:54.219920 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 6 00:24:54.223651 systemd[1]: Stopped target slices.target - Slice Units. Nov 6 00:24:54.225029 systemd[1]: Stopped target sockets.target - Socket Units. Nov 6 00:24:54.226837 systemd[1]: iscsid.socket: Deactivated successfully. Nov 6 00:24:54.226871 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 6 00:24:54.229076 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 6 00:24:54.229102 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Nov 6 00:24:54.231474 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 6 00:24:54.231508 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 6 00:24:54.238968 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 6 00:24:54.239007 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 6 00:24:54.242548 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 6 00:24:54.245957 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 6 00:24:54.248326 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 6 00:24:54.248820 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 6 00:24:54.248913 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 6 00:24:54.255830 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 6 00:24:54.256392 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 6 00:24:54.262804 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Nov 6 00:24:54.263000 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 6 00:24:54.263080 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 6 00:24:54.272735 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Nov 6 00:24:54.272945 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 6 00:24:54.273015 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 6 00:24:54.292718 systemd[1]: Stopped target network-pre.target - Preparation for Network. Nov 6 00:24:54.297001 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 6 00:24:54.297032 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 6 00:24:54.299949 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 6 00:24:54.299998 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 6 00:24:54.304492 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 6 00:24:54.307531 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 6 00:24:54.307585 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 6 00:24:54.312813 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 6 00:24:54.312858 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 6 00:24:54.315542 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 6 00:24:54.316673 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 6 00:24:54.330422 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 6 00:24:54.330473 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 6 00:24:54.336956 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 6 00:24:54.339847 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Nov 6 00:24:54.339920 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Nov 6 00:24:54.353926 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d72f09f eth0: Data path switched from VF: enP30832s1 Nov 6 00:24:54.354519 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Nov 6 00:24:54.356389 systemd[1]: network-cleanup.service: Deactivated successfully. 
Nov 6 00:24:54.357639 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 6 00:24:54.360331 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 6 00:24:54.361535 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 6 00:24:54.366995 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 6 00:24:54.367043 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 6 00:24:54.368538 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 6 00:24:54.368563 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 6 00:24:54.372161 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 6 00:24:54.372198 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 6 00:24:54.375196 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 6 00:24:54.375232 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 6 00:24:54.376442 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 6 00:24:54.376474 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 6 00:24:54.377952 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 6 00:24:54.378086 systemd[1]: systemd-network-generator.service: Deactivated successfully. Nov 6 00:24:54.378124 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Nov 6 00:24:54.385072 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 6 00:24:54.385144 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 6 00:24:54.390933 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 6 00:24:54.390978 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 00:24:54.397821 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Nov 6 00:24:54.397853 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Nov 6 00:24:54.397876 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Nov 6 00:24:54.398096 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 6 00:24:54.398148 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 6 00:24:54.400271 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 6 00:24:54.433376 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 6 00:24:54.457107 systemd[1]: Switching root. Nov 6 00:24:54.584202 systemd-journald[185]: Journal stopped Nov 6 00:25:02.141540 systemd-journald[185]: Received SIGTERM from PID 1 (systemd). 
Nov 6 00:25:02.141565 kernel: SELinux: policy capability network_peer_controls=1 Nov 6 00:25:02.141578 kernel: SELinux: policy capability open_perms=1 Nov 6 00:25:02.141586 kernel: SELinux: policy capability extended_socket_class=1 Nov 6 00:25:02.141593 kernel: SELinux: policy capability always_check_network=0 Nov 6 00:25:02.141600 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 6 00:25:02.141608 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 6 00:25:02.141616 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 6 00:25:02.141625 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 6 00:25:02.141632 kernel: SELinux: policy capability userspace_initial_context=0 Nov 6 00:25:02.141639 kernel: audit: type=1403 audit(1762388695.674:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 6 00:25:02.141648 systemd[1]: Successfully loaded SELinux policy in 172.150ms. Nov 6 00:25:02.141657 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 3.961ms. Nov 6 00:25:02.141667 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 6 00:25:02.141680 systemd[1]: Detected virtualization microsoft. Nov 6 00:25:02.141688 systemd[1]: Detected architecture x86-64. Nov 6 00:25:02.141696 systemd[1]: Detected first boot. Nov 6 00:25:02.141704 systemd[1]: Hostname set to . Nov 6 00:25:02.141712 systemd[1]: Initializing machine ID from random generator. Nov 6 00:25:02.141720 zram_generator::config[1162]: No configuration found. Nov 6 00:25:02.141731 kernel: Guest personality initialized and is inactive Nov 6 00:25:02.141738 kernel: VMCI host device registered (name=vmci, major=10, minor=124) Nov 6 00:25:02.141745 kernel: Initialized host personality Nov 6 00:25:02.141753 kernel: NET: Registered PF_VSOCK protocol family Nov 6 00:25:02.141761 systemd[1]: Populated /etc with preset unit settings. Nov 6 00:25:02.141770 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Nov 6 00:25:02.141778 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 6 00:25:02.141787 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 6 00:25:02.141795 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 6 00:25:02.141804 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 6 00:25:02.141812 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 6 00:25:02.141820 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 6 00:25:02.141828 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 6 00:25:02.141837 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 6 00:25:02.145948 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 6 00:25:02.145987 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 6 00:25:02.145999 systemd[1]: Created slice user.slice - User and Session Slice. Nov 6 00:25:02.146011 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Nov 6 00:25:02.146023 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 6 00:25:02.146035 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 6 00:25:02.146050 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 6 00:25:02.146063 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 6 00:25:02.146075 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 6 00:25:02.146088 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 6 00:25:02.146100 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 6 00:25:02.146112 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 6 00:25:02.146124 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 6 00:25:02.146135 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 6 00:25:02.146147 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 6 00:25:02.146159 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 6 00:25:02.146172 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 6 00:25:02.146184 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 6 00:25:02.146196 systemd[1]: Reached target slices.target - Slice Units. Nov 6 00:25:02.146207 systemd[1]: Reached target swap.target - Swaps. Nov 6 00:25:02.146222 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 6 00:25:02.146234 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 6 00:25:02.146248 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Nov 6 00:25:02.146260 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 6 00:25:02.146272 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 6 00:25:02.146283 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 6 00:25:02.146295 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 6 00:25:02.146306 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 6 00:25:02.146317 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 6 00:25:02.146330 systemd[1]: Mounting media.mount - External Media Directory... Nov 6 00:25:02.146343 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 00:25:02.146360 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 6 00:25:02.146370 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 6 00:25:02.146383 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 6 00:25:02.146397 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 6 00:25:02.146408 systemd[1]: Reached target machines.target - Containers. Nov 6 00:25:02.146419 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Nov 6 00:25:02.146432 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 6 00:25:02.146444 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 6 00:25:02.146455 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 6 00:25:02.146466 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 6 00:25:02.146479 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 6 00:25:02.146490 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 6 00:25:02.146501 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 6 00:25:02.146512 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 6 00:25:02.146524 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 6 00:25:02.146536 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 6 00:25:02.146548 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 6 00:25:02.146559 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 6 00:25:02.146570 systemd[1]: Stopped systemd-fsck-usr.service. Nov 6 00:25:02.146581 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 6 00:25:02.146593 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 6 00:25:02.146603 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 6 00:25:02.146615 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 6 00:25:02.146628 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 6 00:25:02.146664 systemd-journald[1245]: Collecting audit messages is disabled. Nov 6 00:25:02.146689 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Nov 6 00:25:02.146701 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 6 00:25:02.146714 kernel: loop: module loaded Nov 6 00:25:02.146724 systemd[1]: verity-setup.service: Deactivated successfully. Nov 6 00:25:02.146736 systemd[1]: Stopped verity-setup.service. Nov 6 00:25:02.146748 systemd-journald[1245]: Journal started Nov 6 00:25:02.146773 systemd-journald[1245]: Runtime Journal (/run/log/journal/eab9ab15b2ce46a79927ff3480fa523d) is 8M, max 158.6M, 150.6M free. Nov 6 00:25:01.678793 systemd[1]: Queued start job for default target multi-user.target. Nov 6 00:25:01.683346 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Nov 6 00:25:01.683660 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 6 00:25:02.154778 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 00:25:02.160187 systemd[1]: Started systemd-journald.service - Journal Service. Nov 6 00:25:02.164334 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 6 00:25:02.165667 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. 
Nov 6 00:25:02.171604 systemd[1]: Mounted media.mount - External Media Directory. Nov 6 00:25:02.174794 kernel: fuse: init (API version 7.41) Nov 6 00:25:02.176062 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 6 00:25:02.179013 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 6 00:25:02.180409 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 6 00:25:02.184159 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 6 00:25:02.185540 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 6 00:25:02.188190 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 6 00:25:02.188344 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 6 00:25:02.191108 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 6 00:25:02.191238 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 6 00:25:02.194161 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 6 00:25:02.194313 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 6 00:25:02.197154 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 6 00:25:02.197324 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 6 00:25:02.200205 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 6 00:25:02.200338 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 6 00:25:02.203113 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 6 00:25:02.206180 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 6 00:25:02.209219 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 6 00:25:02.217959 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 6 00:25:02.221565 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 6 00:25:02.235911 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 6 00:25:02.237425 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 6 00:25:02.237503 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 6 00:25:02.240729 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Nov 6 00:25:02.248997 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 6 00:25:02.281488 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 6 00:25:02.310113 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 6 00:25:02.314996 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 6 00:25:02.316429 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 6 00:25:02.319000 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 6 00:25:02.321459 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 6 00:25:02.322307 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Nov 6 00:25:02.326996 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 6 00:25:02.330943 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 6 00:25:02.336932 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Nov 6 00:25:02.339960 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 6 00:25:02.342385 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 6 00:25:02.345116 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 6 00:25:02.363776 systemd-journald[1245]: Time spent on flushing to /var/log/journal/eab9ab15b2ce46a79927ff3480fa523d is 20.718ms for 982 entries. Nov 6 00:25:02.363776 systemd-journald[1245]: System Journal (/var/log/journal/eab9ab15b2ce46a79927ff3480fa523d) is 8M, max 2.6G, 2.6G free. Nov 6 00:25:02.489011 systemd-journald[1245]: Received client request to flush runtime journal. Nov 6 00:25:02.489051 kernel: ACPI: bus type drm_connector registered Nov 6 00:25:02.489068 kernel: loop0: detected capacity change from 0 to 128016 Nov 6 00:25:02.373809 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 6 00:25:02.373951 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 6 00:25:02.447948 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 6 00:25:02.450421 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 6 00:25:02.453983 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Nov 6 00:25:02.469152 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 6 00:25:02.490587 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 6 00:25:02.557838 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Nov 6 00:25:02.684956 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 6 00:25:02.984910 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 6 00:25:03.016903 kernel: loop1: detected capacity change from 0 to 219144 Nov 6 00:25:03.072898 kernel: loop2: detected capacity change from 0 to 27936 Nov 6 00:25:03.094696 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 6 00:25:03.098971 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 6 00:25:03.178808 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 6 00:25:03.379056 systemd-tmpfiles[1321]: ACLs are not supported, ignoring. Nov 6 00:25:03.379071 systemd-tmpfiles[1321]: ACLs are not supported, ignoring. Nov 6 00:25:03.381622 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 6 00:25:03.384707 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 6 00:25:03.410055 systemd-udevd[1325]: Using default interface naming scheme 'v255'. 
Nov 6 00:25:03.609903 kernel: loop3: detected capacity change from 0 to 110984 Nov 6 00:25:04.096918 kernel: loop4: detected capacity change from 0 to 128016 Nov 6 00:25:04.107908 kernel: loop5: detected capacity change from 0 to 219144 Nov 6 00:25:04.125896 kernel: loop6: detected capacity change from 0 to 27936 Nov 6 00:25:04.143899 kernel: loop7: detected capacity change from 0 to 110984 Nov 6 00:25:04.155232 (sd-merge)[1328]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Nov 6 00:25:04.155597 (sd-merge)[1328]: Merged extensions into '/usr'. Nov 6 00:25:04.158796 systemd[1]: Reload requested from client PID 1301 ('systemd-sysext') (unit systemd-sysext.service)... Nov 6 00:25:04.158808 systemd[1]: Reloading... Nov 6 00:25:04.237090 zram_generator::config[1376]: No configuration found. Nov 6 00:25:04.430905 kernel: mousedev: PS/2 mouse device common for all mice Nov 6 00:25:04.441903 kernel: hv_vmbus: registering driver hyperv_fb Nov 6 00:25:04.464904 kernel: hv_vmbus: registering driver hv_balloon Nov 6 00:25:04.474899 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Nov 6 00:25:04.474945 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Nov 6 00:25:04.476902 kernel: Console: switching to colour dummy device 80x25 Nov 6 00:25:04.481291 kernel: Console: switching to colour frame buffer device 128x48 Nov 6 00:25:04.513206 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Nov 6 00:25:04.539457 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#74 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Nov 6 00:25:04.640440 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 6 00:25:04.640988 systemd[1]: Reloading finished in 481 ms. Nov 6 00:25:04.676155 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 6 00:25:04.678599 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 6 00:25:04.717836 systemd[1]: Starting ensure-sysext.service... Nov 6 00:25:04.722124 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 6 00:25:04.726087 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 6 00:25:04.734195 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 6 00:25:04.756967 systemd[1]: Reload requested from client PID 1485 ('systemctl') (unit ensure-sysext.service)... Nov 6 00:25:04.756986 systemd[1]: Reloading... Nov 6 00:25:04.762959 systemd-tmpfiles[1489]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Nov 6 00:25:04.763218 systemd-tmpfiles[1489]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Nov 6 00:25:04.763496 systemd-tmpfiles[1489]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 6 00:25:04.764155 systemd-tmpfiles[1489]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 6 00:25:04.766871 systemd-tmpfiles[1489]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 6 00:25:04.767211 systemd-tmpfiles[1489]: ACLs are not supported, ignoring. Nov 6 00:25:04.767290 systemd-tmpfiles[1489]: ACLs are not supported, ignoring. 
Nov 6 00:25:04.789022 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Nov 6 00:25:04.814951 systemd-tmpfiles[1489]: Detected autofs mount point /boot during canonicalization of boot. Nov 6 00:25:04.815039 systemd-tmpfiles[1489]: Skipping /boot Nov 6 00:25:04.822782 systemd-tmpfiles[1489]: Detected autofs mount point /boot during canonicalization of boot. Nov 6 00:25:04.823026 systemd-tmpfiles[1489]: Skipping /boot Nov 6 00:25:04.829986 zram_generator::config[1525]: No configuration found. Nov 6 00:25:04.994686 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Nov 6 00:25:04.995604 systemd[1]: Reloading finished in 238 ms. Nov 6 00:25:05.020064 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 6 00:25:05.036152 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 00:25:05.037128 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 6 00:25:05.065993 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 6 00:25:05.068359 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 6 00:25:05.075376 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 6 00:25:05.077527 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 6 00:25:05.079560 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 6 00:25:05.080114 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 6 00:25:05.081757 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 6 00:25:05.082911 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 6 00:25:05.085307 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 6 00:25:05.088556 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 6 00:25:05.091966 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 6 00:25:05.100171 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 6 00:25:05.101343 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 00:25:05.105402 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 6 00:25:05.105940 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 00:25:05.110955 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Nov 6 00:25:05.125876 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 6 00:25:05.126065 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 6 00:25:05.128347 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 6 00:25:05.128477 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 6 00:25:05.130797 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Nov 6 00:25:05.130978 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 6 00:25:05.136556 systemd[1]: Finished ensure-sysext.service. Nov 6 00:25:05.141791 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 00:25:05.141979 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 6 00:25:05.143769 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 6 00:25:05.146012 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 6 00:25:05.146048 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 6 00:25:05.146092 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 6 00:25:05.146130 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 6 00:25:05.146160 systemd[1]: Reached target time-set.target - System Time Set. Nov 6 00:25:05.153568 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 6 00:25:05.156284 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 00:25:05.156628 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 6 00:25:05.156777 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 6 00:25:05.168087 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 6 00:25:05.172662 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 6 00:25:05.204109 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 6 00:25:05.267719 systemd-resolved[1589]: Positive Trust Anchors: Nov 6 00:25:05.267732 systemd-resolved[1589]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 6 00:25:05.267762 systemd-resolved[1589]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 6 00:25:05.287705 systemd-networkd[1488]: lo: Link UP Nov 6 00:25:05.287711 systemd-networkd[1488]: lo: Gained carrier Nov 6 00:25:05.288675 systemd-networkd[1488]: Enumeration completed Nov 6 00:25:05.291904 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Nov 6 00:25:05.288827 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 6 00:25:05.289001 systemd-networkd[1488]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Nov 6 00:25:05.289005 systemd-networkd[1488]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 6 00:25:05.292464 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Nov 6 00:25:05.296995 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 6 00:25:05.342701 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Nov 6 00:25:05.342874 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d72f09f eth0: Data path switched to VF: enP30832s1 Nov 6 00:25:05.299137 systemd-resolved[1589]: Using system hostname 'ci-4459.1.0-n-3bced53249'. Nov 6 00:25:05.303924 systemd-networkd[1488]: enP30832s1: Link UP Nov 6 00:25:05.304006 systemd-networkd[1488]: eth0: Link UP Nov 6 00:25:05.304009 systemd-networkd[1488]: eth0: Gained carrier Nov 6 00:25:05.304028 systemd-networkd[1488]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 6 00:25:05.308132 systemd-networkd[1488]: enP30832s1: Gained carrier Nov 6 00:25:05.317583 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 6 00:25:05.318127 systemd[1]: Reached target network.target - Network. Nov 6 00:25:05.318387 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 6 00:25:05.320999 systemd-networkd[1488]: eth0: DHCPv4 address 10.200.8.20/24, gateway 10.200.8.1 acquired from 168.63.129.16 Nov 6 00:25:05.339789 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Nov 6 00:25:05.392855 augenrules[1634]: No rules Nov 6 00:25:05.393665 systemd[1]: audit-rules.service: Deactivated successfully. Nov 6 00:25:05.393855 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 6 00:25:05.441276 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 6 00:25:06.375597 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 00:25:06.497033 systemd-networkd[1488]: eth0: Gained IPv6LL Nov 6 00:25:06.499079 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 6 00:25:06.502120 systemd[1]: Reached target network-online.target - Network is Online. Nov 6 00:25:07.823065 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 6 00:25:07.825102 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 6 00:25:10.669380 ldconfig[1296]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 6 00:25:10.678981 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 6 00:25:10.681633 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 6 00:25:10.704433 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 6 00:25:10.708140 systemd[1]: Reached target sysinit.target - System Initialization. Nov 6 00:25:10.709512 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 6 00:25:10.711412 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
Nov 6 00:25:10.714924 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Nov 6 00:25:10.716604 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 6 00:25:10.719997 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 6 00:25:10.723934 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 6 00:25:10.727927 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 6 00:25:10.727957 systemd[1]: Reached target paths.target - Path Units. Nov 6 00:25:10.738198 systemd[1]: Reached target timers.target - Timer Units. Nov 6 00:25:10.771875 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 6 00:25:10.774443 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 6 00:25:10.778347 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 6 00:25:10.781047 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 6 00:25:10.783928 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 6 00:25:10.788203 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 6 00:25:10.791152 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 6 00:25:10.794365 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 6 00:25:10.796060 systemd[1]: Reached target sockets.target - Socket Units. Nov 6 00:25:10.798944 systemd[1]: Reached target basic.target - Basic System. Nov 6 00:25:10.799907 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 6 00:25:10.799926 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 6 00:25:10.830830 systemd[1]: Starting chronyd.service - NTP client/server... Nov 6 00:25:10.851442 systemd[1]: Starting containerd.service - containerd container runtime... Nov 6 00:25:10.857618 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 6 00:25:10.862377 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 6 00:25:10.868386 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 6 00:25:10.871249 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 6 00:25:10.875052 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 6 00:25:10.879149 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 6 00:25:10.885002 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Nov 6 00:25:10.885637 systemd[1]: hv_fcopy_uio_daemon.service - Hyper-V FCOPY UIO daemon was skipped because of an unmet condition check (ConditionPathExists=/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio). Nov 6 00:25:10.896006 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Nov 6 00:25:10.898651 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Nov 6 00:25:10.899973 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Nov 6 00:25:10.908502 jq[1659]: false Nov 6 00:25:10.910118 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 6 00:25:10.913749 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 6 00:25:10.920577 KVP[1662]: KVP starting; pid is:1662 Nov 6 00:25:10.923486 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 6 00:25:10.924410 KVP[1662]: KVP LIC Version: 3.1 Nov 6 00:25:10.924902 kernel: hv_utils: KVP IC version 4.0 Nov 6 00:25:10.925830 chronyd[1651]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Nov 6 00:25:10.928777 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 6 00:25:10.933334 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 6 00:25:10.940985 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 6 00:25:10.944034 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 6 00:25:10.944421 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 6 00:25:10.947552 systemd[1]: Starting update-engine.service - Update Engine... Nov 6 00:25:10.951952 extend-filesystems[1660]: Found /dev/nvme0n1p6 Nov 6 00:25:10.954529 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 6 00:25:10.962113 google_oslogin_nss_cache[1661]: oslogin_cache_refresh[1661]: Refreshing passwd entry cache Nov 6 00:25:10.960848 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 6 00:25:10.958851 oslogin_cache_refresh[1661]: Refreshing passwd entry cache Nov 6 00:25:10.963740 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 6 00:25:10.963940 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 6 00:25:10.968206 systemd[1]: motdgen.service: Deactivated successfully. Nov 6 00:25:10.970111 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 6 00:25:10.974167 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 6 00:25:10.975781 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 6 00:25:10.981354 chronyd[1651]: Timezone right/UTC failed leap second check, ignoring Nov 6 00:25:10.981758 systemd[1]: Started chronyd.service - NTP client/server. Nov 6 00:25:10.981499 chronyd[1651]: Loaded seccomp filter (level 2) Nov 6 00:25:10.987343 google_oslogin_nss_cache[1661]: oslogin_cache_refresh[1661]: Failure getting users, quitting Nov 6 00:25:10.987343 google_oslogin_nss_cache[1661]: oslogin_cache_refresh[1661]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 6 00:25:10.987318 oslogin_cache_refresh[1661]: Failure getting users, quitting Nov 6 00:25:10.987333 oslogin_cache_refresh[1661]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Nov 6 00:25:10.987538 jq[1681]: true Nov 6 00:25:10.993907 google_oslogin_nss_cache[1661]: oslogin_cache_refresh[1661]: Refreshing group entry cache Nov 6 00:25:10.990868 oslogin_cache_refresh[1661]: Refreshing group entry cache Nov 6 00:25:10.994532 (ntainerd)[1686]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 6 00:25:11.003193 extend-filesystems[1660]: Found /dev/nvme0n1p9 Nov 6 00:25:11.008426 extend-filesystems[1660]: Checking size of /dev/nvme0n1p9 Nov 6 00:25:11.022123 google_oslogin_nss_cache[1661]: oslogin_cache_refresh[1661]: Failure getting groups, quitting Nov 6 00:25:11.022123 google_oslogin_nss_cache[1661]: oslogin_cache_refresh[1661]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 6 00:25:11.022227 jq[1687]: true Nov 6 00:25:11.020675 oslogin_cache_refresh[1661]: Failure getting groups, quitting Nov 6 00:25:11.020684 oslogin_cache_refresh[1661]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 6 00:25:11.022589 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Nov 6 00:25:11.022795 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Nov 6 00:25:11.049134 tar[1684]: linux-amd64/LICENSE Nov 6 00:25:11.049134 tar[1684]: linux-amd64/helm Nov 6 00:25:11.053119 update_engine[1679]: I20251106 00:25:11.053046 1679 main.cc:92] Flatcar Update Engine starting Nov 6 00:25:11.060677 extend-filesystems[1660]: Old size kept for /dev/nvme0n1p9 Nov 6 00:25:11.061897 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 6 00:25:11.062091 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 6 00:25:11.080532 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 6 00:25:11.105697 systemd-logind[1674]: New seat seat0. Nov 6 00:25:11.109267 systemd-logind[1674]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 6 00:25:11.109382 systemd[1]: Started systemd-logind.service - User Login Management. Nov 6 00:25:11.198134 bash[1731]: Updated "/home/core/.ssh/authorized_keys" Nov 6 00:25:11.200282 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 6 00:25:11.203552 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Nov 6 00:25:11.347016 dbus-daemon[1654]: [system] SELinux support is enabled Nov 6 00:25:11.347136 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 6 00:25:11.353535 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 6 00:25:11.353564 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 6 00:25:11.358988 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 6 00:25:11.359009 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 6 00:25:11.363730 dbus-daemon[1654]: [system] Successfully activated service 'org.freedesktop.systemd1' Nov 6 00:25:11.369543 systemd[1]: Started update-engine.service - Update Engine. 
Nov 6 00:25:11.371420 update_engine[1679]: I20251106 00:25:11.371377 1679 update_check_scheduler.cc:74] Next update check in 7m34s Nov 6 00:25:11.374158 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 6 00:25:11.446633 coreos-metadata[1653]: Nov 06 00:25:11.446 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Nov 6 00:25:11.478794 coreos-metadata[1653]: Nov 06 00:25:11.478 INFO Fetch successful Nov 6 00:25:11.479103 coreos-metadata[1653]: Nov 06 00:25:11.479 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Nov 6 00:25:11.496963 coreos-metadata[1653]: Nov 06 00:25:11.494 INFO Fetch successful Nov 6 00:25:11.496963 coreos-metadata[1653]: Nov 06 00:25:11.496 INFO Fetching http://168.63.129.16/machine/475afdd7-23b1-49eb-a0bc-62842f5c513c/b83eec61%2D0e58%2D4fef%2D9aa8%2Db99aa341ebbd.%5Fci%2D4459.1.0%2Dn%2D3bced53249?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Nov 6 00:25:11.500306 coreos-metadata[1653]: Nov 06 00:25:11.500 INFO Fetch successful Nov 6 00:25:11.500570 coreos-metadata[1653]: Nov 06 00:25:11.500 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Nov 6 00:25:11.510894 coreos-metadata[1653]: Nov 06 00:25:11.510 INFO Fetch successful Nov 6 00:25:11.554428 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 6 00:25:11.557199 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 6 00:25:11.692985 locksmithd[1760]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 6 00:25:11.716087 tar[1684]: linux-amd64/README.md Nov 6 00:25:11.735008 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 6 00:25:11.796139 sshd_keygen[1701]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 6 00:25:11.817779 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 6 00:25:11.821673 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 6 00:25:11.824544 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Nov 6 00:25:11.841547 systemd[1]: issuegen.service: Deactivated successfully. Nov 6 00:25:11.848002 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 6 00:25:11.851538 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 6 00:25:11.855062 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Nov 6 00:25:11.886316 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 6 00:25:11.890362 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 6 00:25:11.896079 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 6 00:25:11.898360 systemd[1]: Reached target getty.target - Login Prompts. 
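The vmSize fetch that coreos-metadata logs above goes to the Azure Instance Metadata Service at 169.254.169.254, using the URL shown verbatim in the log. As a minimal sketch of the same request in Python (assuming the standard IMDS requirement that a "Metadata: true" header accompany the call, which the log itself does not show; the printed VM size is illustrative):

import urllib.request

# Endpoint copied from the coreos-metadata log line above.
URL = ("http://169.254.169.254/metadata/instance/compute/vmSize"
       "?api-version=2017-08-01&format=text")

# IMDS rejects requests without this header (assumption based on the public
# Azure IMDS contract, not something visible in this log).
req = urllib.request.Request(URL, headers={"Metadata": "true"})
with urllib.request.urlopen(req, timeout=5) as resp:
    print(resp.read().decode())  # e.g. "Standard_DS2_v2" (illustrative value)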
Nov 6 00:25:11.942219 containerd[1686]: time="2025-11-06T00:25:11Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Nov 6 00:25:11.942534 containerd[1686]: time="2025-11-06T00:25:11.942506931Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Nov 6 00:25:11.952193 containerd[1686]: time="2025-11-06T00:25:11.952141775Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.553µs" Nov 6 00:25:11.952193 containerd[1686]: time="2025-11-06T00:25:11.952166301Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Nov 6 00:25:11.952193 containerd[1686]: time="2025-11-06T00:25:11.952183720Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Nov 6 00:25:11.952323 containerd[1686]: time="2025-11-06T00:25:11.952294798Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Nov 6 00:25:11.952323 containerd[1686]: time="2025-11-06T00:25:11.952310729Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 6 00:25:11.952364 containerd[1686]: time="2025-11-06T00:25:11.952329200Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 6 00:25:11.952380 containerd[1686]: time="2025-11-06T00:25:11.952372770Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 6 00:25:11.952398 containerd[1686]: time="2025-11-06T00:25:11.952382384Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 6 00:25:11.952555 containerd[1686]: time="2025-11-06T00:25:11.952531577Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 6 00:25:11.952555 containerd[1686]: time="2025-11-06T00:25:11.952545586Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 6 00:25:11.952602 containerd[1686]: time="2025-11-06T00:25:11.952554394Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 6 00:25:11.952602 containerd[1686]: time="2025-11-06T00:25:11.952561413Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Nov 6 00:25:11.952637 containerd[1686]: time="2025-11-06T00:25:11.952615385Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Nov 6 00:25:11.953853 containerd[1686]: time="2025-11-06T00:25:11.952756613Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 6 00:25:11.953853 containerd[1686]: time="2025-11-06T00:25:11.952777893Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 Nov 6 00:25:11.953853 containerd[1686]: time="2025-11-06T00:25:11.952786407Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 6 00:25:11.953853 containerd[1686]: time="2025-11-06T00:25:11.952813306Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Nov 6 00:25:11.953853 containerd[1686]: time="2025-11-06T00:25:11.953002305Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Nov 6 00:25:11.953853 containerd[1686]: time="2025-11-06T00:25:11.953035486Z" level=info msg="metadata content store policy set" policy=shared Nov 6 00:25:11.965750 containerd[1686]: time="2025-11-06T00:25:11.965718983Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Nov 6 00:25:11.965813 containerd[1686]: time="2025-11-06T00:25:11.965776389Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Nov 6 00:25:11.965813 containerd[1686]: time="2025-11-06T00:25:11.965792245Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Nov 6 00:25:11.965813 containerd[1686]: time="2025-11-06T00:25:11.965804616Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Nov 6 00:25:11.965899 containerd[1686]: time="2025-11-06T00:25:11.965817795Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Nov 6 00:25:11.965899 containerd[1686]: time="2025-11-06T00:25:11.965828477Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Nov 6 00:25:11.965899 containerd[1686]: time="2025-11-06T00:25:11.965839564Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Nov 6 00:25:11.965899 containerd[1686]: time="2025-11-06T00:25:11.965851493Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Nov 6 00:25:11.965899 containerd[1686]: time="2025-11-06T00:25:11.965861790Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Nov 6 00:25:11.965899 containerd[1686]: time="2025-11-06T00:25:11.965873044Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Nov 6 00:25:11.965899 containerd[1686]: time="2025-11-06T00:25:11.965895355Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Nov 6 00:25:11.966028 containerd[1686]: time="2025-11-06T00:25:11.965908002Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Nov 6 00:25:11.966028 containerd[1686]: time="2025-11-06T00:25:11.966008218Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Nov 6 00:25:11.966063 containerd[1686]: time="2025-11-06T00:25:11.966026278Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Nov 6 00:25:11.966063 containerd[1686]: time="2025-11-06T00:25:11.966040667Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Nov 6 00:25:11.966063 containerd[1686]: time="2025-11-06T00:25:11.966056059Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff 
type=io.containerd.grpc.v1 Nov 6 00:25:11.966128 containerd[1686]: time="2025-11-06T00:25:11.966067037Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Nov 6 00:25:11.966128 containerd[1686]: time="2025-11-06T00:25:11.966080138Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Nov 6 00:25:11.966128 containerd[1686]: time="2025-11-06T00:25:11.966093344Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Nov 6 00:25:11.966128 containerd[1686]: time="2025-11-06T00:25:11.966103703Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Nov 6 00:25:11.966128 containerd[1686]: time="2025-11-06T00:25:11.966114810Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Nov 6 00:25:11.966128 containerd[1686]: time="2025-11-06T00:25:11.966124848Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Nov 6 00:25:11.966237 containerd[1686]: time="2025-11-06T00:25:11.966135451Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Nov 6 00:25:11.966237 containerd[1686]: time="2025-11-06T00:25:11.966196348Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Nov 6 00:25:11.966237 containerd[1686]: time="2025-11-06T00:25:11.966209276Z" level=info msg="Start snapshots syncer" Nov 6 00:25:11.966237 containerd[1686]: time="2025-11-06T00:25:11.966228320Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Nov 6 00:25:11.968210 containerd[1686]: time="2025-11-06T00:25:11.968154032Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 6 
00:25:11.968380 containerd[1686]: time="2025-11-06T00:25:11.968247551Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 6 00:25:11.968570 containerd[1686]: time="2025-11-06T00:25:11.968465246Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 6 00:25:11.968570 containerd[1686]: time="2025-11-06T00:25:11.968559545Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 6 00:25:11.968628 containerd[1686]: time="2025-11-06T00:25:11.968588592Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 6 00:25:11.968628 containerd[1686]: time="2025-11-06T00:25:11.968599581Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 6 00:25:11.968628 containerd[1686]: time="2025-11-06T00:25:11.968609094Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 6 00:25:11.968628 containerd[1686]: time="2025-11-06T00:25:11.968620018Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 6 00:25:11.968705 containerd[1686]: time="2025-11-06T00:25:11.968629915Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 6 00:25:11.968705 containerd[1686]: time="2025-11-06T00:25:11.968640681Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 6 00:25:11.968705 containerd[1686]: time="2025-11-06T00:25:11.968660690Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 6 00:25:11.968705 containerd[1686]: time="2025-11-06T00:25:11.968670207Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 6 00:25:11.968705 containerd[1686]: time="2025-11-06T00:25:11.968679908Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 6 00:25:11.968705 containerd[1686]: time="2025-11-06T00:25:11.968700128Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 6 00:25:11.968810 containerd[1686]: time="2025-11-06T00:25:11.968711426Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 6 00:25:11.968810 containerd[1686]: time="2025-11-06T00:25:11.968718749Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 6 00:25:11.968810 containerd[1686]: time="2025-11-06T00:25:11.968731686Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 6 00:25:11.968810 containerd[1686]: time="2025-11-06T00:25:11.968739173Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 6 00:25:11.968810 containerd[1686]: time="2025-11-06T00:25:11.968751073Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 6 00:25:11.968810 containerd[1686]: time="2025-11-06T00:25:11.968760854Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 6 00:25:11.968810 
containerd[1686]: time="2025-11-06T00:25:11.968775340Z" level=info msg="runtime interface created" Nov 6 00:25:11.968810 containerd[1686]: time="2025-11-06T00:25:11.968780616Z" level=info msg="created NRI interface" Nov 6 00:25:11.968810 containerd[1686]: time="2025-11-06T00:25:11.968787909Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 6 00:25:11.968810 containerd[1686]: time="2025-11-06T00:25:11.968799022Z" level=info msg="Connect containerd service" Nov 6 00:25:11.969002 containerd[1686]: time="2025-11-06T00:25:11.968821049Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 6 00:25:11.969650 containerd[1686]: time="2025-11-06T00:25:11.969599707Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 6 00:25:12.284586 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:25:12.502380 containerd[1686]: time="2025-11-06T00:25:12.502315375Z" level=info msg="Start subscribing containerd event" Nov 6 00:25:12.502643 containerd[1686]: time="2025-11-06T00:25:12.502483505Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 6 00:25:12.502643 containerd[1686]: time="2025-11-06T00:25:12.502496735Z" level=info msg="Start recovering state" Nov 6 00:25:12.502643 containerd[1686]: time="2025-11-06T00:25:12.502521463Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 6 00:25:12.502643 containerd[1686]: time="2025-11-06T00:25:12.502600707Z" level=info msg="Start event monitor" Nov 6 00:25:12.502643 containerd[1686]: time="2025-11-06T00:25:12.502614664Z" level=info msg="Start cni network conf syncer for default" Nov 6 00:25:12.502643 containerd[1686]: time="2025-11-06T00:25:12.502621766Z" level=info msg="Start streaming server" Nov 6 00:25:12.502643 containerd[1686]: time="2025-11-06T00:25:12.502632132Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 6 00:25:12.504912 containerd[1686]: time="2025-11-06T00:25:12.502805444Z" level=info msg="runtime interface starting up..." Nov 6 00:25:12.504912 containerd[1686]: time="2025-11-06T00:25:12.502813364Z" level=info msg="starting plugins..." Nov 6 00:25:12.504912 containerd[1686]: time="2025-11-06T00:25:12.502824829Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 6 00:25:12.503039 systemd[1]: Started containerd.service - containerd container runtime. Nov 6 00:25:12.505219 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 6 00:25:12.507245 containerd[1686]: time="2025-11-06T00:25:12.502944179Z" level=info msg="containerd successfully booted in 0.561791s" Nov 6 00:25:12.509257 systemd[1]: Startup finished in 2.910s (kernel) + 12.732s (initrd) + 17.005s (userspace) = 32.648s. 
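The startup summary above is self-consistent once rounding is considered: systemd reports 32.648s while the three rounded components sum to 32.647s, which is what you would expect if the total is computed from the raw microsecond timestamps and each figure is rounded for display independently. A quick check:

kernel, initrd, userspace = 2.910, 12.732, 17.005   # values from the log line above
print(f"{kernel + initrd + userspace:.3f}s")        # 32.647s vs. the reported 32.648s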
Nov 6 00:25:12.555740 (kubelet)[1817]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 00:25:13.006021 kubelet[1817]: E1106 00:25:13.005908 1817 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 00:25:13.007923 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 00:25:13.008064 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 00:25:13.008504 systemd[1]: kubelet.service: Consumed 811ms CPU time, 257.5M memory peak. Nov 6 00:25:13.180046 login[1800]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Nov 6 00:25:13.196507 login[1801]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 6 00:25:13.202470 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 6 00:25:13.203552 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 6 00:25:13.209576 systemd-logind[1674]: New session 1 of user core. Nov 6 00:25:13.235178 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 6 00:25:13.239093 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 6 00:25:13.260833 (systemd)[1834]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 6 00:25:13.262752 systemd-logind[1674]: New session c1 of user core. Nov 6 00:25:13.638347 systemd[1834]: Queued start job for default target default.target. Nov 6 00:25:13.646167 systemd[1834]: Created slice app.slice - User Application Slice. Nov 6 00:25:13.646203 systemd[1834]: Reached target paths.target - Paths. Nov 6 00:25:13.646236 systemd[1834]: Reached target timers.target - Timers. Nov 6 00:25:13.647504 systemd[1834]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 6 00:25:13.655534 systemd[1834]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 6 00:25:13.655578 systemd[1834]: Reached target sockets.target - Sockets. Nov 6 00:25:13.655608 systemd[1834]: Reached target basic.target - Basic System. Nov 6 00:25:13.655667 systemd[1834]: Reached target default.target - Main User Target. Nov 6 00:25:13.655690 systemd[1834]: Startup finished in 388ms. Nov 6 00:25:13.655754 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 6 00:25:13.663999 systemd[1]: Started session-1.scope - Session 1 of User core. 
Nov 6 00:25:13.972246 waagent[1797]: 2025-11-06T00:25:13.972146Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Nov 6 00:25:13.983628 waagent[1797]: 2025-11-06T00:25:13.972688Z INFO Daemon Daemon OS: flatcar 4459.1.0 Nov 6 00:25:13.983628 waagent[1797]: 2025-11-06T00:25:13.973053Z INFO Daemon Daemon Python: 3.11.13 Nov 6 00:25:13.983628 waagent[1797]: 2025-11-06T00:25:13.973511Z INFO Daemon Daemon Run daemon Nov 6 00:25:13.983628 waagent[1797]: 2025-11-06T00:25:13.973824Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4459.1.0' Nov 6 00:25:13.983628 waagent[1797]: 2025-11-06T00:25:13.974140Z INFO Daemon Daemon Using waagent for provisioning Nov 6 00:25:13.983628 waagent[1797]: 2025-11-06T00:25:13.974304Z INFO Daemon Daemon Activate resource disk Nov 6 00:25:13.983628 waagent[1797]: 2025-11-06T00:25:13.974530Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Nov 6 00:25:13.983628 waagent[1797]: 2025-11-06T00:25:13.976029Z INFO Daemon Daemon Found device: None Nov 6 00:25:13.983628 waagent[1797]: 2025-11-06T00:25:13.976320Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Nov 6 00:25:13.983628 waagent[1797]: 2025-11-06T00:25:13.976564Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Nov 6 00:25:13.983628 waagent[1797]: 2025-11-06T00:25:13.977084Z INFO Daemon Daemon Clean protocol and wireserver endpoint Nov 6 00:25:13.983628 waagent[1797]: 2025-11-06T00:25:13.977418Z INFO Daemon Daemon Running default provisioning handler Nov 6 00:25:13.988339 waagent[1797]: 2025-11-06T00:25:13.988084Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Nov 6 00:25:13.998755 waagent[1797]: 2025-11-06T00:25:13.998713Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Nov 6 00:25:13.999336 waagent[1797]: 2025-11-06T00:25:13.998990Z INFO Daemon Daemon cloud-init is enabled: False Nov 6 00:25:13.999336 waagent[1797]: 2025-11-06T00:25:13.999084Z INFO Daemon Daemon Copying ovf-env.xml Nov 6 00:25:14.142289 waagent[1797]: 2025-11-06T00:25:14.141004Z INFO Daemon Daemon Successfully mounted dvd Nov 6 00:25:14.165800 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Nov 6 00:25:14.167407 waagent[1797]: 2025-11-06T00:25:14.167359Z INFO Daemon Daemon Detect protocol endpoint Nov 6 00:25:14.168612 waagent[1797]: 2025-11-06T00:25:14.168130Z INFO Daemon Daemon Clean protocol and wireserver endpoint Nov 6 00:25:14.170061 waagent[1797]: 2025-11-06T00:25:14.170035Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Nov 6 00:25:14.171571 waagent[1797]: 2025-11-06T00:25:14.171548Z INFO Daemon Daemon Test for route to 168.63.129.16 Nov 6 00:25:14.172810 waagent[1797]: 2025-11-06T00:25:14.172786Z INFO Daemon Daemon Route to 168.63.129.16 exists Nov 6 00:25:14.173274 waagent[1797]: 2025-11-06T00:25:14.173251Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Nov 6 00:25:14.180341 login[1800]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 6 00:25:14.183759 systemd-logind[1674]: New session 2 of user core. 
Nov 6 00:25:14.186068 waagent[1797]: 2025-11-06T00:25:14.186043Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Nov 6 00:25:14.189907 waagent[1797]: 2025-11-06T00:25:14.186700Z INFO Daemon Daemon Wire protocol version:2012-11-30 Nov 6 00:25:14.189907 waagent[1797]: 2025-11-06T00:25:14.186906Z INFO Daemon Daemon Server preferred version:2015-04-05 Nov 6 00:25:14.188294 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 6 00:25:14.306087 waagent[1797]: 2025-11-06T00:25:14.306003Z INFO Daemon Daemon Initializing goal state during protocol detection Nov 6 00:25:14.307739 waagent[1797]: 2025-11-06T00:25:14.307656Z INFO Daemon Daemon Forcing an update of the goal state. Nov 6 00:25:14.314663 waagent[1797]: 2025-11-06T00:25:14.314634Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Nov 6 00:25:14.330371 waagent[1797]: 2025-11-06T00:25:14.330345Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Nov 6 00:25:14.331824 waagent[1797]: 2025-11-06T00:25:14.331792Z INFO Daemon Nov 6 00:25:14.332517 waagent[1797]: 2025-11-06T00:25:14.332490Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: e1536921-85bf-47d5-b87c-69cae837f0dc eTag: 11695194790821249054 source: Fabric] Nov 6 00:25:14.333371 waagent[1797]: 2025-11-06T00:25:14.333343Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Nov 6 00:25:14.334062 waagent[1797]: 2025-11-06T00:25:14.333675Z INFO Daemon Nov 6 00:25:14.336583 waagent[1797]: 2025-11-06T00:25:14.336336Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Nov 6 00:25:14.347647 waagent[1797]: 2025-11-06T00:25:14.347623Z INFO Daemon Daemon Downloading artifacts profile blob Nov 6 00:25:14.415651 waagent[1797]: 2025-11-06T00:25:14.415603Z INFO Daemon Downloaded certificate {'thumbprint': 'D5EA64DEE6A787FF21B4124103A679E6E1B177E1', 'hasPrivateKey': True} Nov 6 00:25:14.419349 waagent[1797]: 2025-11-06T00:25:14.416571Z INFO Daemon Fetch goal state completed Nov 6 00:25:14.424580 waagent[1797]: 2025-11-06T00:25:14.424515Z INFO Daemon Daemon Starting provisioning Nov 6 00:25:14.425601 waagent[1797]: 2025-11-06T00:25:14.425047Z INFO Daemon Daemon Handle ovf-env.xml. Nov 6 00:25:14.425601 waagent[1797]: 2025-11-06T00:25:14.425351Z INFO Daemon Daemon Set hostname [ci-4459.1.0-n-3bced53249] Nov 6 00:25:14.453721 waagent[1797]: 2025-11-06T00:25:14.453687Z INFO Daemon Daemon Publish hostname [ci-4459.1.0-n-3bced53249] Nov 6 00:25:14.458433 waagent[1797]: 2025-11-06T00:25:14.454294Z INFO Daemon Daemon Examine /proc/net/route for primary interface Nov 6 00:25:14.458433 waagent[1797]: 2025-11-06T00:25:14.454641Z INFO Daemon Daemon Primary interface is [eth0] Nov 6 00:25:14.462301 systemd-networkd[1488]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 6 00:25:14.462308 systemd-networkd[1488]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Nov 6 00:25:14.462328 systemd-networkd[1488]: eth0: DHCP lease lost Nov 6 00:25:14.463171 waagent[1797]: 2025-11-06T00:25:14.463131Z INFO Daemon Daemon Create user account if not exists Nov 6 00:25:14.464572 waagent[1797]: 2025-11-06T00:25:14.463749Z INFO Daemon Daemon User core already exists, skip useradd Nov 6 00:25:14.464572 waagent[1797]: 2025-11-06T00:25:14.464130Z INFO Daemon Daemon Configure sudoer Nov 6 00:25:14.470179 waagent[1797]: 2025-11-06T00:25:14.470134Z INFO Daemon Daemon Configure sshd Nov 6 00:25:14.474290 waagent[1797]: 2025-11-06T00:25:14.474201Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Nov 6 00:25:14.477734 waagent[1797]: 2025-11-06T00:25:14.477695Z INFO Daemon Daemon Deploy ssh public key. Nov 6 00:25:14.479951 systemd-networkd[1488]: eth0: DHCPv4 address 10.200.8.20/24, gateway 10.200.8.1 acquired from 168.63.129.16 Nov 6 00:25:15.593270 waagent[1797]: 2025-11-06T00:25:15.593224Z INFO Daemon Daemon Provisioning complete Nov 6 00:25:15.602620 waagent[1797]: 2025-11-06T00:25:15.602585Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Nov 6 00:25:15.605747 waagent[1797]: 2025-11-06T00:25:15.603230Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Nov 6 00:25:15.605747 waagent[1797]: 2025-11-06T00:25:15.603556Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Nov 6 00:25:15.700155 waagent[1884]: 2025-11-06T00:25:15.700089Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Nov 6 00:25:15.700462 waagent[1884]: 2025-11-06T00:25:15.700185Z INFO ExtHandler ExtHandler OS: flatcar 4459.1.0 Nov 6 00:25:15.700462 waagent[1884]: 2025-11-06T00:25:15.700224Z INFO ExtHandler ExtHandler Python: 3.11.13 Nov 6 00:25:15.700462 waagent[1884]: 2025-11-06T00:25:15.700261Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Nov 6 00:25:15.766226 waagent[1884]: 2025-11-06T00:25:15.766174Z INFO ExtHandler ExtHandler Distro: flatcar-4459.1.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Nov 6 00:25:15.766363 waagent[1884]: 2025-11-06T00:25:15.766338Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Nov 6 00:25:15.766428 waagent[1884]: 2025-11-06T00:25:15.766391Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Nov 6 00:25:15.773991 waagent[1884]: 2025-11-06T00:25:15.773942Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Nov 6 00:25:15.782439 waagent[1884]: 2025-11-06T00:25:15.782410Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Nov 6 00:25:15.782778 waagent[1884]: 2025-11-06T00:25:15.782749Z INFO ExtHandler Nov 6 00:25:15.782813 waagent[1884]: 2025-11-06T00:25:15.782800Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 3a88bdf0-b4a8-4e2d-898c-ac64ad1ccdf2 eTag: 11695194790821249054 source: Fabric] Nov 6 00:25:15.783044 waagent[1884]: 2025-11-06T00:25:15.783023Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Nov 6 00:25:15.783347 waagent[1884]: 2025-11-06T00:25:15.783324Z INFO ExtHandler Nov 6 00:25:15.783385 waagent[1884]: 2025-11-06T00:25:15.783361Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Nov 6 00:25:15.786241 waagent[1884]: 2025-11-06T00:25:15.786212Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Nov 6 00:25:15.853671 waagent[1884]: 2025-11-06T00:25:15.853593Z INFO ExtHandler Downloaded certificate {'thumbprint': 'D5EA64DEE6A787FF21B4124103A679E6E1B177E1', 'hasPrivateKey': True} Nov 6 00:25:15.854005 waagent[1884]: 2025-11-06T00:25:15.853975Z INFO ExtHandler Fetch goal state completed Nov 6 00:25:15.871143 waagent[1884]: 2025-11-06T00:25:15.871100Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.2 1 Jul 2025 (Library: OpenSSL 3.4.2 1 Jul 2025) Nov 6 00:25:15.874813 waagent[1884]: 2025-11-06T00:25:15.874765Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 1884 Nov 6 00:25:15.874922 waagent[1884]: 2025-11-06T00:25:15.874875Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Nov 6 00:25:15.875134 waagent[1884]: 2025-11-06T00:25:15.875112Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Nov 6 00:25:15.876066 waagent[1884]: 2025-11-06T00:25:15.876039Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4459.1.0', '', 'Flatcar Container Linux by Kinvolk'] Nov 6 00:25:15.876333 waagent[1884]: 2025-11-06T00:25:15.876310Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4459.1.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Nov 6 00:25:15.876434 waagent[1884]: 2025-11-06T00:25:15.876414Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Nov 6 00:25:15.876775 waagent[1884]: 2025-11-06T00:25:15.876755Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Nov 6 00:25:15.949589 waagent[1884]: 2025-11-06T00:25:15.949565Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Nov 6 00:25:15.949720 waagent[1884]: 2025-11-06T00:25:15.949700Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Nov 6 00:25:15.954732 waagent[1884]: 2025-11-06T00:25:15.954589Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Nov 6 00:25:15.959176 systemd[1]: Reload requested from client PID 1899 ('systemctl') (unit waagent.service)... Nov 6 00:25:15.959187 systemd[1]: Reloading... Nov 6 00:25:16.029905 zram_generator::config[1947]: No configuration found. Nov 6 00:25:16.186291 systemd[1]: Reloading finished in 226 ms. Nov 6 00:25:16.198035 waagent[1884]: 2025-11-06T00:25:16.196340Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Nov 6 00:25:16.198035 waagent[1884]: 2025-11-06T00:25:16.196499Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Nov 6 00:25:16.233403 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#84 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001 Nov 6 00:25:16.697453 waagent[1884]: 2025-11-06T00:25:16.697382Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. 
Nov 6 00:25:16.697736 waagent[1884]: 2025-11-06T00:25:16.697706Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Nov 6 00:25:16.698469 waagent[1884]: 2025-11-06T00:25:16.698429Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Nov 6 00:25:16.698684 waagent[1884]: 2025-11-06T00:25:16.698470Z INFO ExtHandler ExtHandler Starting env monitor service. Nov 6 00:25:16.698684 waagent[1884]: 2025-11-06T00:25:16.698612Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Nov 6 00:25:16.698983 waagent[1884]: 2025-11-06T00:25:16.698956Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Nov 6 00:25:16.699276 waagent[1884]: 2025-11-06T00:25:16.699248Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Nov 6 00:25:16.699566 waagent[1884]: 2025-11-06T00:25:16.699508Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Nov 6 00:25:16.699601 waagent[1884]: 2025-11-06T00:25:16.699571Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Nov 6 00:25:16.699810 waagent[1884]: 2025-11-06T00:25:16.699785Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Nov 6 00:25:16.699871 waagent[1884]: 2025-11-06T00:25:16.699844Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Nov 6 00:25:16.699937 waagent[1884]: 2025-11-06T00:25:16.699909Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Nov 6 00:25:16.700168 waagent[1884]: 2025-11-06T00:25:16.700148Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Nov 6 00:25:16.700298 waagent[1884]: 2025-11-06T00:25:16.700272Z INFO EnvHandler ExtHandler Configure routes Nov 6 00:25:16.700538 waagent[1884]: 2025-11-06T00:25:16.700323Z INFO EnvHandler ExtHandler Gateway:None Nov 6 00:25:16.700538 waagent[1884]: 2025-11-06T00:25:16.700353Z INFO EnvHandler ExtHandler Routes:None Nov 6 00:25:16.700964 waagent[1884]: 2025-11-06T00:25:16.700929Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Nov 6 00:25:16.700964 waagent[1884]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Nov 6 00:25:16.700964 waagent[1884]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Nov 6 00:25:16.700964 waagent[1884]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Nov 6 00:25:16.700964 waagent[1884]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Nov 6 00:25:16.700964 waagent[1884]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Nov 6 00:25:16.700964 waagent[1884]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Nov 6 00:25:16.701913 waagent[1884]: 2025-11-06T00:25:16.700975Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Nov 6 00:25:16.711798 waagent[1884]: 2025-11-06T00:25:16.711757Z INFO ExtHandler ExtHandler Nov 6 00:25:16.711853 waagent[1884]: 2025-11-06T00:25:16.711822Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 2c756682-946b-453c-9f7e-5099192f6eef correlation 4b5d60bc-d1c0-41a6-9861-e08769c63787 created: 2025-11-06T00:23:58.399834Z] Nov 6 00:25:16.712120 waagent[1884]: 2025-11-06T00:25:16.712093Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Nov 6 00:25:16.712491 waagent[1884]: 2025-11-06T00:25:16.712467Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Nov 6 00:25:16.763251 waagent[1884]: 2025-11-06T00:25:16.763211Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Nov 6 00:25:16.763251 waagent[1884]: Try `iptables -h' or 'iptables --help' for more information.) 
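The routing table that MonitorHandler dumps above comes straight from /proc/net/route, which prints addresses as host-byte-order hex (little-endian on this x86_64 guest). A small sketch (helper name ours) decoding those fields back into the addresses seen elsewhere in the log: the 10.200.8.1 gateway, the 168.63.129.16 wireserver, and the 169.254.169.254 metadata endpoint.

import socket
import struct

def route_hex_to_ip(field: str) -> str:
    """Decode a /proc/net/route address field (host-order hex, little-endian here)."""
    return socket.inet_ntoa(struct.pack("<I", int(field, 16)))

# Destination/gateway pairs copied from the table above.
routes = [
    ("00000000", "0108C80A"),  # default route
    ("0008C80A", "00000000"),  # on-link subnet 10.200.8.0/24
    ("0108C80A", "00000000"),  # host route to the gateway
    ("10813FA8", "0108C80A"),  # Azure wireserver
    ("FEA9FEA9", "0108C80A"),  # instance metadata service
]
for dest, gw in routes:
    print(f"{route_hex_to_ip(dest):<15} via {route_hex_to_ip(gw)}")
# 0.0.0.0         via 10.200.8.1
# 10.200.8.0      via 0.0.0.0
# 10.200.8.1      via 0.0.0.0
# 168.63.129.16   via 10.200.8.1
# 169.254.169.254 via 10.200.8.1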
Nov 6 00:25:16.763710 waagent[1884]: 2025-11-06T00:25:16.763681Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 56C5F561-8D36-4923-A874-34E46D874426;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Nov 6 00:25:16.796292 waagent[1884]: 2025-11-06T00:25:16.796249Z INFO MonitorHandler ExtHandler Network interfaces: Nov 6 00:25:16.796292 waagent[1884]: Executing ['ip', '-a', '-o', 'link']: Nov 6 00:25:16.796292 waagent[1884]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Nov 6 00:25:16.796292 waagent[1884]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:72:f0:9f brd ff:ff:ff:ff:ff:ff\ alias Network Device Nov 6 00:25:16.796292 waagent[1884]: 3: enP30832s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:72:f0:9f brd ff:ff:ff:ff:ff:ff\ altname enP30832p0s0 Nov 6 00:25:16.796292 waagent[1884]: Executing ['ip', '-4', '-a', '-o', 'address']: Nov 6 00:25:16.796292 waagent[1884]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Nov 6 00:25:16.796292 waagent[1884]: 2: eth0 inet 10.200.8.20/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Nov 6 00:25:16.796292 waagent[1884]: Executing ['ip', '-6', '-a', '-o', 'address']: Nov 6 00:25:16.796292 waagent[1884]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Nov 6 00:25:16.796292 waagent[1884]: 2: eth0 inet6 fe80::7eed:8dff:fe72:f09f/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Nov 6 00:25:16.875452 waagent[1884]: 2025-11-06T00:25:16.875406Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Nov 6 00:25:16.875452 waagent[1884]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Nov 6 00:25:16.875452 waagent[1884]: pkts bytes target prot opt in out source destination Nov 6 00:25:16.875452 waagent[1884]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Nov 6 00:25:16.875452 waagent[1884]: pkts bytes target prot opt in out source destination Nov 6 00:25:16.875452 waagent[1884]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Nov 6 00:25:16.875452 waagent[1884]: pkts bytes target prot opt in out source destination Nov 6 00:25:16.875452 waagent[1884]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Nov 6 00:25:16.875452 waagent[1884]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Nov 6 00:25:16.875452 waagent[1884]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Nov 6 00:25:16.877946 waagent[1884]: 2025-11-06T00:25:16.877866Z INFO EnvHandler ExtHandler Current Firewall rules: Nov 6 00:25:16.877946 waagent[1884]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Nov 6 00:25:16.877946 waagent[1884]: pkts bytes target prot opt in out source destination Nov 6 00:25:16.877946 waagent[1884]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Nov 6 00:25:16.877946 waagent[1884]: pkts bytes target prot opt in out source destination Nov 6 00:25:16.877946 waagent[1884]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Nov 6 00:25:16.877946 waagent[1884]: pkts bytes target prot opt in out source destination Nov 6 00:25:16.877946 waagent[1884]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Nov 6 00:25:16.877946 waagent[1884]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 
168.63.129.16 owner UID match 0 Nov 6 00:25:16.877946 waagent[1884]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Nov 6 00:25:23.258799 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 6 00:25:23.260231 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:25:23.779737 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:25:23.788133 (kubelet)[2037]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 00:25:23.816412 kubelet[2037]: E1106 00:25:23.816384 2037 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 00:25:23.818801 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 00:25:23.818958 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 00:25:23.819186 systemd[1]: kubelet.service: Consumed 118ms CPU time, 110.1M memory peak. Nov 6 00:25:33.927064 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 6 00:25:33.928404 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:25:34.425945 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:25:34.428965 (kubelet)[2052]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 00:25:34.459204 kubelet[2052]: E1106 00:25:34.459174 2052 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 00:25:34.460740 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 00:25:34.460873 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 00:25:34.461207 systemd[1]: kubelet.service: Consumed 116ms CPU time, 110.1M memory peak. Nov 6 00:25:34.768219 chronyd[1651]: Selected source PHC0 Nov 6 00:25:44.618723 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Nov 6 00:25:44.619416 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 6 00:25:44.620256 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:25:44.623094 systemd[1]: Started sshd@0-10.200.8.20:22-10.200.16.10:41764.service - OpenSSH per-connection server daemon (10.200.16.10:41764). Nov 6 00:25:45.160729 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 6 00:25:45.165155 (kubelet)[2071]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 00:25:45.194599 kubelet[2071]: E1106 00:25:45.194548 2071 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 00:25:45.195917 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 00:25:45.196028 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 00:25:45.196284 systemd[1]: kubelet.service: Consumed 112ms CPU time, 108.2M memory peak. Nov 6 00:25:45.392456 sshd[2061]: Accepted publickey for core from 10.200.16.10 port 41764 ssh2: RSA SHA256:GgVgzeKdNz/6ufBGEfuPWDEbcS5ymkddUfBaUbl3B1Q Nov 6 00:25:45.393446 sshd-session[2061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:25:45.397015 systemd-logind[1674]: New session 3 of user core. Nov 6 00:25:45.407993 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 6 00:25:45.946492 systemd[1]: Started sshd@1-10.200.8.20:22-10.200.16.10:41772.service - OpenSSH per-connection server daemon (10.200.16.10:41772). Nov 6 00:25:46.577985 sshd[2081]: Accepted publickey for core from 10.200.16.10 port 41772 ssh2: RSA SHA256:GgVgzeKdNz/6ufBGEfuPWDEbcS5ymkddUfBaUbl3B1Q Nov 6 00:25:46.579012 sshd-session[2081]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:25:46.583253 systemd-logind[1674]: New session 4 of user core. Nov 6 00:25:46.589019 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 6 00:25:47.023245 sshd[2084]: Connection closed by 10.200.16.10 port 41772 Nov 6 00:25:47.023810 sshd-session[2081]: pam_unix(sshd:session): session closed for user core Nov 6 00:25:47.026550 systemd[1]: sshd@1-10.200.8.20:22-10.200.16.10:41772.service: Deactivated successfully. Nov 6 00:25:47.028055 systemd[1]: session-4.scope: Deactivated successfully. Nov 6 00:25:47.029494 systemd-logind[1674]: Session 4 logged out. Waiting for processes to exit. Nov 6 00:25:47.030275 systemd-logind[1674]: Removed session 4. Nov 6 00:25:47.140095 systemd[1]: Started sshd@2-10.200.8.20:22-10.200.16.10:41774.service - OpenSSH per-connection server daemon (10.200.16.10:41774). Nov 6 00:25:47.772843 sshd[2090]: Accepted publickey for core from 10.200.16.10 port 41774 ssh2: RSA SHA256:GgVgzeKdNz/6ufBGEfuPWDEbcS5ymkddUfBaUbl3B1Q Nov 6 00:25:47.773834 sshd-session[2090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:25:47.778067 systemd-logind[1674]: New session 5 of user core. Nov 6 00:25:47.784038 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 6 00:25:48.214990 sshd[2093]: Connection closed by 10.200.16.10 port 41774 Nov 6 00:25:48.215485 sshd-session[2090]: pam_unix(sshd:session): session closed for user core Nov 6 00:25:48.218100 systemd[1]: sshd@2-10.200.8.20:22-10.200.16.10:41774.service: Deactivated successfully. Nov 6 00:25:48.219431 systemd[1]: session-5.scope: Deactivated successfully. Nov 6 00:25:48.220540 systemd-logind[1674]: Session 5 logged out. Waiting for processes to exit. Nov 6 00:25:48.221584 systemd-logind[1674]: Removed session 5. 
Nov 6 00:25:48.343245 systemd[1]: Started sshd@3-10.200.8.20:22-10.200.16.10:41786.service - OpenSSH per-connection server daemon (10.200.16.10:41786). Nov 6 00:25:48.973598 sshd[2099]: Accepted publickey for core from 10.200.16.10 port 41786 ssh2: RSA SHA256:GgVgzeKdNz/6ufBGEfuPWDEbcS5ymkddUfBaUbl3B1Q Nov 6 00:25:48.974664 sshd-session[2099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:25:48.978599 systemd-logind[1674]: New session 6 of user core. Nov 6 00:25:48.987001 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 6 00:25:49.418109 sshd[2102]: Connection closed by 10.200.16.10 port 41786 Nov 6 00:25:49.418802 sshd-session[2099]: pam_unix(sshd:session): session closed for user core Nov 6 00:25:49.421394 systemd[1]: sshd@3-10.200.8.20:22-10.200.16.10:41786.service: Deactivated successfully. Nov 6 00:25:49.422844 systemd[1]: session-6.scope: Deactivated successfully. Nov 6 00:25:49.424419 systemd-logind[1674]: Session 6 logged out. Waiting for processes to exit. Nov 6 00:25:49.425155 systemd-logind[1674]: Removed session 6. Nov 6 00:25:49.528278 systemd[1]: Started sshd@4-10.200.8.20:22-10.200.16.10:41800.service - OpenSSH per-connection server daemon (10.200.16.10:41800). Nov 6 00:25:50.164922 sshd[2108]: Accepted publickey for core from 10.200.16.10 port 41800 ssh2: RSA SHA256:GgVgzeKdNz/6ufBGEfuPWDEbcS5ymkddUfBaUbl3B1Q Nov 6 00:25:50.166058 sshd-session[2108]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:25:50.170248 systemd-logind[1674]: New session 7 of user core. Nov 6 00:25:50.175025 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 6 00:25:50.750804 sudo[2112]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 6 00:25:50.751012 sudo[2112]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 00:25:50.774539 sudo[2112]: pam_unix(sudo:session): session closed for user root Nov 6 00:25:50.877007 sshd[2111]: Connection closed by 10.200.16.10 port 41800 Nov 6 00:25:50.877616 sshd-session[2108]: pam_unix(sshd:session): session closed for user core Nov 6 00:25:50.881037 systemd[1]: sshd@4-10.200.8.20:22-10.200.16.10:41800.service: Deactivated successfully. Nov 6 00:25:50.882478 systemd[1]: session-7.scope: Deactivated successfully. Nov 6 00:25:50.883140 systemd-logind[1674]: Session 7 logged out. Waiting for processes to exit. Nov 6 00:25:50.884266 systemd-logind[1674]: Removed session 7. Nov 6 00:25:50.986218 systemd[1]: Started sshd@5-10.200.8.20:22-10.200.16.10:42608.service - OpenSSH per-connection server daemon (10.200.16.10:42608). Nov 6 00:25:51.618442 sshd[2118]: Accepted publickey for core from 10.200.16.10 port 42608 ssh2: RSA SHA256:GgVgzeKdNz/6ufBGEfuPWDEbcS5ymkddUfBaUbl3B1Q Nov 6 00:25:51.620765 sshd-session[2118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:25:51.624922 systemd-logind[1674]: New session 8 of user core. Nov 6 00:25:51.635039 systemd[1]: Started session-8.scope - Session 8 of User core. 
Nov 6 00:25:51.961246 sudo[2123]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 6 00:25:51.961470 sudo[2123]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 00:25:51.967543 sudo[2123]: pam_unix(sudo:session): session closed for user root Nov 6 00:25:51.971201 sudo[2122]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 6 00:25:51.971389 sudo[2122]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 00:25:51.978256 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 6 00:25:52.006391 augenrules[2145]: No rules Nov 6 00:25:52.007272 systemd[1]: audit-rules.service: Deactivated successfully. Nov 6 00:25:52.007426 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 6 00:25:52.008038 sudo[2122]: pam_unix(sudo:session): session closed for user root Nov 6 00:25:52.109611 sshd[2121]: Connection closed by 10.200.16.10 port 42608 Nov 6 00:25:52.110038 sshd-session[2118]: pam_unix(sshd:session): session closed for user core Nov 6 00:25:52.112465 systemd[1]: sshd@5-10.200.8.20:22-10.200.16.10:42608.service: Deactivated successfully. Nov 6 00:25:52.113800 systemd[1]: session-8.scope: Deactivated successfully. Nov 6 00:25:52.115088 systemd-logind[1674]: Session 8 logged out. Waiting for processes to exit. Nov 6 00:25:52.115979 systemd-logind[1674]: Removed session 8. Nov 6 00:25:52.223965 systemd[1]: Started sshd@6-10.200.8.20:22-10.200.16.10:42620.service - OpenSSH per-connection server daemon (10.200.16.10:42620). Nov 6 00:25:52.621227 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Nov 6 00:25:52.855966 sshd[2154]: Accepted publickey for core from 10.200.16.10 port 42620 ssh2: RSA SHA256:GgVgzeKdNz/6ufBGEfuPWDEbcS5ymkddUfBaUbl3B1Q Nov 6 00:25:52.857002 sshd-session[2154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:25:52.861204 systemd-logind[1674]: New session 9 of user core. Nov 6 00:25:52.866044 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 6 00:25:53.200133 sudo[2158]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 6 00:25:53.200334 sudo[2158]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 00:25:54.819134 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 6 00:25:54.828168 (dockerd)[2175]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 6 00:25:55.426787 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Nov 6 00:25:55.428261 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:25:56.007507 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 6 00:25:56.015115 (kubelet)[2188]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 00:25:56.048935 kubelet[2188]: E1106 00:25:56.048871 2188 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 00:25:56.050199 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 00:25:56.050311 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 00:25:56.050575 systemd[1]: kubelet.service: Consumed 119ms CPU time, 109M memory peak. Nov 6 00:25:56.188515 dockerd[2175]: time="2025-11-06T00:25:56.188469578Z" level=info msg="Starting up" Nov 6 00:25:56.189336 dockerd[2175]: time="2025-11-06T00:25:56.189263358Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 6 00:25:56.199440 dockerd[2175]: time="2025-11-06T00:25:56.199399535Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 6 00:25:56.328602 dockerd[2175]: time="2025-11-06T00:25:56.328419113Z" level=info msg="Loading containers: start." Nov 6 00:25:56.381901 kernel: Initializing XFRM netlink socket Nov 6 00:25:56.832739 systemd-networkd[1488]: docker0: Link UP Nov 6 00:25:56.848541 dockerd[2175]: time="2025-11-06T00:25:56.848509784Z" level=info msg="Loading containers: done." Nov 6 00:25:56.860972 update_engine[1679]: I20251106 00:25:56.860929 1679 update_attempter.cc:509] Updating boot flags... Nov 6 00:25:56.926203 dockerd[2175]: time="2025-11-06T00:25:56.924618047Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 6 00:25:56.926203 dockerd[2175]: time="2025-11-06T00:25:56.924716882Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 6 00:25:56.926203 dockerd[2175]: time="2025-11-06T00:25:56.924791676Z" level=info msg="Initializing buildkit" Nov 6 00:25:56.984704 dockerd[2175]: time="2025-11-06T00:25:56.984675288Z" level=info msg="Completed buildkit initialization" Nov 6 00:25:56.990459 dockerd[2175]: time="2025-11-06T00:25:56.990428724Z" level=info msg="Daemon has completed initialization" Nov 6 00:25:56.990541 dockerd[2175]: time="2025-11-06T00:25:56.990476648Z" level=info msg="API listen on /run/docker.sock" Nov 6 00:25:56.990711 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 6 00:25:57.736859 containerd[1686]: time="2025-11-06T00:25:57.736819547Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\"" Nov 6 00:25:58.501358 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1860136492.mount: Deactivated successfully. 
Nov 6 00:25:59.605150 containerd[1686]: time="2025-11-06T00:25:59.605100647Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:25:59.608203 containerd[1686]: time="2025-11-06T00:25:59.608167889Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=27065400" Nov 6 00:25:59.611471 containerd[1686]: time="2025-11-06T00:25:59.611435290Z" level=info msg="ImageCreate event name:\"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:25:59.616071 containerd[1686]: time="2025-11-06T00:25:59.615851397Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:25:59.616501 containerd[1686]: time="2025-11-06T00:25:59.616480427Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"27061991\" in 1.879627845s" Nov 6 00:25:59.616536 containerd[1686]: time="2025-11-06T00:25:59.616514394Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\"" Nov 6 00:25:59.617323 containerd[1686]: time="2025-11-06T00:25:59.617300860Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\"" Nov 6 00:26:00.946800 containerd[1686]: time="2025-11-06T00:26:00.946756683Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:26:00.949508 containerd[1686]: time="2025-11-06T00:26:00.949390821Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.1: active requests=0, bytes read=21159765" Nov 6 00:26:00.952127 containerd[1686]: time="2025-11-06T00:26:00.952104554Z" level=info msg="ImageCreate event name:\"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:26:00.958280 containerd[1686]: time="2025-11-06T00:26:00.957514355Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:26:00.958280 containerd[1686]: time="2025-11-06T00:26:00.958085064Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"22820214\" in 1.340759341s" Nov 6 00:26:00.958280 containerd[1686]: time="2025-11-06T00:26:00.958156418Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\"" Nov 6 00:26:00.958702 containerd[1686]: 
time="2025-11-06T00:26:00.958679499Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\"" Nov 6 00:26:01.899591 containerd[1686]: time="2025-11-06T00:26:01.899549809Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:26:01.901796 containerd[1686]: time="2025-11-06T00:26:01.901766131Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=15725101" Nov 6 00:26:01.904399 containerd[1686]: time="2025-11-06T00:26:01.904362570Z" level=info msg="ImageCreate event name:\"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:26:01.911000 containerd[1686]: time="2025-11-06T00:26:01.910966065Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:26:01.911564 containerd[1686]: time="2025-11-06T00:26:01.911542173Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"17385568\" in 952.842433ms" Nov 6 00:26:01.911598 containerd[1686]: time="2025-11-06T00:26:01.911565444Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\"" Nov 6 00:26:01.912092 containerd[1686]: time="2025-11-06T00:26:01.912065263Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\"" Nov 6 00:26:02.704646 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4286420269.mount: Deactivated successfully. 
Nov 6 00:26:02.945494 containerd[1686]: time="2025-11-06T00:26:02.945453122Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:26:02.947814 containerd[1686]: time="2025-11-06T00:26:02.947784705Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=25964707" Nov 6 00:26:02.950167 containerd[1686]: time="2025-11-06T00:26:02.950132740Z" level=info msg="ImageCreate event name:\"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:26:02.953221 containerd[1686]: time="2025-11-06T00:26:02.953184102Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:26:02.953700 containerd[1686]: time="2025-11-06T00:26:02.953485005Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"25963718\" in 1.041396828s" Nov 6 00:26:02.953700 containerd[1686]: time="2025-11-06T00:26:02.953520397Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\"" Nov 6 00:26:02.953935 containerd[1686]: time="2025-11-06T00:26:02.953918124Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Nov 6 00:26:03.512398 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3182957631.mount: Deactivated successfully. 
Nov 6 00:26:04.357307 containerd[1686]: time="2025-11-06T00:26:04.357264534Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:26:04.360512 containerd[1686]: time="2025-11-06T00:26:04.360488496Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388015" Nov 6 00:26:04.363515 containerd[1686]: time="2025-11-06T00:26:04.363479093Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:26:04.367100 containerd[1686]: time="2025-11-06T00:26:04.367056466Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:26:04.367922 containerd[1686]: time="2025-11-06T00:26:04.367613893Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.413668431s" Nov 6 00:26:04.367922 containerd[1686]: time="2025-11-06T00:26:04.367640781Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Nov 6 00:26:04.368497 containerd[1686]: time="2025-11-06T00:26:04.368471387Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Nov 6 00:26:04.925236 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3814613055.mount: Deactivated successfully. 
Nov 6 00:26:04.943335 containerd[1686]: time="2025-11-06T00:26:04.943297912Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:26:04.945615 containerd[1686]: time="2025-11-06T00:26:04.945581521Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321226" Nov 6 00:26:04.948180 containerd[1686]: time="2025-11-06T00:26:04.948146119Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:26:04.951313 containerd[1686]: time="2025-11-06T00:26:04.951272897Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:26:04.951945 containerd[1686]: time="2025-11-06T00:26:04.951633802Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 583.137508ms" Nov 6 00:26:04.951945 containerd[1686]: time="2025-11-06T00:26:04.951663993Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Nov 6 00:26:04.952243 containerd[1686]: time="2025-11-06T00:26:04.952215102Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Nov 6 00:26:06.176845 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Nov 6 00:26:06.178193 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:26:07.021874 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:26:07.026159 (kubelet)[2593]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 00:26:07.055907 kubelet[2593]: E1106 00:26:07.055853 2593 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 00:26:07.057017 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 00:26:07.057126 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 00:26:07.057510 systemd[1]: kubelet.service: Consumed 119ms CPU time, 110.5M memory peak. 
Nov 6 00:26:08.416792 containerd[1686]: time="2025-11-06T00:26:08.416742897Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:26:08.419206 containerd[1686]: time="2025-11-06T00:26:08.419034239Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=73514601" Nov 6 00:26:08.421651 containerd[1686]: time="2025-11-06T00:26:08.421616175Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:26:08.425376 containerd[1686]: time="2025-11-06T00:26:08.425349411Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:26:08.426521 containerd[1686]: time="2025-11-06T00:26:08.426023152Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 3.473734435s" Nov 6 00:26:08.426521 containerd[1686]: time="2025-11-06T00:26:08.426055107Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Nov 6 00:26:11.256181 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:26:11.256648 systemd[1]: kubelet.service: Consumed 119ms CPU time, 110.5M memory peak. Nov 6 00:26:11.258415 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:26:11.280784 systemd[1]: Reload requested from client PID 2633 ('systemctl') (unit session-9.scope)... Nov 6 00:26:11.280793 systemd[1]: Reloading... Nov 6 00:26:11.385933 zram_generator::config[2687]: No configuration found. Nov 6 00:26:11.551061 systemd[1]: Reloading finished in 269 ms. Nov 6 00:26:11.656377 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 6 00:26:11.656461 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 6 00:26:11.656705 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:26:11.656774 systemd[1]: kubelet.service: Consumed 77ms CPU time, 87.5M memory peak. Nov 6 00:26:11.658582 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:26:12.276007 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:26:12.285138 (kubelet)[2747]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 6 00:26:12.318904 kubelet[2747]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 6 00:26:12.318904 kubelet[2747]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 6 00:26:12.319124 kubelet[2747]: I1106 00:26:12.318943 2747 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 6 00:26:12.736831 kubelet[2747]: I1106 00:26:12.736798 2747 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 6 00:26:12.736831 kubelet[2747]: I1106 00:26:12.736820 2747 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 6 00:26:12.736831 kubelet[2747]: I1106 00:26:12.736842 2747 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 6 00:26:12.737014 kubelet[2747]: I1106 00:26:12.736850 2747 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 6 00:26:12.737079 kubelet[2747]: I1106 00:26:12.737067 2747 server.go:956] "Client rotation is on, will bootstrap in background" Nov 6 00:26:12.745911 kubelet[2747]: E1106 00:26:12.744730 2747 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.8.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.20:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 6 00:26:12.747571 kubelet[2747]: I1106 00:26:12.747551 2747 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 6 00:26:12.750348 kubelet[2747]: I1106 00:26:12.750335 2747 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 6 00:26:12.753164 kubelet[2747]: I1106 00:26:12.753152 2747 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Nov 6 00:26:12.753445 kubelet[2747]: I1106 00:26:12.753433 2747 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 6 00:26:12.753597 kubelet[2747]: I1106 00:26:12.753479 2747 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.1.0-n-3bced53249","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 6 00:26:12.753713 kubelet[2747]: I1106 00:26:12.753708 2747 topology_manager.go:138] "Creating topology manager with none policy" Nov 6 00:26:12.753740 kubelet[2747]: I1106 00:26:12.753737 2747 container_manager_linux.go:306] "Creating device plugin manager" Nov 6 00:26:12.753825 kubelet[2747]: I1106 00:26:12.753821 2747 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 6 00:26:12.760305 kubelet[2747]: I1106 00:26:12.760295 2747 state_mem.go:36] "Initialized new in-memory state store" Nov 6 00:26:12.760494 kubelet[2747]: I1106 00:26:12.760487 2747 kubelet.go:475] "Attempting to sync node with API server" Nov 6 00:26:12.760540 kubelet[2747]: I1106 00:26:12.760535 2747 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 6 00:26:12.760579 kubelet[2747]: I1106 00:26:12.760576 2747 kubelet.go:387] "Adding apiserver pod source" Nov 6 00:26:12.760622 kubelet[2747]: I1106 00:26:12.760618 2747 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 6 00:26:12.763564 kubelet[2747]: E1106 00:26:12.763543 2747 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.8.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.1.0-n-3bced53249&limit=500&resourceVersion=0\": dial tcp 10.200.8.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 6 00:26:12.763712 kubelet[2747]: E1106 00:26:12.763702 2747 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.8.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": 
dial tcp 10.200.8.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 6 00:26:12.764028 kubelet[2747]: I1106 00:26:12.764019 2747 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 6 00:26:12.764487 kubelet[2747]: I1106 00:26:12.764477 2747 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 6 00:26:12.764550 kubelet[2747]: I1106 00:26:12.764545 2747 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 6 00:26:12.764609 kubelet[2747]: W1106 00:26:12.764604 2747 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 6 00:26:12.767614 kubelet[2747]: I1106 00:26:12.767602 2747 server.go:1262] "Started kubelet" Nov 6 00:26:12.768273 kubelet[2747]: I1106 00:26:12.768261 2747 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 6 00:26:12.772199 kubelet[2747]: E1106 00:26:12.770685 2747 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.20:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.20:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.1.0-n-3bced53249.1875434054f2b849 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.1.0-n-3bced53249,UID:ci-4459.1.0-n-3bced53249,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.1.0-n-3bced53249,},FirstTimestamp:2025-11-06 00:26:12.767578185 +0000 UTC m=+0.479881008,LastTimestamp:2025-11-06 00:26:12.767578185 +0000 UTC m=+0.479881008,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.1.0-n-3bced53249,}" Nov 6 00:26:12.772744 kubelet[2747]: I1106 00:26:12.772522 2747 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 6 00:26:12.773744 kubelet[2747]: I1106 00:26:12.773721 2747 server.go:310] "Adding debug handlers to kubelet server" Nov 6 00:26:12.776456 kubelet[2747]: I1106 00:26:12.776427 2747 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 6 00:26:12.776523 kubelet[2747]: I1106 00:26:12.776469 2747 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 6 00:26:12.776912 kubelet[2747]: I1106 00:26:12.776590 2747 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 6 00:26:12.776912 kubelet[2747]: I1106 00:26:12.776786 2747 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 6 00:26:12.778480 kubelet[2747]: I1106 00:26:12.778469 2747 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 6 00:26:12.778701 kubelet[2747]: E1106 00:26:12.778689 2747 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459.1.0-n-3bced53249\" not found" Nov 6 00:26:12.778782 kubelet[2747]: I1106 00:26:12.778776 2747 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 6 
00:26:12.778866 kubelet[2747]: I1106 00:26:12.778861 2747 reconciler.go:29] "Reconciler: start to sync state" Nov 6 00:26:12.779375 kubelet[2747]: E1106 00:26:12.779360 2747 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.8.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 6 00:26:12.781021 kubelet[2747]: E1106 00:26:12.780992 2747 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.1.0-n-3bced53249?timeout=10s\": dial tcp 10.200.8.20:6443: connect: connection refused" interval="200ms" Nov 6 00:26:12.781898 kubelet[2747]: I1106 00:26:12.781434 2747 factory.go:223] Registration of the systemd container factory successfully Nov 6 00:26:12.781898 kubelet[2747]: I1106 00:26:12.781590 2747 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 6 00:26:12.783839 kubelet[2747]: E1106 00:26:12.783820 2747 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 6 00:26:12.783982 kubelet[2747]: I1106 00:26:12.783969 2747 factory.go:223] Registration of the containerd container factory successfully Nov 6 00:26:12.793964 kubelet[2747]: I1106 00:26:12.793947 2747 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 6 00:26:12.793964 kubelet[2747]: I1106 00:26:12.793959 2747 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 6 00:26:12.794168 kubelet[2747]: I1106 00:26:12.794114 2747 state_mem.go:36] "Initialized new in-memory state store" Nov 6 00:26:12.798951 kubelet[2747]: I1106 00:26:12.798936 2747 policy_none.go:49] "None policy: Start" Nov 6 00:26:12.798951 kubelet[2747]: I1106 00:26:12.798949 2747 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 6 00:26:12.799026 kubelet[2747]: I1106 00:26:12.798958 2747 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 6 00:26:12.802995 kubelet[2747]: I1106 00:26:12.802937 2747 policy_none.go:47] "Start" Nov 6 00:26:12.805781 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 6 00:26:12.814538 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 6 00:26:12.818577 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Nov 6 00:26:12.828374 kubelet[2747]: E1106 00:26:12.828307 2747 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 6 00:26:12.828446 kubelet[2747]: I1106 00:26:12.828435 2747 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 6 00:26:12.828611 kubelet[2747]: I1106 00:26:12.828541 2747 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 6 00:26:12.828940 kubelet[2747]: I1106 00:26:12.828814 2747 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 6 00:26:12.829792 kubelet[2747]: E1106 00:26:12.829776 2747 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 6 00:26:12.829845 kubelet[2747]: E1106 00:26:12.829808 2747 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459.1.0-n-3bced53249\" not found" Nov 6 00:26:12.834547 kubelet[2747]: I1106 00:26:12.834039 2747 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 6 00:26:12.836059 kubelet[2747]: I1106 00:26:12.836044 2747 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Nov 6 00:26:12.836141 kubelet[2747]: I1106 00:26:12.836134 2747 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 6 00:26:12.836206 kubelet[2747]: I1106 00:26:12.836200 2747 kubelet.go:2427] "Starting kubelet main sync loop" Nov 6 00:26:12.836261 kubelet[2747]: E1106 00:26:12.836254 2747 kubelet.go:2451] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Nov 6 00:26:12.837017 kubelet[2747]: E1106 00:26:12.837000 2747 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.8.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 6 00:26:12.929784 kubelet[2747]: I1106 00:26:12.929748 2747 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.1.0-n-3bced53249" Nov 6 00:26:12.930041 kubelet[2747]: E1106 00:26:12.930023 2747 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.20:6443/api/v1/nodes\": dial tcp 10.200.8.20:6443: connect: connection refused" node="ci-4459.1.0-n-3bced53249" Nov 6 00:26:12.946740 systemd[1]: Created slice kubepods-burstable-podee1c3e7fd5e82f7353e2afb49b5a7196.slice - libcontainer container kubepods-burstable-podee1c3e7fd5e82f7353e2afb49b5a7196.slice. Nov 6 00:26:12.955755 kubelet[2747]: E1106 00:26:12.955631 2747 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-3bced53249\" not found" node="ci-4459.1.0-n-3bced53249" Nov 6 00:26:12.959174 systemd[1]: Created slice kubepods-burstable-pod0bc75bdd1704b192b262aa0ecbfae194.slice - libcontainer container kubepods-burstable-pod0bc75bdd1704b192b262aa0ecbfae194.slice. 
Nov 6 00:26:12.976783 kubelet[2747]: E1106 00:26:12.976769 2747 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-3bced53249\" not found" node="ci-4459.1.0-n-3bced53249" Nov 6 00:26:12.978829 systemd[1]: Created slice kubepods-burstable-pode8b3637e4b1189326ac9075eba39074f.slice - libcontainer container kubepods-burstable-pode8b3637e4b1189326ac9075eba39074f.slice. Nov 6 00:26:12.979992 kubelet[2747]: I1106 00:26:12.979770 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ee1c3e7fd5e82f7353e2afb49b5a7196-ca-certs\") pod \"kube-apiserver-ci-4459.1.0-n-3bced53249\" (UID: \"ee1c3e7fd5e82f7353e2afb49b5a7196\") " pod="kube-system/kube-apiserver-ci-4459.1.0-n-3bced53249" Nov 6 00:26:12.979992 kubelet[2747]: I1106 00:26:12.979796 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ee1c3e7fd5e82f7353e2afb49b5a7196-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.1.0-n-3bced53249\" (UID: \"ee1c3e7fd5e82f7353e2afb49b5a7196\") " pod="kube-system/kube-apiserver-ci-4459.1.0-n-3bced53249" Nov 6 00:26:12.979992 kubelet[2747]: I1106 00:26:12.979817 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0bc75bdd1704b192b262aa0ecbfae194-k8s-certs\") pod \"kube-controller-manager-ci-4459.1.0-n-3bced53249\" (UID: \"0bc75bdd1704b192b262aa0ecbfae194\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-3bced53249" Nov 6 00:26:12.979992 kubelet[2747]: I1106 00:26:12.979834 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0bc75bdd1704b192b262aa0ecbfae194-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.1.0-n-3bced53249\" (UID: \"0bc75bdd1704b192b262aa0ecbfae194\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-3bced53249" Nov 6 00:26:12.979992 kubelet[2747]: I1106 00:26:12.979850 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e8b3637e4b1189326ac9075eba39074f-kubeconfig\") pod \"kube-scheduler-ci-4459.1.0-n-3bced53249\" (UID: \"e8b3637e4b1189326ac9075eba39074f\") " pod="kube-system/kube-scheduler-ci-4459.1.0-n-3bced53249" Nov 6 00:26:12.980138 kubelet[2747]: I1106 00:26:12.979877 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ee1c3e7fd5e82f7353e2afb49b5a7196-k8s-certs\") pod \"kube-apiserver-ci-4459.1.0-n-3bced53249\" (UID: \"ee1c3e7fd5e82f7353e2afb49b5a7196\") " pod="kube-system/kube-apiserver-ci-4459.1.0-n-3bced53249" Nov 6 00:26:12.980138 kubelet[2747]: I1106 00:26:12.979908 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0bc75bdd1704b192b262aa0ecbfae194-ca-certs\") pod \"kube-controller-manager-ci-4459.1.0-n-3bced53249\" (UID: \"0bc75bdd1704b192b262aa0ecbfae194\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-3bced53249" Nov 6 00:26:12.980138 kubelet[2747]: I1106 00:26:12.979926 2747 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0bc75bdd1704b192b262aa0ecbfae194-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.1.0-n-3bced53249\" (UID: \"0bc75bdd1704b192b262aa0ecbfae194\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-3bced53249" Nov 6 00:26:12.980138 kubelet[2747]: I1106 00:26:12.979942 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0bc75bdd1704b192b262aa0ecbfae194-kubeconfig\") pod \"kube-controller-manager-ci-4459.1.0-n-3bced53249\" (UID: \"0bc75bdd1704b192b262aa0ecbfae194\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-3bced53249" Nov 6 00:26:12.980961 kubelet[2747]: E1106 00:26:12.980943 2747 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-3bced53249\" not found" node="ci-4459.1.0-n-3bced53249" Nov 6 00:26:12.982066 kubelet[2747]: E1106 00:26:12.982041 2747 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.1.0-n-3bced53249?timeout=10s\": dial tcp 10.200.8.20:6443: connect: connection refused" interval="400ms" Nov 6 00:26:13.131533 kubelet[2747]: I1106 00:26:13.131456 2747 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.1.0-n-3bced53249" Nov 6 00:26:13.133008 kubelet[2747]: E1106 00:26:13.132976 2747 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.20:6443/api/v1/nodes\": dial tcp 10.200.8.20:6443: connect: connection refused" node="ci-4459.1.0-n-3bced53249" Nov 6 00:26:13.261471 containerd[1686]: time="2025-11-06T00:26:13.261435262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.1.0-n-3bced53249,Uid:ee1c3e7fd5e82f7353e2afb49b5a7196,Namespace:kube-system,Attempt:0,}" Nov 6 00:26:13.281828 containerd[1686]: time="2025-11-06T00:26:13.281661664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.1.0-n-3bced53249,Uid:0bc75bdd1704b192b262aa0ecbfae194,Namespace:kube-system,Attempt:0,}" Nov 6 00:26:13.285226 containerd[1686]: time="2025-11-06T00:26:13.285183580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.1.0-n-3bced53249,Uid:e8b3637e4b1189326ac9075eba39074f,Namespace:kube-system,Attempt:0,}" Nov 6 00:26:13.382916 kubelet[2747]: E1106 00:26:13.382833 2747 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.1.0-n-3bced53249?timeout=10s\": dial tcp 10.200.8.20:6443: connect: connection refused" interval="800ms" Nov 6 00:26:13.534337 kubelet[2747]: I1106 00:26:13.534316 2747 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.1.0-n-3bced53249" Nov 6 00:26:13.534570 kubelet[2747]: E1106 00:26:13.534553 2747 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.20:6443/api/v1/nodes\": dial tcp 10.200.8.20:6443: connect: connection refused" node="ci-4459.1.0-n-3bced53249" Nov 6 00:26:13.629222 kubelet[2747]: E1106 00:26:13.629190 2747 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.8.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 
10.200.8.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 6 00:26:13.804409 kubelet[2747]: E1106 00:26:13.804381 2747 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.8.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.1.0-n-3bced53249&limit=500&resourceVersion=0\": dial tcp 10.200.8.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 6 00:26:14.003304 kubelet[2747]: E1106 00:26:14.003271 2747 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.8.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 6 00:26:14.184093 kubelet[2747]: E1106 00:26:14.184057 2747 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.1.0-n-3bced53249?timeout=10s\": dial tcp 10.200.8.20:6443: connect: connection refused" interval="1.6s" Nov 6 00:26:14.305475 kubelet[2747]: E1106 00:26:14.305444 2747 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.8.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 6 00:26:14.336744 kubelet[2747]: I1106 00:26:14.336725 2747 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.1.0-n-3bced53249" Nov 6 00:26:14.337045 kubelet[2747]: E1106 00:26:14.337008 2747 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.20:6443/api/v1/nodes\": dial tcp 10.200.8.20:6443: connect: connection refused" node="ci-4459.1.0-n-3bced53249" Nov 6 00:26:14.859362 kubelet[2747]: E1106 00:26:14.859329 2747 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.8.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.20:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 6 00:26:15.453684 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1777487588.mount: Deactivated successfully. 
Nov 6 00:26:15.471077 containerd[1686]: time="2025-11-06T00:26:15.471041882Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 00:26:15.542782 containerd[1686]: time="2025-11-06T00:26:15.542748232Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Nov 6 00:26:15.545036 containerd[1686]: time="2025-11-06T00:26:15.545016268Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 00:26:15.549751 containerd[1686]: time="2025-11-06T00:26:15.549724403Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 00:26:15.552301 containerd[1686]: time="2025-11-06T00:26:15.551821217Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Nov 6 00:26:15.554338 containerd[1686]: time="2025-11-06T00:26:15.554316802Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 00:26:15.556819 containerd[1686]: time="2025-11-06T00:26:15.556789439Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 00:26:15.557273 containerd[1686]: time="2025-11-06T00:26:15.557251640Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 2.290446641s" Nov 6 00:26:15.558998 containerd[1686]: time="2025-11-06T00:26:15.558843912Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Nov 6 00:26:15.561725 containerd[1686]: time="2025-11-06T00:26:15.561694087Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 2.274371337s" Nov 6 00:26:15.572897 containerd[1686]: time="2025-11-06T00:26:15.572861121Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 2.277560805s" Nov 6 00:26:15.602194 containerd[1686]: time="2025-11-06T00:26:15.602168162Z" level=info msg="connecting to shim c6284163080ee3f88c5506b0b4f9af2d6c3aae5c93b2e24125e9c46e08968718" address="unix:///run/containerd/s/0748629652aeb55486d484f9085f654668d68604878aacac2f2d4c941a02db0f" namespace=k8s.io protocol=ttrpc version=3 Nov 6 
00:26:15.628927 containerd[1686]: time="2025-11-06T00:26:15.628855045Z" level=info msg="connecting to shim c77641edfeb2708b6145ae7c3f17fd791402b0a1030cdf40c59ec05bc33d0ac9" address="unix:///run/containerd/s/7c739a4dd7f828863e76f562a6f30e7a84a5a89eb59379e7ab28762406b65b88" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:26:15.630643 containerd[1686]: time="2025-11-06T00:26:15.630588269Z" level=info msg="connecting to shim 4d52a0073521fcc3d3e846090af60065657493872ec5304fb8ba16e8a0452acd" address="unix:///run/containerd/s/bc5ad62e9d71b4c4981f97778f344090f99d43eae736b3669dc6d4b005466650" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:26:15.631140 systemd[1]: Started cri-containerd-c6284163080ee3f88c5506b0b4f9af2d6c3aae5c93b2e24125e9c46e08968718.scope - libcontainer container c6284163080ee3f88c5506b0b4f9af2d6c3aae5c93b2e24125e9c46e08968718. Nov 6 00:26:15.658028 systemd[1]: Started cri-containerd-4d52a0073521fcc3d3e846090af60065657493872ec5304fb8ba16e8a0452acd.scope - libcontainer container 4d52a0073521fcc3d3e846090af60065657493872ec5304fb8ba16e8a0452acd. Nov 6 00:26:15.661679 systemd[1]: Started cri-containerd-c77641edfeb2708b6145ae7c3f17fd791402b0a1030cdf40c59ec05bc33d0ac9.scope - libcontainer container c77641edfeb2708b6145ae7c3f17fd791402b0a1030cdf40c59ec05bc33d0ac9. Nov 6 00:26:15.687501 containerd[1686]: time="2025-11-06T00:26:15.687474687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.1.0-n-3bced53249,Uid:ee1c3e7fd5e82f7353e2afb49b5a7196,Namespace:kube-system,Attempt:0,} returns sandbox id \"c6284163080ee3f88c5506b0b4f9af2d6c3aae5c93b2e24125e9c46e08968718\"" Nov 6 00:26:15.695049 containerd[1686]: time="2025-11-06T00:26:15.695029867Z" level=info msg="CreateContainer within sandbox \"c6284163080ee3f88c5506b0b4f9af2d6c3aae5c93b2e24125e9c46e08968718\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 6 00:26:15.714074 containerd[1686]: time="2025-11-06T00:26:15.714005971Z" level=info msg="Container cbc76306dd73e74c8ed84318dbc8753dda9c18debe3e37008a36235c735934f3: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:26:15.716711 containerd[1686]: time="2025-11-06T00:26:15.716682763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.1.0-n-3bced53249,Uid:0bc75bdd1704b192b262aa0ecbfae194,Namespace:kube-system,Attempt:0,} returns sandbox id \"c77641edfeb2708b6145ae7c3f17fd791402b0a1030cdf40c59ec05bc33d0ac9\"" Nov 6 00:26:15.722837 containerd[1686]: time="2025-11-06T00:26:15.722819864Z" level=info msg="CreateContainer within sandbox \"c77641edfeb2708b6145ae7c3f17fd791402b0a1030cdf40c59ec05bc33d0ac9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 6 00:26:15.729046 kubelet[2747]: E1106 00:26:15.729021 2747 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.8.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.1.0-n-3bced53249&limit=500&resourceVersion=0\": dial tcp 10.200.8.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 6 00:26:15.730983 containerd[1686]: time="2025-11-06T00:26:15.730866337Z" level=info msg="CreateContainer within sandbox \"c6284163080ee3f88c5506b0b4f9af2d6c3aae5c93b2e24125e9c46e08968718\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"cbc76306dd73e74c8ed84318dbc8753dda9c18debe3e37008a36235c735934f3\"" Nov 6 00:26:15.731711 containerd[1686]: time="2025-11-06T00:26:15.731693025Z" 
level=info msg="StartContainer for \"cbc76306dd73e74c8ed84318dbc8753dda9c18debe3e37008a36235c735934f3\"" Nov 6 00:26:15.733475 containerd[1686]: time="2025-11-06T00:26:15.733066971Z" level=info msg="connecting to shim cbc76306dd73e74c8ed84318dbc8753dda9c18debe3e37008a36235c735934f3" address="unix:///run/containerd/s/0748629652aeb55486d484f9085f654668d68604878aacac2f2d4c941a02db0f" protocol=ttrpc version=3 Nov 6 00:26:15.741083 containerd[1686]: time="2025-11-06T00:26:15.741064352Z" level=info msg="Container aa5a114f2119473e16fa1098a84203b2524b50bb69fdb34843073a56f042a82d: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:26:15.743708 containerd[1686]: time="2025-11-06T00:26:15.743686737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.1.0-n-3bced53249,Uid:e8b3637e4b1189326ac9075eba39074f,Namespace:kube-system,Attempt:0,} returns sandbox id \"4d52a0073521fcc3d3e846090af60065657493872ec5304fb8ba16e8a0452acd\"" Nov 6 00:26:15.750340 containerd[1686]: time="2025-11-06T00:26:15.750314140Z" level=info msg="CreateContainer within sandbox \"4d52a0073521fcc3d3e846090af60065657493872ec5304fb8ba16e8a0452acd\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 6 00:26:15.752999 systemd[1]: Started cri-containerd-cbc76306dd73e74c8ed84318dbc8753dda9c18debe3e37008a36235c735934f3.scope - libcontainer container cbc76306dd73e74c8ed84318dbc8753dda9c18debe3e37008a36235c735934f3. Nov 6 00:26:15.770263 containerd[1686]: time="2025-11-06T00:26:15.770243787Z" level=info msg="Container ec9f69d5ed92ab9a463c236064ff20c35d85dc853db7d91ea1da92b80431d565: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:26:15.772019 containerd[1686]: time="2025-11-06T00:26:15.771999109Z" level=info msg="CreateContainer within sandbox \"c77641edfeb2708b6145ae7c3f17fd791402b0a1030cdf40c59ec05bc33d0ac9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"aa5a114f2119473e16fa1098a84203b2524b50bb69fdb34843073a56f042a82d\"" Nov 6 00:26:15.774845 containerd[1686]: time="2025-11-06T00:26:15.774822414Z" level=info msg="StartContainer for \"aa5a114f2119473e16fa1098a84203b2524b50bb69fdb34843073a56f042a82d\"" Nov 6 00:26:15.779209 containerd[1686]: time="2025-11-06T00:26:15.779178970Z" level=info msg="connecting to shim aa5a114f2119473e16fa1098a84203b2524b50bb69fdb34843073a56f042a82d" address="unix:///run/containerd/s/7c739a4dd7f828863e76f562a6f30e7a84a5a89eb59379e7ab28762406b65b88" protocol=ttrpc version=3 Nov 6 00:26:15.784755 kubelet[2747]: E1106 00:26:15.784731 2747 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.1.0-n-3bced53249?timeout=10s\": dial tcp 10.200.8.20:6443: connect: connection refused" interval="3.2s" Nov 6 00:26:15.789444 containerd[1686]: time="2025-11-06T00:26:15.789421639Z" level=info msg="CreateContainer within sandbox \"4d52a0073521fcc3d3e846090af60065657493872ec5304fb8ba16e8a0452acd\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ec9f69d5ed92ab9a463c236064ff20c35d85dc853db7d91ea1da92b80431d565\"" Nov 6 00:26:15.790024 containerd[1686]: time="2025-11-06T00:26:15.789861010Z" level=info msg="StartContainer for \"ec9f69d5ed92ab9a463c236064ff20c35d85dc853db7d91ea1da92b80431d565\"" Nov 6 00:26:15.792424 containerd[1686]: time="2025-11-06T00:26:15.792404250Z" level=info msg="connecting to shim ec9f69d5ed92ab9a463c236064ff20c35d85dc853db7d91ea1da92b80431d565" 
address="unix:///run/containerd/s/bc5ad62e9d71b4c4981f97778f344090f99d43eae736b3669dc6d4b005466650" protocol=ttrpc version=3 Nov 6 00:26:15.801097 systemd[1]: Started cri-containerd-aa5a114f2119473e16fa1098a84203b2524b50bb69fdb34843073a56f042a82d.scope - libcontainer container aa5a114f2119473e16fa1098a84203b2524b50bb69fdb34843073a56f042a82d. Nov 6 00:26:15.815834 containerd[1686]: time="2025-11-06T00:26:15.815813026Z" level=info msg="StartContainer for \"cbc76306dd73e74c8ed84318dbc8753dda9c18debe3e37008a36235c735934f3\" returns successfully" Nov 6 00:26:15.816220 systemd[1]: Started cri-containerd-ec9f69d5ed92ab9a463c236064ff20c35d85dc853db7d91ea1da92b80431d565.scope - libcontainer container ec9f69d5ed92ab9a463c236064ff20c35d85dc853db7d91ea1da92b80431d565. Nov 6 00:26:15.848980 kubelet[2747]: E1106 00:26:15.848956 2747 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-3bced53249\" not found" node="ci-4459.1.0-n-3bced53249" Nov 6 00:26:15.896768 containerd[1686]: time="2025-11-06T00:26:15.896722661Z" level=info msg="StartContainer for \"aa5a114f2119473e16fa1098a84203b2524b50bb69fdb34843073a56f042a82d\" returns successfully" Nov 6 00:26:15.924965 containerd[1686]: time="2025-11-06T00:26:15.924306880Z" level=info msg="StartContainer for \"ec9f69d5ed92ab9a463c236064ff20c35d85dc853db7d91ea1da92b80431d565\" returns successfully" Nov 6 00:26:15.939635 kubelet[2747]: I1106 00:26:15.939615 2747 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.1.0-n-3bced53249" Nov 6 00:26:16.861182 kubelet[2747]: E1106 00:26:16.861150 2747 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-3bced53249\" not found" node="ci-4459.1.0-n-3bced53249" Nov 6 00:26:16.864080 kubelet[2747]: E1106 00:26:16.864057 2747 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-3bced53249\" not found" node="ci-4459.1.0-n-3bced53249" Nov 6 00:26:16.864936 kubelet[2747]: E1106 00:26:16.864916 2747 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-3bced53249\" not found" node="ci-4459.1.0-n-3bced53249" Nov 6 00:26:17.867612 kubelet[2747]: E1106 00:26:17.867578 2747 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-3bced53249\" not found" node="ci-4459.1.0-n-3bced53249" Nov 6 00:26:17.869081 kubelet[2747]: E1106 00:26:17.869067 2747 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-3bced53249\" not found" node="ci-4459.1.0-n-3bced53249" Nov 6 00:26:17.869306 kubelet[2747]: E1106 00:26:17.869186 2747 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-3bced53249\" not found" node="ci-4459.1.0-n-3bced53249" Nov 6 00:26:17.924924 kubelet[2747]: I1106 00:26:17.923128 2747 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.1.0-n-3bced53249" Nov 6 00:26:17.925125 kubelet[2747]: E1106 00:26:17.925019 2747 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ci-4459.1.0-n-3bced53249\": node \"ci-4459.1.0-n-3bced53249\" not found" Nov 6 00:26:17.967523 kubelet[2747]: E1106 00:26:17.967496 2747 kubelet_node_status.go:404] "Error getting the current node from 
lister" err="node \"ci-4459.1.0-n-3bced53249\" not found" Nov 6 00:26:18.068097 kubelet[2747]: E1106 00:26:18.068071 2747 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459.1.0-n-3bced53249\" not found" Nov 6 00:26:18.168660 kubelet[2747]: E1106 00:26:18.168578 2747 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459.1.0-n-3bced53249\" not found" Nov 6 00:26:18.269618 kubelet[2747]: E1106 00:26:18.269592 2747 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459.1.0-n-3bced53249\" not found" Nov 6 00:26:18.370207 kubelet[2747]: E1106 00:26:18.370187 2747 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459.1.0-n-3bced53249\" not found" Nov 6 00:26:18.470708 kubelet[2747]: E1106 00:26:18.470682 2747 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459.1.0-n-3bced53249\" not found" Nov 6 00:26:18.571189 kubelet[2747]: E1106 00:26:18.571161 2747 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459.1.0-n-3bced53249\" not found" Nov 6 00:26:18.671796 kubelet[2747]: E1106 00:26:18.671760 2747 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459.1.0-n-3bced53249\" not found" Nov 6 00:26:18.772819 kubelet[2747]: E1106 00:26:18.772716 2747 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459.1.0-n-3bced53249\" not found" Nov 6 00:26:18.868231 kubelet[2747]: E1106 00:26:18.868201 2747 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-3bced53249\" not found" node="ci-4459.1.0-n-3bced53249" Nov 6 00:26:18.868548 kubelet[2747]: E1106 00:26:18.868534 2747 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-3bced53249\" not found" node="ci-4459.1.0-n-3bced53249" Nov 6 00:26:18.872846 kubelet[2747]: E1106 00:26:18.872823 2747 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459.1.0-n-3bced53249\" not found" Nov 6 00:26:18.973312 kubelet[2747]: E1106 00:26:18.973284 2747 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459.1.0-n-3bced53249\" not found" Nov 6 00:26:19.073705 kubelet[2747]: E1106 00:26:19.073513 2747 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459.1.0-n-3bced53249\" not found" Nov 6 00:26:19.174007 kubelet[2747]: E1106 00:26:19.173978 2747 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459.1.0-n-3bced53249\" not found" Nov 6 00:26:19.274360 kubelet[2747]: E1106 00:26:19.274333 2747 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459.1.0-n-3bced53249\" not found" Nov 6 00:26:19.374689 kubelet[2747]: E1106 00:26:19.374585 2747 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459.1.0-n-3bced53249\" not found" Nov 6 00:26:19.475162 kubelet[2747]: E1106 00:26:19.475125 2747 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459.1.0-n-3bced53249\" not found" Nov 6 00:26:19.575748 kubelet[2747]: E1106 00:26:19.575721 2747 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459.1.0-n-3bced53249\" not found" Nov 6 
00:26:19.676253 kubelet[2747]: E1106 00:26:19.676159 2747 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459.1.0-n-3bced53249\" not found" Nov 6 00:26:19.737324 systemd[1]: Reload requested from client PID 3038 ('systemctl') (unit session-9.scope)... Nov 6 00:26:19.737337 systemd[1]: Reloading... Nov 6 00:26:19.776840 kubelet[2747]: E1106 00:26:19.776821 2747 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459.1.0-n-3bced53249\" not found" Nov 6 00:26:19.817900 zram_generator::config[3089]: No configuration found. Nov 6 00:26:19.870278 kubelet[2747]: E1106 00:26:19.870129 2747 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-3bced53249\" not found" node="ci-4459.1.0-n-3bced53249" Nov 6 00:26:19.877277 kubelet[2747]: E1106 00:26:19.877254 2747 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459.1.0-n-3bced53249\" not found" Nov 6 00:26:19.978004 kubelet[2747]: E1106 00:26:19.977780 2747 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459.1.0-n-3bced53249\" not found" Nov 6 00:26:19.986525 systemd[1]: Reloading finished in 248 ms. Nov 6 00:26:20.011265 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:26:20.029305 systemd[1]: kubelet.service: Deactivated successfully. Nov 6 00:26:20.029520 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:26:20.029560 systemd[1]: kubelet.service: Consumed 728ms CPU time, 124.5M memory peak. Nov 6 00:26:20.031278 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:26:20.528794 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:26:20.536145 (kubelet)[3152]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 6 00:26:20.575919 kubelet[3152]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 6 00:26:20.575919 kubelet[3152]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 6 00:26:20.575919 kubelet[3152]: I1106 00:26:20.575543 3152 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 6 00:26:20.579658 kubelet[3152]: I1106 00:26:20.579632 3152 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 6 00:26:20.579658 kubelet[3152]: I1106 00:26:20.579651 3152 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 6 00:26:20.579762 kubelet[3152]: I1106 00:26:20.579669 3152 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 6 00:26:20.579762 kubelet[3152]: I1106 00:26:20.579677 3152 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
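The capture above runs many journal entries together on each line (systemd, kubelet, containerd and zram_generator all interleaved). A small, purely illustrative Python sketch for pulling such a blob apart again; the regex, the helper name and the trimmed sample are assumptions covering only the common "unit[pid]: message" shape and are not part of any tool that appears in this log.

```python
import re

# Illustrative: split a run-together journal capture into entries of the
# common "Nov 6 HH:MM:SS.us unit[pid]: message" shape (kernel lines and
# units like "(kubelet)[3152]" would need extra cases).
ENTRY_RE = re.compile(
    r"(?P<ts>Nov\s+6 \d{2}:\d{2}:\d{2}\.\d+)\s+"
    r"(?P<unit>[\w.@-]+)\[(?P<pid>\d+)\]:\s"
)

def split_entries(raw):
    """Yield (timestamp, unit, pid, message) tuples from a concatenated blob."""
    marks = list(ENTRY_RE.finditer(raw))
    for i, m in enumerate(marks):
        end = marks[i + 1].start() if i + 1 < len(marks) else len(raw)
        yield m["ts"], m["unit"], int(m["pid"]), raw[m.end():end].strip()

sample = ("Nov 6 00:26:19.986525 systemd[1]: Reloading finished in 248 ms. "
          "Nov 6 00:26:20.011265 systemd[1]: Stopping kubelet.service - "
          "kubelet: The Kubernetes Node Agent...")
for entry in split_entries(sample):
    print(entry)
```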
Nov 6 00:26:20.579868 kubelet[3152]: I1106 00:26:20.579855 3152 server.go:956] "Client rotation is on, will bootstrap in background" Nov 6 00:26:20.581549 kubelet[3152]: I1106 00:26:20.580938 3152 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 6 00:26:20.584779 kubelet[3152]: I1106 00:26:20.584760 3152 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 6 00:26:20.588698 kubelet[3152]: I1106 00:26:20.588682 3152 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 6 00:26:20.592939 kubelet[3152]: I1106 00:26:20.591977 3152 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Nov 6 00:26:20.592939 kubelet[3152]: I1106 00:26:20.592141 3152 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 6 00:26:20.592939 kubelet[3152]: I1106 00:26:20.592157 3152 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.1.0-n-3bced53249","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 6 00:26:20.592939 kubelet[3152]: I1106 00:26:20.592383 3152 topology_manager.go:138] "Creating topology manager with none policy" Nov 6 00:26:20.593141 kubelet[3152]: I1106 00:26:20.592402 3152 container_manager_linux.go:306] "Creating device plugin manager" Nov 6 00:26:20.593141 kubelet[3152]: I1106 00:26:20.592424 3152 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 6 00:26:20.593500 kubelet[3152]: I1106 00:26:20.593484 3152 state_mem.go:36] "Initialized new in-memory state store" Nov 6 00:26:20.593661 kubelet[3152]: I1106 00:26:20.593649 3152 kubelet.go:475] "Attempting to sync node with API server" Nov 6 00:26:20.593689 kubelet[3152]: I1106 00:26:20.593668 3152 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 6 00:26:20.593710 kubelet[3152]: I1106 00:26:20.593696 3152 kubelet.go:387] "Adding apiserver pod source" Nov 
6 00:26:20.593737 kubelet[3152]: I1106 00:26:20.593714 3152 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 6 00:26:20.596523 kubelet[3152]: I1106 00:26:20.596218 3152 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 6 00:26:20.596649 kubelet[3152]: I1106 00:26:20.596633 3152 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 6 00:26:20.596674 kubelet[3152]: I1106 00:26:20.596667 3152 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 6 00:26:20.601504 kubelet[3152]: I1106 00:26:20.601281 3152 server.go:1262] "Started kubelet" Nov 6 00:26:20.603394 kubelet[3152]: I1106 00:26:20.603342 3152 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 6 00:26:20.611846 kubelet[3152]: I1106 00:26:20.611825 3152 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Nov 6 00:26:20.617196 kubelet[3152]: E1106 00:26:20.617181 3152 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 6 00:26:20.617500 kubelet[3152]: I1106 00:26:20.617463 3152 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 6 00:26:20.618210 kubelet[3152]: I1106 00:26:20.618200 3152 server.go:310] "Adding debug handlers to kubelet server" Nov 6 00:26:20.618528 kubelet[3152]: I1106 00:26:20.618513 3152 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 6 00:26:20.621256 kubelet[3152]: I1106 00:26:20.621233 3152 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 6 00:26:20.621338 kubelet[3152]: I1106 00:26:20.621330 3152 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 6 00:26:20.621487 kubelet[3152]: I1106 00:26:20.621479 3152 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 6 00:26:20.621702 kubelet[3152]: I1106 00:26:20.621693 3152 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 6 00:26:20.623806 kubelet[3152]: I1106 00:26:20.623787 3152 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 6 00:26:20.623897 kubelet[3152]: I1106 00:26:20.623870 3152 reconciler.go:29] "Reconciler: start to sync state" Nov 6 00:26:20.625477 kubelet[3152]: I1106 00:26:20.625460 3152 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Nov 6 00:26:20.625477 kubelet[3152]: I1106 00:26:20.625480 3152 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 6 00:26:20.625561 kubelet[3152]: I1106 00:26:20.625492 3152 kubelet.go:2427] "Starting kubelet main sync loop" Nov 6 00:26:20.625561 kubelet[3152]: E1106 00:26:20.625532 3152 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 6 00:26:20.629949 kubelet[3152]: I1106 00:26:20.629351 3152 factory.go:223] Registration of the systemd container factory successfully Nov 6 00:26:20.629949 kubelet[3152]: I1106 00:26:20.629435 3152 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 6 00:26:20.633246 kubelet[3152]: I1106 00:26:20.633229 3152 factory.go:223] Registration of the containerd container factory successfully Nov 6 00:26:20.692158 kubelet[3152]: I1106 00:26:20.692137 3152 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 6 00:26:20.692421 kubelet[3152]: I1106 00:26:20.692400 3152 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 6 00:26:20.692584 kubelet[3152]: I1106 00:26:20.692576 3152 state_mem.go:36] "Initialized new in-memory state store" Nov 6 00:26:20.692865 kubelet[3152]: I1106 00:26:20.692822 3152 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 6 00:26:20.692865 kubelet[3152]: I1106 00:26:20.692833 3152 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 6 00:26:20.692865 kubelet[3152]: I1106 00:26:20.692848 3152 policy_none.go:49] "None policy: Start" Nov 6 00:26:20.693080 kubelet[3152]: I1106 00:26:20.692984 3152 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 6 00:26:20.693080 kubelet[3152]: I1106 00:26:20.693002 3152 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 6 00:26:20.693201 kubelet[3152]: I1106 00:26:20.693190 3152 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Nov 6 00:26:20.693250 kubelet[3152]: I1106 00:26:20.693237 3152 policy_none.go:47] "Start" Nov 6 00:26:20.697915 kubelet[3152]: E1106 00:26:20.697556 3152 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 6 00:26:20.697915 kubelet[3152]: I1106 00:26:20.697676 3152 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 6 00:26:20.697915 kubelet[3152]: I1106 00:26:20.697685 3152 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 6 00:26:20.697915 kubelet[3152]: I1106 00:26:20.697834 3152 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 6 00:26:20.700271 kubelet[3152]: E1106 00:26:20.700258 3152 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 6 00:26:20.726823 kubelet[3152]: I1106 00:26:20.726809 3152 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.1.0-n-3bced53249" Nov 6 00:26:20.726976 kubelet[3152]: I1106 00:26:20.726809 3152 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.1.0-n-3bced53249" Nov 6 00:26:20.727052 kubelet[3152]: I1106 00:26:20.726874 3152 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.1.0-n-3bced53249" Nov 6 00:26:20.739263 kubelet[3152]: I1106 00:26:20.739237 3152 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 6 00:26:20.743225 kubelet[3152]: I1106 00:26:20.743163 3152 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 6 00:26:20.743799 kubelet[3152]: I1106 00:26:20.743209 3152 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 6 00:26:20.807577 kubelet[3152]: I1106 00:26:20.805650 3152 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.1.0-n-3bced53249" Nov 6 00:26:20.817630 kubelet[3152]: I1106 00:26:20.817608 3152 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459.1.0-n-3bced53249" Nov 6 00:26:20.817694 kubelet[3152]: I1106 00:26:20.817661 3152 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.1.0-n-3bced53249" Nov 6 00:26:20.825307 kubelet[3152]: I1106 00:26:20.825283 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ee1c3e7fd5e82f7353e2afb49b5a7196-ca-certs\") pod \"kube-apiserver-ci-4459.1.0-n-3bced53249\" (UID: \"ee1c3e7fd5e82f7353e2afb49b5a7196\") " pod="kube-system/kube-apiserver-ci-4459.1.0-n-3bced53249" Nov 6 00:26:20.825307 kubelet[3152]: I1106 00:26:20.825313 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0bc75bdd1704b192b262aa0ecbfae194-ca-certs\") pod \"kube-controller-manager-ci-4459.1.0-n-3bced53249\" (UID: \"0bc75bdd1704b192b262aa0ecbfae194\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-3bced53249" Nov 6 00:26:20.825505 kubelet[3152]: I1106 00:26:20.825329 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0bc75bdd1704b192b262aa0ecbfae194-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.1.0-n-3bced53249\" (UID: \"0bc75bdd1704b192b262aa0ecbfae194\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-3bced53249" Nov 6 00:26:20.825505 kubelet[3152]: I1106 00:26:20.825346 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0bc75bdd1704b192b262aa0ecbfae194-kubeconfig\") pod \"kube-controller-manager-ci-4459.1.0-n-3bced53249\" (UID: \"0bc75bdd1704b192b262aa0ecbfae194\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-3bced53249" Nov 6 00:26:20.825505 kubelet[3152]: I1106 00:26:20.825361 3152 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e8b3637e4b1189326ac9075eba39074f-kubeconfig\") pod \"kube-scheduler-ci-4459.1.0-n-3bced53249\" (UID: \"e8b3637e4b1189326ac9075eba39074f\") " pod="kube-system/kube-scheduler-ci-4459.1.0-n-3bced53249" Nov 6 00:26:20.825505 kubelet[3152]: I1106 00:26:20.825376 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ee1c3e7fd5e82f7353e2afb49b5a7196-k8s-certs\") pod \"kube-apiserver-ci-4459.1.0-n-3bced53249\" (UID: \"ee1c3e7fd5e82f7353e2afb49b5a7196\") " pod="kube-system/kube-apiserver-ci-4459.1.0-n-3bced53249" Nov 6 00:26:20.825505 kubelet[3152]: I1106 00:26:20.825396 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ee1c3e7fd5e82f7353e2afb49b5a7196-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.1.0-n-3bced53249\" (UID: \"ee1c3e7fd5e82f7353e2afb49b5a7196\") " pod="kube-system/kube-apiserver-ci-4459.1.0-n-3bced53249" Nov 6 00:26:20.825647 kubelet[3152]: I1106 00:26:20.825412 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0bc75bdd1704b192b262aa0ecbfae194-k8s-certs\") pod \"kube-controller-manager-ci-4459.1.0-n-3bced53249\" (UID: \"0bc75bdd1704b192b262aa0ecbfae194\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-3bced53249" Nov 6 00:26:20.825647 kubelet[3152]: I1106 00:26:20.825444 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0bc75bdd1704b192b262aa0ecbfae194-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.1.0-n-3bced53249\" (UID: \"0bc75bdd1704b192b262aa0ecbfae194\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-3bced53249" Nov 6 00:26:21.603312 kubelet[3152]: I1106 00:26:21.603283 3152 apiserver.go:52] "Watching apiserver" Nov 6 00:26:21.623973 kubelet[3152]: I1106 00:26:21.623944 3152 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 6 00:26:21.672821 kubelet[3152]: I1106 00:26:21.672022 3152 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.1.0-n-3bced53249" Nov 6 00:26:21.678847 kubelet[3152]: I1106 00:26:21.678823 3152 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 6 00:26:21.678942 kubelet[3152]: E1106 00:26:21.678873 3152 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.1.0-n-3bced53249\" already exists" pod="kube-system/kube-apiserver-ci-4459.1.0-n-3bced53249" Nov 6 00:26:21.696289 kubelet[3152]: I1106 00:26:21.696215 3152 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459.1.0-n-3bced53249" podStartSLOduration=1.696203039 podStartE2EDuration="1.696203039s" podCreationTimestamp="2025-11-06 00:26:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:26:21.688706054 +0000 UTC m=+1.149933437" watchObservedRunningTime="2025-11-06 00:26:21.696203039 +0000 UTC m=+1.157430423" Nov 6 
00:26:21.696466 kubelet[3152]: I1106 00:26:21.696304 3152 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459.1.0-n-3bced53249" podStartSLOduration=1.696299387 podStartE2EDuration="1.696299387s" podCreationTimestamp="2025-11-06 00:26:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:26:21.696201396 +0000 UTC m=+1.157428780" watchObservedRunningTime="2025-11-06 00:26:21.696299387 +0000 UTC m=+1.157526765" Nov 6 00:26:21.705717 kubelet[3152]: I1106 00:26:21.705622 3152 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459.1.0-n-3bced53249" podStartSLOduration=1.705613896 podStartE2EDuration="1.705613896s" podCreationTimestamp="2025-11-06 00:26:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:26:21.705266136 +0000 UTC m=+1.166493519" watchObservedRunningTime="2025-11-06 00:26:21.705613896 +0000 UTC m=+1.166841276" Nov 6 00:26:25.731809 kubelet[3152]: I1106 00:26:25.731778 3152 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 6 00:26:25.732245 containerd[1686]: time="2025-11-06T00:26:25.732075531Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 6 00:26:25.732547 kubelet[3152]: I1106 00:26:25.732247 3152 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 6 00:26:26.493831 systemd[1]: Created slice kubepods-besteffort-pod0ec94935_4192_47aa_938e_6018fffdfd10.slice - libcontainer container kubepods-besteffort-pod0ec94935_4192_47aa_938e_6018fffdfd10.slice. 
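The container_manager_linux.go:275 entry in the kubelet start-up above dumps the whole NodeConfig as JSON, including the hard eviction thresholds this node will enforce. A minimal sketch that reads just that array back out; the JSON fragment is copied from the log line (trimmed to HardEvictionThresholds), everything around it is illustrative.

```python
import json

# HardEvictionThresholds exactly as dumped in the NodeConfig above.
thresholds_json = """
[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},
 {"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},
 {"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},
 {"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},
 {"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}]
"""

for t in json.loads(thresholds_json):
    value = t["Value"]
    # A threshold is expressed either as an absolute quantity or a percentage.
    human = value["Quantity"] or f'{value["Percentage"]:.0%}'
    print(f'{t["Signal"]:20s} {t["Operator"]} {human}')
```

Running it lists nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5% and memory.available < 100Mi, matching the dump.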
Nov 6 00:26:26.568175 kubelet[3152]: I1106 00:26:26.568143 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0ec94935-4192-47aa-938e-6018fffdfd10-xtables-lock\") pod \"kube-proxy-9f7fl\" (UID: \"0ec94935-4192-47aa-938e-6018fffdfd10\") " pod="kube-system/kube-proxy-9f7fl" Nov 6 00:26:26.568175 kubelet[3152]: I1106 00:26:26.568175 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0ec94935-4192-47aa-938e-6018fffdfd10-lib-modules\") pod \"kube-proxy-9f7fl\" (UID: \"0ec94935-4192-47aa-938e-6018fffdfd10\") " pod="kube-system/kube-proxy-9f7fl" Nov 6 00:26:26.568339 kubelet[3152]: I1106 00:26:26.568193 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9bvh\" (UniqueName: \"kubernetes.io/projected/0ec94935-4192-47aa-938e-6018fffdfd10-kube-api-access-d9bvh\") pod \"kube-proxy-9f7fl\" (UID: \"0ec94935-4192-47aa-938e-6018fffdfd10\") " pod="kube-system/kube-proxy-9f7fl" Nov 6 00:26:26.568339 kubelet[3152]: I1106 00:26:26.568212 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0ec94935-4192-47aa-938e-6018fffdfd10-kube-proxy\") pod \"kube-proxy-9f7fl\" (UID: \"0ec94935-4192-47aa-938e-6018fffdfd10\") " pod="kube-system/kube-proxy-9f7fl" Nov 6 00:26:26.672247 kubelet[3152]: E1106 00:26:26.672221 3152 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Nov 6 00:26:26.672247 kubelet[3152]: E1106 00:26:26.672243 3152 projected.go:196] Error preparing data for projected volume kube-api-access-d9bvh for pod kube-system/kube-proxy-9f7fl: configmap "kube-root-ca.crt" not found Nov 6 00:26:26.672388 kubelet[3152]: E1106 00:26:26.672304 3152 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0ec94935-4192-47aa-938e-6018fffdfd10-kube-api-access-d9bvh podName:0ec94935-4192-47aa-938e-6018fffdfd10 nodeName:}" failed. No retries permitted until 2025-11-06 00:26:27.172284937 +0000 UTC m=+6.633512318 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-d9bvh" (UniqueName: "kubernetes.io/projected/0ec94935-4192-47aa-938e-6018fffdfd10-kube-api-access-d9bvh") pod "kube-proxy-9f7fl" (UID: "0ec94935-4192-47aa-938e-6018fffdfd10") : configmap "kube-root-ca.crt" not found Nov 6 00:26:26.932927 systemd[1]: Created slice kubepods-besteffort-pod0e855682_d264_4732_9833_122098bd26bc.slice - libcontainer container kubepods-besteffort-pod0e855682_d264_4732_9833_122098bd26bc.slice. 
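The MountVolume.SetUp failure above is queued for retry with durationBeforeRetry 500ms (failed at 00:26:26.672304, next attempt allowed at 00:26:27.172284937). A sketch of that first delay plus an assumed retry schedule: only the 500ms starting point comes from the log; the doubling factor and the cap are illustrative assumptions about the nested-pending-operations backoff, not values shown here.

```python
from datetime import datetime, timedelta

# Timestamps from the log entry above (trimmed to microseconds).
failed_at = datetime.fromisoformat("2025-11-06 00:26:26.672304")
retry_at  = datetime.fromisoformat("2025-11-06 00:26:27.172284")
print("first backoff:", retry_at - failed_at)        # ~500ms

# Assumed shape of the schedule: doubling delays with a cap; only the
# 500ms initial delay is taken from the log.
def backoff_schedule(initial=timedelta(milliseconds=500), factor=2.0,
                     cap=timedelta(minutes=2), steps=6):
    delay = initial
    for _ in range(steps):
        yield min(delay, cap)
        delay = timedelta(seconds=delay.total_seconds() * factor)

print([str(d) for d in backoff_schedule()])
```

In this boot the backoff never needs to escalate: the kube-proxy sandbox is running by 00:26:27.49 below, so the kube-api-access mount evidently succeeded on the first retry.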
Nov 6 00:26:26.971155 kubelet[3152]: I1106 00:26:26.971096 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0e855682-d264-4732-9833-122098bd26bc-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-wfm5w\" (UID: \"0e855682-d264-4732-9833-122098bd26bc\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-wfm5w" Nov 6 00:26:26.971155 kubelet[3152]: I1106 00:26:26.971130 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pv55h\" (UniqueName: \"kubernetes.io/projected/0e855682-d264-4732-9833-122098bd26bc-kube-api-access-pv55h\") pod \"tigera-operator-65cdcdfd6d-wfm5w\" (UID: \"0e855682-d264-4732-9833-122098bd26bc\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-wfm5w" Nov 6 00:26:27.242280 containerd[1686]: time="2025-11-06T00:26:27.242176580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-wfm5w,Uid:0e855682-d264-4732-9833-122098bd26bc,Namespace:tigera-operator,Attempt:0,}" Nov 6 00:26:27.284709 containerd[1686]: time="2025-11-06T00:26:27.284662070Z" level=info msg="connecting to shim 65388ef6d383abcb855bdc00e98e62e552164eb24e5b2d81febe4b28a0879569" address="unix:///run/containerd/s/ae121db1917920f44067fe54af86301bcffa13ac55550329bed0794b2471ed39" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:26:27.306025 systemd[1]: Started cri-containerd-65388ef6d383abcb855bdc00e98e62e552164eb24e5b2d81febe4b28a0879569.scope - libcontainer container 65388ef6d383abcb855bdc00e98e62e552164eb24e5b2d81febe4b28a0879569. Nov 6 00:26:27.340675 containerd[1686]: time="2025-11-06T00:26:27.340646291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-wfm5w,Uid:0e855682-d264-4732-9833-122098bd26bc,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"65388ef6d383abcb855bdc00e98e62e552164eb24e5b2d81febe4b28a0879569\"" Nov 6 00:26:27.342984 containerd[1686]: time="2025-11-06T00:26:27.342934938Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 6 00:26:27.408728 containerd[1686]: time="2025-11-06T00:26:27.408694195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9f7fl,Uid:0ec94935-4192-47aa-938e-6018fffdfd10,Namespace:kube-system,Attempt:0,}" Nov 6 00:26:27.455653 containerd[1686]: time="2025-11-06T00:26:27.455259431Z" level=info msg="connecting to shim 100b1c6cad07c474196c2519868d48367ae6272cc782a57f722d964f979ee497" address="unix:///run/containerd/s/0d0b88d03664410e069a4b146a720764132396e020ae4e6cae993669e498f17d" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:26:27.475145 systemd[1]: Started cri-containerd-100b1c6cad07c474196c2519868d48367ae6272cc782a57f722d964f979ee497.scope - libcontainer container 100b1c6cad07c474196c2519868d48367ae6272cc782a57f722d964f979ee497. 
Nov 6 00:26:27.496271 containerd[1686]: time="2025-11-06T00:26:27.496211402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9f7fl,Uid:0ec94935-4192-47aa-938e-6018fffdfd10,Namespace:kube-system,Attempt:0,} returns sandbox id \"100b1c6cad07c474196c2519868d48367ae6272cc782a57f722d964f979ee497\"" Nov 6 00:26:27.503927 containerd[1686]: time="2025-11-06T00:26:27.503897543Z" level=info msg="CreateContainer within sandbox \"100b1c6cad07c474196c2519868d48367ae6272cc782a57f722d964f979ee497\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 6 00:26:27.530512 containerd[1686]: time="2025-11-06T00:26:27.530486065Z" level=info msg="Container b79b4346205333f1fce031cbf37756bdf48c5bd88fa56a8a205f125a98406c77: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:26:27.547721 containerd[1686]: time="2025-11-06T00:26:27.547696860Z" level=info msg="CreateContainer within sandbox \"100b1c6cad07c474196c2519868d48367ae6272cc782a57f722d964f979ee497\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b79b4346205333f1fce031cbf37756bdf48c5bd88fa56a8a205f125a98406c77\"" Nov 6 00:26:27.548217 containerd[1686]: time="2025-11-06T00:26:27.548195023Z" level=info msg="StartContainer for \"b79b4346205333f1fce031cbf37756bdf48c5bd88fa56a8a205f125a98406c77\"" Nov 6 00:26:27.549456 containerd[1686]: time="2025-11-06T00:26:27.549431914Z" level=info msg="connecting to shim b79b4346205333f1fce031cbf37756bdf48c5bd88fa56a8a205f125a98406c77" address="unix:///run/containerd/s/0d0b88d03664410e069a4b146a720764132396e020ae4e6cae993669e498f17d" protocol=ttrpc version=3 Nov 6 00:26:27.566016 systemd[1]: Started cri-containerd-b79b4346205333f1fce031cbf37756bdf48c5bd88fa56a8a205f125a98406c77.scope - libcontainer container b79b4346205333f1fce031cbf37756bdf48c5bd88fa56a8a205f125a98406c77. Nov 6 00:26:27.597707 containerd[1686]: time="2025-11-06T00:26:27.597684462Z" level=info msg="StartContainer for \"b79b4346205333f1fce031cbf37756bdf48c5bd88fa56a8a205f125a98406c77\" returns successfully" Nov 6 00:26:27.695829 kubelet[3152]: I1106 00:26:27.695717 3152 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9f7fl" podStartSLOduration=1.695701613 podStartE2EDuration="1.695701613s" podCreationTimestamp="2025-11-06 00:26:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:26:27.695678721 +0000 UTC m=+7.156906107" watchObservedRunningTime="2025-11-06 00:26:27.695701613 +0000 UTC m=+7.156929001" Nov 6 00:26:29.452459 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3504476337.mount: Deactivated successfully. 
Nov 6 00:26:29.844773 containerd[1686]: time="2025-11-06T00:26:29.844685632Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:26:29.847562 containerd[1686]: time="2025-11-06T00:26:29.847532431Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Nov 6 00:26:29.850665 containerd[1686]: time="2025-11-06T00:26:29.849936067Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:26:29.852983 containerd[1686]: time="2025-11-06T00:26:29.852962397Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:26:29.853385 containerd[1686]: time="2025-11-06T00:26:29.853366213Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.510401686s" Nov 6 00:26:29.853445 containerd[1686]: time="2025-11-06T00:26:29.853434939Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 6 00:26:29.858994 containerd[1686]: time="2025-11-06T00:26:29.858968651Z" level=info msg="CreateContainer within sandbox \"65388ef6d383abcb855bdc00e98e62e552164eb24e5b2d81febe4b28a0879569\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 6 00:26:29.877913 containerd[1686]: time="2025-11-06T00:26:29.875841777Z" level=info msg="Container 0dafab569e418dd3561d924189863d9eab899b6a7db6ef635e7097f9aacc2f39: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:26:29.891419 containerd[1686]: time="2025-11-06T00:26:29.891396160Z" level=info msg="CreateContainer within sandbox \"65388ef6d383abcb855bdc00e98e62e552164eb24e5b2d81febe4b28a0879569\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"0dafab569e418dd3561d924189863d9eab899b6a7db6ef635e7097f9aacc2f39\"" Nov 6 00:26:29.891930 containerd[1686]: time="2025-11-06T00:26:29.891788795Z" level=info msg="StartContainer for \"0dafab569e418dd3561d924189863d9eab899b6a7db6ef635e7097f9aacc2f39\"" Nov 6 00:26:29.892612 containerd[1686]: time="2025-11-06T00:26:29.892589938Z" level=info msg="connecting to shim 0dafab569e418dd3561d924189863d9eab899b6a7db6ef635e7097f9aacc2f39" address="unix:///run/containerd/s/ae121db1917920f44067fe54af86301bcffa13ac55550329bed0794b2471ed39" protocol=ttrpc version=3 Nov 6 00:26:29.919007 systemd[1]: Started cri-containerd-0dafab569e418dd3561d924189863d9eab899b6a7db6ef635e7097f9aacc2f39.scope - libcontainer container 0dafab569e418dd3561d924189863d9eab899b6a7db6ef635e7097f9aacc2f39. 
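Two figures in the pull messages above give a rough transfer rate for the tigera/operator image: 25,061,691 bytes read in 2.510401686s. Plain arithmetic, nothing Kubernetes-specific:

```python
# Figures from the "stop pulling image" and "Pulled image" messages above.
bytes_read = 25_061_691       # bytes read=25061691
pull_secs  = 2.510401686      # "... in 2.510401686s"

mib = bytes_read / (1024 * 1024)
print(f"{mib:.1f} MiB in {pull_secs:.2f} s  ->  {mib / pull_secs:.1f} MiB/s")
# ~23.9 MiB in 2.51 s -> ~9.5 MiB/s
```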
Nov 6 00:26:29.945097 containerd[1686]: time="2025-11-06T00:26:29.945074740Z" level=info msg="StartContainer for \"0dafab569e418dd3561d924189863d9eab899b6a7db6ef635e7097f9aacc2f39\" returns successfully" Nov 6 00:26:30.706290 kubelet[3152]: I1106 00:26:30.705759 3152 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-wfm5w" podStartSLOduration=2.193462952 podStartE2EDuration="4.7057439s" podCreationTimestamp="2025-11-06 00:26:26 +0000 UTC" firstStartedPulling="2025-11-06 00:26:27.341710273 +0000 UTC m=+6.802937644" lastFinishedPulling="2025-11-06 00:26:29.853991223 +0000 UTC m=+9.315218592" observedRunningTime="2025-11-06 00:26:30.705405014 +0000 UTC m=+10.166632399" watchObservedRunningTime="2025-11-06 00:26:30.7057439 +0000 UTC m=+10.166971288" Nov 6 00:26:35.410559 sudo[2158]: pam_unix(sudo:session): session closed for user root Nov 6 00:26:35.545905 sshd[2157]: Connection closed by 10.200.16.10 port 42620 Nov 6 00:26:35.550050 sshd-session[2154]: pam_unix(sshd:session): session closed for user core Nov 6 00:26:35.553605 systemd[1]: sshd@6-10.200.8.20:22-10.200.16.10:42620.service: Deactivated successfully. Nov 6 00:26:35.559439 systemd[1]: session-9.scope: Deactivated successfully. Nov 6 00:26:35.559950 systemd[1]: session-9.scope: Consumed 4.100s CPU time, 232.9M memory peak. Nov 6 00:26:35.563476 systemd-logind[1674]: Session 9 logged out. Waiting for processes to exit. Nov 6 00:26:35.565647 systemd-logind[1674]: Removed session 9. Nov 6 00:26:39.722645 systemd[1]: Created slice kubepods-besteffort-podcff9eab1_0e04_4105_8c16_bffd21c343b8.slice - libcontainer container kubepods-besteffort-podcff9eab1_0e04_4105_8c16_bffd21c343b8.slice. Nov 6 00:26:39.755839 kubelet[3152]: I1106 00:26:39.755791 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nftld\" (UniqueName: \"kubernetes.io/projected/cff9eab1-0e04-4105-8c16-bffd21c343b8-kube-api-access-nftld\") pod \"calico-typha-785bc4758c-sczk2\" (UID: \"cff9eab1-0e04-4105-8c16-bffd21c343b8\") " pod="calico-system/calico-typha-785bc4758c-sczk2" Nov 6 00:26:39.755839 kubelet[3152]: I1106 00:26:39.755842 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cff9eab1-0e04-4105-8c16-bffd21c343b8-tigera-ca-bundle\") pod \"calico-typha-785bc4758c-sczk2\" (UID: \"cff9eab1-0e04-4105-8c16-bffd21c343b8\") " pod="calico-system/calico-typha-785bc4758c-sczk2" Nov 6 00:26:39.755839 kubelet[3152]: I1106 00:26:39.755857 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/cff9eab1-0e04-4105-8c16-bffd21c343b8-typha-certs\") pod \"calico-typha-785bc4758c-sczk2\" (UID: \"cff9eab1-0e04-4105-8c16-bffd21c343b8\") " pod="calico-system/calico-typha-785bc4758c-sczk2" Nov 6 00:26:39.950314 systemd[1]: Created slice kubepods-besteffort-pod147142d6_d109_467a_aa0b_b5e7c5781ece.slice - libcontainer container kubepods-besteffort-pod147142d6_d109_467a_aa0b_b5e7c5781ece.slice. 
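The pod_startup_latency_tracker entry above for tigera-operator reports podStartE2EDuration=4.7057439s but podStartSLOduration=2.193462952s; the gap is exactly the image-pull window (firstStartedPulling to lastFinishedPulling). The check below reproduces that arithmetic from the logged timestamps (trimmed to microseconds); it is an observation about how these particular numbers line up, not a full statement of the tracker's rules. For kube-proxy earlier, both pulling timestamps are the zero time, so its SLO and E2E durations coincide at ~1.696s.

```python
from datetime import datetime

iso = datetime.fromisoformat
created        = iso("2025-11-06 00:26:26")          # podCreationTimestamp
first_pull     = iso("2025-11-06 00:26:27.341710")   # firstStartedPulling
last_pull      = iso("2025-11-06 00:26:29.853991")   # lastFinishedPulling
observed_ready = iso("2025-11-06 00:26:30.705743")   # watchObservedRunningTime

e2e  = observed_ready - created      # ~4.7057s  (podStartE2EDuration)
pull = last_pull - first_pull        # ~2.5123s spent pulling quay.io/tigera/operator
print("E2E:", e2e, " pull:", pull, " SLO:", e2e - pull)   # SLO ~2.1935s
```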
Nov 6 00:26:39.956370 kubelet[3152]: I1106 00:26:39.956346 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/147142d6-d109-467a-aa0b-b5e7c5781ece-lib-modules\") pod \"calico-node-hrgz8\" (UID: \"147142d6-d109-467a-aa0b-b5e7c5781ece\") " pod="calico-system/calico-node-hrgz8" Nov 6 00:26:39.956471 kubelet[3152]: I1106 00:26:39.956376 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/147142d6-d109-467a-aa0b-b5e7c5781ece-xtables-lock\") pod \"calico-node-hrgz8\" (UID: \"147142d6-d109-467a-aa0b-b5e7c5781ece\") " pod="calico-system/calico-node-hrgz8" Nov 6 00:26:39.956471 kubelet[3152]: I1106 00:26:39.956401 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/147142d6-d109-467a-aa0b-b5e7c5781ece-flexvol-driver-host\") pod \"calico-node-hrgz8\" (UID: \"147142d6-d109-467a-aa0b-b5e7c5781ece\") " pod="calico-system/calico-node-hrgz8" Nov 6 00:26:39.956471 kubelet[3152]: I1106 00:26:39.956422 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/147142d6-d109-467a-aa0b-b5e7c5781ece-node-certs\") pod \"calico-node-hrgz8\" (UID: \"147142d6-d109-467a-aa0b-b5e7c5781ece\") " pod="calico-system/calico-node-hrgz8" Nov 6 00:26:39.956471 kubelet[3152]: I1106 00:26:39.956437 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/147142d6-d109-467a-aa0b-b5e7c5781ece-cni-log-dir\") pod \"calico-node-hrgz8\" (UID: \"147142d6-d109-467a-aa0b-b5e7c5781ece\") " pod="calico-system/calico-node-hrgz8" Nov 6 00:26:39.956471 kubelet[3152]: I1106 00:26:39.956453 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gst66\" (UniqueName: \"kubernetes.io/projected/147142d6-d109-467a-aa0b-b5e7c5781ece-kube-api-access-gst66\") pod \"calico-node-hrgz8\" (UID: \"147142d6-d109-467a-aa0b-b5e7c5781ece\") " pod="calico-system/calico-node-hrgz8" Nov 6 00:26:39.956582 kubelet[3152]: I1106 00:26:39.956476 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/147142d6-d109-467a-aa0b-b5e7c5781ece-cni-net-dir\") pod \"calico-node-hrgz8\" (UID: \"147142d6-d109-467a-aa0b-b5e7c5781ece\") " pod="calico-system/calico-node-hrgz8" Nov 6 00:26:39.956582 kubelet[3152]: I1106 00:26:39.956493 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/147142d6-d109-467a-aa0b-b5e7c5781ece-policysync\") pod \"calico-node-hrgz8\" (UID: \"147142d6-d109-467a-aa0b-b5e7c5781ece\") " pod="calico-system/calico-node-hrgz8" Nov 6 00:26:39.956582 kubelet[3152]: I1106 00:26:39.956510 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/147142d6-d109-467a-aa0b-b5e7c5781ece-tigera-ca-bundle\") pod \"calico-node-hrgz8\" (UID: \"147142d6-d109-467a-aa0b-b5e7c5781ece\") " pod="calico-system/calico-node-hrgz8" Nov 6 00:26:39.956582 kubelet[3152]: I1106 00:26:39.956524 3152 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/147142d6-d109-467a-aa0b-b5e7c5781ece-var-lib-calico\") pod \"calico-node-hrgz8\" (UID: \"147142d6-d109-467a-aa0b-b5e7c5781ece\") " pod="calico-system/calico-node-hrgz8" Nov 6 00:26:39.956582 kubelet[3152]: I1106 00:26:39.956539 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/147142d6-d109-467a-aa0b-b5e7c5781ece-var-run-calico\") pod \"calico-node-hrgz8\" (UID: \"147142d6-d109-467a-aa0b-b5e7c5781ece\") " pod="calico-system/calico-node-hrgz8" Nov 6 00:26:39.956690 kubelet[3152]: I1106 00:26:39.956563 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/147142d6-d109-467a-aa0b-b5e7c5781ece-cni-bin-dir\") pod \"calico-node-hrgz8\" (UID: \"147142d6-d109-467a-aa0b-b5e7c5781ece\") " pod="calico-system/calico-node-hrgz8" Nov 6 00:26:40.030923 containerd[1686]: time="2025-11-06T00:26:40.030435355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-785bc4758c-sczk2,Uid:cff9eab1-0e04-4105-8c16-bffd21c343b8,Namespace:calico-system,Attempt:0,}" Nov 6 00:26:40.062927 kubelet[3152]: E1106 00:26:40.062635 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:40.062927 kubelet[3152]: W1106 00:26:40.062654 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:40.062927 kubelet[3152]: E1106 00:26:40.062671 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:40.069175 kubelet[3152]: E1106 00:26:40.069155 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:40.069175 kubelet[3152]: W1106 00:26:40.069172 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:40.069278 kubelet[3152]: E1106 00:26:40.069185 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:40.083855 containerd[1686]: time="2025-11-06T00:26:40.083789107Z" level=info msg="connecting to shim 72ed3d3064c9168983448e2e312b7b80c63003dea7bf89ea24838ae4a154b92a" address="unix:///run/containerd/s/01d9cfd18000a48b7f216c360bee9784c7399cb1d77aefc1b6373e96853cf289" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:26:40.111048 systemd[1]: Started cri-containerd-72ed3d3064c9168983448e2e312b7b80c63003dea7bf89ea24838ae4a154b92a.scope - libcontainer container 72ed3d3064c9168983448e2e312b7b80c63003dea7bf89ea24838ae4a154b92a. 
Nov 6 00:26:40.144138 kubelet[3152]: E1106 00:26:40.143654 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4j9vt" podUID="9c757a1d-95f3-4cbd-9adf-b65065b2eb8c" Nov 6 00:26:40.145116 kubelet[3152]: E1106 00:26:40.145055 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:40.145116 kubelet[3152]: W1106 00:26:40.145068 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:40.145116 kubelet[3152]: E1106 00:26:40.145082 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:40.145563 kubelet[3152]: E1106 00:26:40.145335 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:40.146157 kubelet[3152]: W1106 00:26:40.146055 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:40.146157 kubelet[3152]: E1106 00:26:40.146079 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:40.147801 kubelet[3152]: E1106 00:26:40.147343 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:40.147801 kubelet[3152]: W1106 00:26:40.147404 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:40.147801 kubelet[3152]: E1106 00:26:40.147420 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:40.148237 kubelet[3152]: E1106 00:26:40.148119 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:40.148520 kubelet[3152]: W1106 00:26:40.148416 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:40.148520 kubelet[3152]: E1106 00:26:40.148433 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:26:40.148639 kubelet[3152]: E1106 00:26:40.148632 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:40.148677 kubelet[3152]: W1106 00:26:40.148670 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:40.148714 kubelet[3152]: E1106 00:26:40.148708 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:40.148895 kubelet[3152]: E1106 00:26:40.148852 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:40.148895 kubelet[3152]: W1106 00:26:40.148860 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:40.148895 kubelet[3152]: E1106 00:26:40.148867 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:40.149130 kubelet[3152]: E1106 00:26:40.149082 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:40.149130 kubelet[3152]: W1106 00:26:40.149091 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:40.149130 kubelet[3152]: E1106 00:26:40.149100 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:40.150954 kubelet[3152]: E1106 00:26:40.150939 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:40.151100 kubelet[3152]: W1106 00:26:40.151034 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:40.151100 kubelet[3152]: E1106 00:26:40.151052 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:40.151308 kubelet[3152]: E1106 00:26:40.151266 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:40.151308 kubelet[3152]: W1106 00:26:40.151274 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:40.151308 kubelet[3152]: E1106 00:26:40.151283 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:26:40.151504 kubelet[3152]: E1106 00:26:40.151466 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:40.151504 kubelet[3152]: W1106 00:26:40.151474 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:40.151504 kubelet[3152]: E1106 00:26:40.151481 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:40.151697 kubelet[3152]: E1106 00:26:40.151659 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:40.151697 kubelet[3152]: W1106 00:26:40.151665 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:40.151697 kubelet[3152]: E1106 00:26:40.151672 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:40.151898 kubelet[3152]: E1106 00:26:40.151849 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:40.151898 kubelet[3152]: W1106 00:26:40.151856 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:40.151898 kubelet[3152]: E1106 00:26:40.151862 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:40.152108 kubelet[3152]: E1106 00:26:40.152066 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:40.152108 kubelet[3152]: W1106 00:26:40.152075 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:40.152108 kubelet[3152]: E1106 00:26:40.152084 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:40.152321 kubelet[3152]: E1106 00:26:40.152278 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:40.152321 kubelet[3152]: W1106 00:26:40.152287 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:40.152321 kubelet[3152]: E1106 00:26:40.152296 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:26:40.152536 kubelet[3152]: E1106 00:26:40.152490 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:40.152536 kubelet[3152]: W1106 00:26:40.152498 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:40.152536 kubelet[3152]: E1106 00:26:40.152507 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:40.152916 kubelet[3152]: E1106 00:26:40.152825 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:40.152916 kubelet[3152]: W1106 00:26:40.152835 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:40.152916 kubelet[3152]: E1106 00:26:40.152845 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:40.153151 kubelet[3152]: E1106 00:26:40.153109 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:40.153151 kubelet[3152]: W1106 00:26:40.153118 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:40.153151 kubelet[3152]: E1106 00:26:40.153127 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:40.153347 kubelet[3152]: E1106 00:26:40.153311 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:40.153347 kubelet[3152]: W1106 00:26:40.153317 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:40.153347 kubelet[3152]: E1106 00:26:40.153324 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:40.153550 kubelet[3152]: E1106 00:26:40.153508 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:40.153550 kubelet[3152]: W1106 00:26:40.153515 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:40.153550 kubelet[3152]: E1106 00:26:40.153523 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:26:40.155060 kubelet[3152]: E1106 00:26:40.154977 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:40.155060 kubelet[3152]: W1106 00:26:40.154991 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:40.155060 kubelet[3152]: E1106 00:26:40.155004 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:40.157294 kubelet[3152]: E1106 00:26:40.157281 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:40.157457 kubelet[3152]: W1106 00:26:40.157370 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:40.157457 kubelet[3152]: E1106 00:26:40.157387 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:40.157457 kubelet[3152]: I1106 00:26:40.157410 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9c757a1d-95f3-4cbd-9adf-b65065b2eb8c-kubelet-dir\") pod \"csi-node-driver-4j9vt\" (UID: \"9c757a1d-95f3-4cbd-9adf-b65065b2eb8c\") " pod="calico-system/csi-node-driver-4j9vt" Nov 6 00:26:40.157727 kubelet[3152]: E1106 00:26:40.157690 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:40.157727 kubelet[3152]: W1106 00:26:40.157703 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:40.157727 kubelet[3152]: E1106 00:26:40.157716 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:40.157909 kubelet[3152]: I1106 00:26:40.157839 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9c757a1d-95f3-4cbd-9adf-b65065b2eb8c-registration-dir\") pod \"csi-node-driver-4j9vt\" (UID: \"9c757a1d-95f3-4cbd-9adf-b65065b2eb8c\") " pod="calico-system/csi-node-driver-4j9vt" Nov 6 00:26:40.158099 kubelet[3152]: E1106 00:26:40.158068 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:40.158099 kubelet[3152]: W1106 00:26:40.158078 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:40.158099 kubelet[3152]: E1106 00:26:40.158088 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:26:40.158249 kubelet[3152]: I1106 00:26:40.158195 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9c757a1d-95f3-4cbd-9adf-b65065b2eb8c-socket-dir\") pod \"csi-node-driver-4j9vt\" (UID: \"9c757a1d-95f3-4cbd-9adf-b65065b2eb8c\") " pod="calico-system/csi-node-driver-4j9vt" Nov 6 00:26:40.158400 kubelet[3152]: E1106 00:26:40.158393 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:40.158445 kubelet[3152]: W1106 00:26:40.158430 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:40.158445 kubelet[3152]: E1106 00:26:40.158438 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:40.158530 kubelet[3152]: I1106 00:26:40.158499 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsgkk\" (UniqueName: \"kubernetes.io/projected/9c757a1d-95f3-4cbd-9adf-b65065b2eb8c-kube-api-access-xsgkk\") pod \"csi-node-driver-4j9vt\" (UID: \"9c757a1d-95f3-4cbd-9adf-b65065b2eb8c\") " pod="calico-system/csi-node-driver-4j9vt" Nov 6 00:26:40.158660 kubelet[3152]: E1106 00:26:40.158654 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:40.158697 kubelet[3152]: W1106 00:26:40.158685 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:40.158697 kubelet[3152]: E1106 00:26:40.158691 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:40.158752 kubelet[3152]: I1106 00:26:40.158739 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/9c757a1d-95f3-4cbd-9adf-b65065b2eb8c-varrun\") pod \"csi-node-driver-4j9vt\" (UID: \"9c757a1d-95f3-4cbd-9adf-b65065b2eb8c\") " pod="calico-system/csi-node-driver-4j9vt" Nov 6 00:26:40.158934 kubelet[3152]: E1106 00:26:40.158914 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:40.158934 kubelet[3152]: W1106 00:26:40.158921 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:40.158934 kubelet[3152]: E1106 00:26:40.158927 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:26:40.159127 kubelet[3152]: E1106 00:26:40.159111 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:40.159127 kubelet[3152]: W1106 00:26:40.159116 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:40.159127 kubelet[3152]: E1106 00:26:40.159121 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:40.159285 kubelet[3152]: E1106 00:26:40.159280 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:40.159919 kubelet[3152]: W1106 00:26:40.159908 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:40.159962 kubelet[3152]: E1106 00:26:40.159922 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:40.160043 kubelet[3152]: E1106 00:26:40.160036 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:40.160066 kubelet[3152]: W1106 00:26:40.160043 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:40.160066 kubelet[3152]: E1106 00:26:40.160050 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:40.160142 kubelet[3152]: E1106 00:26:40.160136 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:40.160175 kubelet[3152]: W1106 00:26:40.160143 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:40.160175 kubelet[3152]: E1106 00:26:40.160149 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:40.160243 kubelet[3152]: E1106 00:26:40.160236 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:40.160283 kubelet[3152]: W1106 00:26:40.160243 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:40.160283 kubelet[3152]: E1106 00:26:40.160249 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:26:40.160335 kubelet[3152]: E1106 00:26:40.160326 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:40.160335 kubelet[3152]: W1106 00:26:40.160330 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:40.160399 kubelet[3152]: E1106 00:26:40.160336 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:40.160448 kubelet[3152]: E1106 00:26:40.160422 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:40.160448 kubelet[3152]: W1106 00:26:40.160427 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:40.160448 kubelet[3152]: E1106 00:26:40.160432 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:40.160555 kubelet[3152]: E1106 00:26:40.160514 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:40.160555 kubelet[3152]: W1106 00:26:40.160518 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:40.160555 kubelet[3152]: E1106 00:26:40.160524 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:40.160635 kubelet[3152]: E1106 00:26:40.160602 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:40.160635 kubelet[3152]: W1106 00:26:40.160607 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:40.160635 kubelet[3152]: E1106 00:26:40.160612 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:26:40.176482 containerd[1686]: time="2025-11-06T00:26:40.176457241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-785bc4758c-sczk2,Uid:cff9eab1-0e04-4105-8c16-bffd21c343b8,Namespace:calico-system,Attempt:0,} returns sandbox id \"72ed3d3064c9168983448e2e312b7b80c63003dea7bf89ea24838ae4a154b92a\"" Nov 6 00:26:40.178161 containerd[1686]: time="2025-11-06T00:26:40.177608913Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 6 00:26:40.258986 containerd[1686]: time="2025-11-06T00:26:40.258946669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hrgz8,Uid:147142d6-d109-467a-aa0b-b5e7c5781ece,Namespace:calico-system,Attempt:0,}" Nov 6 00:26:40.259140 kubelet[3152]: E1106 00:26:40.259130 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:40.259234 kubelet[3152]: W1106 00:26:40.259184 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:40.259234 kubelet[3152]: E1106 00:26:40.259206 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:40.259456 kubelet[3152]: E1106 00:26:40.259443 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:40.259456 kubelet[3152]: W1106 00:26:40.259453 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:40.259538 kubelet[3152]: E1106 00:26:40.259463 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:40.259605 kubelet[3152]: E1106 00:26:40.259595 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:40.259605 kubelet[3152]: W1106 00:26:40.259603 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:40.259674 kubelet[3152]: E1106 00:26:40.259611 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:40.259762 kubelet[3152]: E1106 00:26:40.259752 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:40.259762 kubelet[3152]: W1106 00:26:40.259760 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:40.259830 kubelet[3152]: E1106 00:26:40.259769 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:26:40.259896 kubelet[3152]: E1106 00:26:40.259873 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:40.259924 kubelet[3152]: W1106 00:26:40.259896 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:40.259924 kubelet[3152]: E1106 00:26:40.259903 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:40.260017 kubelet[3152]: E1106 00:26:40.260008 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:40.260017 kubelet[3152]: W1106 00:26:40.260016 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:40.260071 kubelet[3152]: E1106 00:26:40.260022 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:40.260189 kubelet[3152]: E1106 00:26:40.260180 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:40.260189 kubelet[3152]: W1106 00:26:40.260187 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:40.260256 kubelet[3152]: E1106 00:26:40.260194 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:40.260397 kubelet[3152]: E1106 00:26:40.260372 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:40.260397 kubelet[3152]: W1106 00:26:40.260381 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:40.260454 kubelet[3152]: E1106 00:26:40.260441 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:40.260557 kubelet[3152]: E1106 00:26:40.260550 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:40.260582 kubelet[3152]: W1106 00:26:40.260557 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:40.260582 kubelet[3152]: E1106 00:26:40.260564 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:26:40.260715 kubelet[3152]: E1106 00:26:40.260686 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:40.260745 kubelet[3152]: W1106 00:26:40.260719 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:40.260745 kubelet[3152]: E1106 00:26:40.260725 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:40.260936 kubelet[3152]: E1106 00:26:40.260909 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:40.260936 kubelet[3152]: W1106 00:26:40.260931 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:40.261008 kubelet[3152]: E1106 00:26:40.260938 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:40.261108 kubelet[3152]: E1106 00:26:40.261101 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:40.261141 kubelet[3152]: W1106 00:26:40.261108 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:40.261141 kubelet[3152]: E1106 00:26:40.261115 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:40.261243 kubelet[3152]: E1106 00:26:40.261233 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:40.261243 kubelet[3152]: W1106 00:26:40.261240 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:40.261291 kubelet[3152]: E1106 00:26:40.261246 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:40.261394 kubelet[3152]: E1106 00:26:40.261388 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:40.261430 kubelet[3152]: W1106 00:26:40.261394 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:40.261430 kubelet[3152]: E1106 00:26:40.261400 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:26:40.261676 kubelet[3152]: E1106 00:26:40.261622 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:40.261676 kubelet[3152]: W1106 00:26:40.261635 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:40.261676 kubelet[3152]: E1106 00:26:40.261643 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:40.261793 kubelet[3152]: E1106 00:26:40.261783 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:40.261820 kubelet[3152]: W1106 00:26:40.261793 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:40.261820 kubelet[3152]: E1106 00:26:40.261799 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:40.261949 kubelet[3152]: E1106 00:26:40.261935 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:40.261949 kubelet[3152]: W1106 00:26:40.261943 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:40.261949 kubelet[3152]: E1106 00:26:40.261949 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:40.262081 kubelet[3152]: E1106 00:26:40.262066 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:40.262081 kubelet[3152]: W1106 00:26:40.262071 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:40.262081 kubelet[3152]: E1106 00:26:40.262077 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:40.262301 kubelet[3152]: E1106 00:26:40.262291 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:40.262301 kubelet[3152]: W1106 00:26:40.262298 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:40.262373 kubelet[3152]: E1106 00:26:40.262305 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:26:40.262460 kubelet[3152]: E1106 00:26:40.262450 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:40.262460 kubelet[3152]: W1106 00:26:40.262456 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:40.262521 kubelet[3152]: E1106 00:26:40.262464 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:40.262607 kubelet[3152]: E1106 00:26:40.262596 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:40.262607 kubelet[3152]: W1106 00:26:40.262605 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:40.262689 kubelet[3152]: E1106 00:26:40.262612 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:40.262731 kubelet[3152]: E1106 00:26:40.262722 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:40.262731 kubelet[3152]: W1106 00:26:40.262729 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:40.262731 kubelet[3152]: E1106 00:26:40.262735 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:40.263003 kubelet[3152]: E1106 00:26:40.262992 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:40.263003 kubelet[3152]: W1106 00:26:40.263003 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:40.263176 kubelet[3152]: E1106 00:26:40.263011 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:40.263176 kubelet[3152]: E1106 00:26:40.263163 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:40.263176 kubelet[3152]: W1106 00:26:40.263169 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:40.263328 kubelet[3152]: E1106 00:26:40.263177 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:26:40.263328 kubelet[3152]: E1106 00:26:40.263316 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:40.263328 kubelet[3152]: W1106 00:26:40.263321 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:40.263415 kubelet[3152]: E1106 00:26:40.263339 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:40.268673 kubelet[3152]: E1106 00:26:40.268651 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:40.268673 kubelet[3152]: W1106 00:26:40.268668 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:40.268756 kubelet[3152]: E1106 00:26:40.268678 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:40.299388 containerd[1686]: time="2025-11-06T00:26:40.299246103Z" level=info msg="connecting to shim 8b1adec105a69c348bf096f85a06931f873d391fa1fbfe13007cc6f67bff8e5c" address="unix:///run/containerd/s/c54408927a53018b16022b3e57d59c5d7ef4fa776ae050523e4b2e4bf17ee413" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:26:40.320043 systemd[1]: Started cri-containerd-8b1adec105a69c348bf096f85a06931f873d391fa1fbfe13007cc6f67bff8e5c.scope - libcontainer container 8b1adec105a69c348bf096f85a06931f873d391fa1fbfe13007cc6f67bff8e5c. Nov 6 00:26:40.343865 containerd[1686]: time="2025-11-06T00:26:40.343798512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hrgz8,Uid:147142d6-d109-467a-aa0b-b5e7c5781ece,Namespace:calico-system,Attempt:0,} returns sandbox id \"8b1adec105a69c348bf096f85a06931f873d391fa1fbfe13007cc6f67bff8e5c\"" Nov 6 00:26:41.484554 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3001952070.mount: Deactivated successfully. 
Nov 6 00:26:41.626485 kubelet[3152]: E1106 00:26:41.626430 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4j9vt" podUID="9c757a1d-95f3-4cbd-9adf-b65065b2eb8c" Nov 6 00:26:42.399870 containerd[1686]: time="2025-11-06T00:26:42.399829903Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:26:42.402063 containerd[1686]: time="2025-11-06T00:26:42.401991184Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Nov 6 00:26:42.404442 containerd[1686]: time="2025-11-06T00:26:42.404402161Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:26:42.407584 containerd[1686]: time="2025-11-06T00:26:42.407545103Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:26:42.408063 containerd[1686]: time="2025-11-06T00:26:42.407824692Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.229536455s" Nov 6 00:26:42.408063 containerd[1686]: time="2025-11-06T00:26:42.407850169Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 6 00:26:42.409155 containerd[1686]: time="2025-11-06T00:26:42.409130213Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 6 00:26:42.423925 containerd[1686]: time="2025-11-06T00:26:42.423902219Z" level=info msg="CreateContainer within sandbox \"72ed3d3064c9168983448e2e312b7b80c63003dea7bf89ea24838ae4a154b92a\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 6 00:26:42.439970 containerd[1686]: time="2025-11-06T00:26:42.439926927Z" level=info msg="Container c3709ad125c5b4eabcb37d70fde03349964f25920f4278f4441ab05d3667e40f: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:26:42.444319 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4233196772.mount: Deactivated successfully. 
Nov 6 00:26:42.456889 containerd[1686]: time="2025-11-06T00:26:42.456865825Z" level=info msg="CreateContainer within sandbox \"72ed3d3064c9168983448e2e312b7b80c63003dea7bf89ea24838ae4a154b92a\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"c3709ad125c5b4eabcb37d70fde03349964f25920f4278f4441ab05d3667e40f\"" Nov 6 00:26:42.457345 containerd[1686]: time="2025-11-06T00:26:42.457211699Z" level=info msg="StartContainer for \"c3709ad125c5b4eabcb37d70fde03349964f25920f4278f4441ab05d3667e40f\"" Nov 6 00:26:42.458642 containerd[1686]: time="2025-11-06T00:26:42.458620040Z" level=info msg="connecting to shim c3709ad125c5b4eabcb37d70fde03349964f25920f4278f4441ab05d3667e40f" address="unix:///run/containerd/s/01d9cfd18000a48b7f216c360bee9784c7399cb1d77aefc1b6373e96853cf289" protocol=ttrpc version=3 Nov 6 00:26:42.481012 systemd[1]: Started cri-containerd-c3709ad125c5b4eabcb37d70fde03349964f25920f4278f4441ab05d3667e40f.scope - libcontainer container c3709ad125c5b4eabcb37d70fde03349964f25920f4278f4441ab05d3667e40f. Nov 6 00:26:42.527117 containerd[1686]: time="2025-11-06T00:26:42.527098764Z" level=info msg="StartContainer for \"c3709ad125c5b4eabcb37d70fde03349964f25920f4278f4441ab05d3667e40f\" returns successfully" Nov 6 00:26:42.765072 kubelet[3152]: I1106 00:26:42.764808 3152 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-785bc4758c-sczk2" podStartSLOduration=1.533508527 podStartE2EDuration="3.764793501s" podCreationTimestamp="2025-11-06 00:26:39 +0000 UTC" firstStartedPulling="2025-11-06 00:26:40.177265361 +0000 UTC m=+19.638492732" lastFinishedPulling="2025-11-06 00:26:42.408550337 +0000 UTC m=+21.869777706" observedRunningTime="2025-11-06 00:26:42.764559594 +0000 UTC m=+22.225786977" watchObservedRunningTime="2025-11-06 00:26:42.764793501 +0000 UTC m=+22.226020883" Nov 6 00:26:42.773009 kubelet[3152]: E1106 00:26:42.772925 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:42.773009 kubelet[3152]: W1106 00:26:42.772945 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:42.773009 kubelet[3152]: E1106 00:26:42.772962 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:42.773312 kubelet[3152]: E1106 00:26:42.773269 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:42.773312 kubelet[3152]: W1106 00:26:42.773276 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:42.773312 kubelet[3152]: E1106 00:26:42.773286 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:26:42.773506 kubelet[3152]: E1106 00:26:42.773467 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:42.773506 kubelet[3152]: W1106 00:26:42.773473 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:42.773506 kubelet[3152]: E1106 00:26:42.773480 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:42.773724 kubelet[3152]: E1106 00:26:42.773683 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:42.773724 kubelet[3152]: W1106 00:26:42.773689 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:42.773724 kubelet[3152]: E1106 00:26:42.773695 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:42.773890 kubelet[3152]: E1106 00:26:42.773858 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:42.773890 kubelet[3152]: W1106 00:26:42.773863 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:42.773890 kubelet[3152]: E1106 00:26:42.773870 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:42.774071 kubelet[3152]: E1106 00:26:42.774040 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:42.774071 kubelet[3152]: W1106 00:26:42.774047 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:42.774071 kubelet[3152]: E1106 00:26:42.774053 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:42.774229 kubelet[3152]: E1106 00:26:42.774199 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:42.774229 kubelet[3152]: W1106 00:26:42.774204 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:42.774229 kubelet[3152]: E1106 00:26:42.774210 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:26:42.774392 kubelet[3152]: E1106 00:26:42.774356 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:42.774392 kubelet[3152]: W1106 00:26:42.774361 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:42.774392 kubelet[3152]: E1106 00:26:42.774366 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:42.774545 kubelet[3152]: E1106 00:26:42.774516 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:42.774545 kubelet[3152]: W1106 00:26:42.774521 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:42.774545 kubelet[3152]: E1106 00:26:42.774527 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:42.775228 kubelet[3152]: E1106 00:26:42.775170 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:42.775228 kubelet[3152]: W1106 00:26:42.775181 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:42.775228 kubelet[3152]: E1106 00:26:42.775194 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:42.775505 kubelet[3152]: E1106 00:26:42.775456 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:42.775505 kubelet[3152]: W1106 00:26:42.775465 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:42.775505 kubelet[3152]: E1106 00:26:42.775476 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:42.775745 kubelet[3152]: E1106 00:26:42.775695 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:42.775745 kubelet[3152]: W1106 00:26:42.775703 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:42.775745 kubelet[3152]: E1106 00:26:42.775712 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:26:42.776001 kubelet[3152]: E1106 00:26:42.775952 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:42.776001 kubelet[3152]: W1106 00:26:42.775961 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:42.776001 kubelet[3152]: E1106 00:26:42.775971 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:42.776440 kubelet[3152]: E1106 00:26:42.776347 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:42.776616 kubelet[3152]: W1106 00:26:42.776510 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:42.776616 kubelet[3152]: E1106 00:26:42.776530 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:42.776974 kubelet[3152]: E1106 00:26:42.776961 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:42.777010 kubelet[3152]: W1106 00:26:42.776975 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:42.777010 kubelet[3152]: E1106 00:26:42.776988 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:42.779970 kubelet[3152]: E1106 00:26:42.779956 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:42.780130 kubelet[3152]: W1106 00:26:42.780046 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:42.780130 kubelet[3152]: E1106 00:26:42.780062 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:42.780391 kubelet[3152]: E1106 00:26:42.780359 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:42.780391 kubelet[3152]: W1106 00:26:42.780369 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:42.780391 kubelet[3152]: E1106 00:26:42.780381 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:26:42.780832 kubelet[3152]: E1106 00:26:42.780794 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:42.780832 kubelet[3152]: W1106 00:26:42.780806 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:42.780832 kubelet[3152]: E1106 00:26:42.780818 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:42.782906 kubelet[3152]: E1106 00:26:42.781449 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:42.783025 kubelet[3152]: W1106 00:26:42.782991 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:42.783025 kubelet[3152]: E1106 00:26:42.783012 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:42.784028 kubelet[3152]: E1106 00:26:42.783991 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:42.784028 kubelet[3152]: W1106 00:26:42.784004 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:42.784028 kubelet[3152]: E1106 00:26:42.784015 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:42.784305 kubelet[3152]: E1106 00:26:42.784297 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:42.784389 kubelet[3152]: W1106 00:26:42.784345 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:42.784389 kubelet[3152]: E1106 00:26:42.784356 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:42.784526 kubelet[3152]: E1106 00:26:42.784520 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:42.784572 kubelet[3152]: W1106 00:26:42.784555 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:42.784572 kubelet[3152]: E1106 00:26:42.784564 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:26:42.784804 kubelet[3152]: E1106 00:26:42.784774 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:42.785019 kubelet[3152]: W1106 00:26:42.784848 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:42.785019 kubelet[3152]: E1106 00:26:42.784861 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:42.785436 kubelet[3152]: E1106 00:26:42.785222 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:42.785820 kubelet[3152]: W1106 00:26:42.785690 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:42.785820 kubelet[3152]: E1106 00:26:42.785707 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:42.786163 kubelet[3152]: E1106 00:26:42.786129 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:42.786163 kubelet[3152]: W1106 00:26:42.786140 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:42.786163 kubelet[3152]: E1106 00:26:42.786154 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:42.787059 kubelet[3152]: E1106 00:26:42.786995 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:42.787809 kubelet[3152]: W1106 00:26:42.787631 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:42.787809 kubelet[3152]: E1106 00:26:42.787650 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:42.788103 kubelet[3152]: E1106 00:26:42.788064 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:42.788103 kubelet[3152]: W1106 00:26:42.788075 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:42.788103 kubelet[3152]: E1106 00:26:42.788085 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:26:42.788295 kubelet[3152]: E1106 00:26:42.788275 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:42.788295 kubelet[3152]: W1106 00:26:42.788281 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:42.788295 kubelet[3152]: E1106 00:26:42.788288 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:42.788469 kubelet[3152]: E1106 00:26:42.788450 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:42.788513 kubelet[3152]: W1106 00:26:42.788507 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:42.788545 kubelet[3152]: E1106 00:26:42.788540 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:42.789903 kubelet[3152]: E1106 00:26:42.788716 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:42.789903 kubelet[3152]: W1106 00:26:42.789682 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:42.789903 kubelet[3152]: E1106 00:26:42.789702 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:42.790259 kubelet[3152]: E1106 00:26:42.790245 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:42.790324 kubelet[3152]: W1106 00:26:42.790314 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:42.790463 kubelet[3152]: E1106 00:26:42.790453 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:42.790782 kubelet[3152]: E1106 00:26:42.790770 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:42.790849 kubelet[3152]: W1106 00:26:42.790841 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:42.790917 kubelet[3152]: E1106 00:26:42.790908 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:26:42.791169 kubelet[3152]: E1106 00:26:42.791126 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:42.791169 kubelet[3152]: W1106 00:26:42.791136 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:42.791169 kubelet[3152]: E1106 00:26:42.791149 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:43.626396 kubelet[3152]: E1106 00:26:43.626359 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4j9vt" podUID="9c757a1d-95f3-4cbd-9adf-b65065b2eb8c" Nov 6 00:26:43.683087 containerd[1686]: time="2025-11-06T00:26:43.683051013Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:26:43.685471 containerd[1686]: time="2025-11-06T00:26:43.685382091Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Nov 6 00:26:43.687946 containerd[1686]: time="2025-11-06T00:26:43.687921036Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:26:43.692983 containerd[1686]: time="2025-11-06T00:26:43.692950194Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:26:43.693464 containerd[1686]: time="2025-11-06T00:26:43.693443540Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.284283524s" Nov 6 00:26:43.693534 containerd[1686]: time="2025-11-06T00:26:43.693523479Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 6 00:26:43.700502 containerd[1686]: time="2025-11-06T00:26:43.700479819Z" level=info msg="CreateContainer within sandbox \"8b1adec105a69c348bf096f85a06931f873d391fa1fbfe13007cc6f67bff8e5c\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 6 00:26:43.716928 containerd[1686]: time="2025-11-06T00:26:43.715040981Z" level=info msg="Container c0fc838682346e7ab2d448a20c804216819a0836da77fbffdefdecd45ed9d256: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:26:43.722411 kubelet[3152]: I1106 00:26:43.722391 3152 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 6 00:26:43.735055 containerd[1686]: time="2025-11-06T00:26:43.735029743Z" level=info msg="CreateContainer within sandbox 
\"8b1adec105a69c348bf096f85a06931f873d391fa1fbfe13007cc6f67bff8e5c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c0fc838682346e7ab2d448a20c804216819a0836da77fbffdefdecd45ed9d256\"" Nov 6 00:26:43.735487 containerd[1686]: time="2025-11-06T00:26:43.735459634Z" level=info msg="StartContainer for \"c0fc838682346e7ab2d448a20c804216819a0836da77fbffdefdecd45ed9d256\"" Nov 6 00:26:43.736791 containerd[1686]: time="2025-11-06T00:26:43.736767599Z" level=info msg="connecting to shim c0fc838682346e7ab2d448a20c804216819a0836da77fbffdefdecd45ed9d256" address="unix:///run/containerd/s/c54408927a53018b16022b3e57d59c5d7ef4fa776ae050523e4b2e4bf17ee413" protocol=ttrpc version=3 Nov 6 00:26:43.756023 systemd[1]: Started cri-containerd-c0fc838682346e7ab2d448a20c804216819a0836da77fbffdefdecd45ed9d256.scope - libcontainer container c0fc838682346e7ab2d448a20c804216819a0836da77fbffdefdecd45ed9d256. Nov 6 00:26:43.784969 kubelet[3152]: E1106 00:26:43.784905 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:43.784969 kubelet[3152]: W1106 00:26:43.784928 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:43.785634 kubelet[3152]: E1106 00:26:43.785094 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:43.785990 kubelet[3152]: E1106 00:26:43.785871 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:43.785990 kubelet[3152]: W1106 00:26:43.785920 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:43.785990 kubelet[3152]: E1106 00:26:43.785936 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:43.786101 containerd[1686]: time="2025-11-06T00:26:43.786066782Z" level=info msg="StartContainer for \"c0fc838682346e7ab2d448a20c804216819a0836da77fbffdefdecd45ed9d256\" returns successfully" Nov 6 00:26:43.786456 kubelet[3152]: E1106 00:26:43.786445 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:43.786626 kubelet[3152]: W1106 00:26:43.786617 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:43.786686 kubelet[3152]: E1106 00:26:43.786666 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:26:43.787324 kubelet[3152]: E1106 00:26:43.787274 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:43.787324 kubelet[3152]: W1106 00:26:43.787286 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:43.787324 kubelet[3152]: E1106 00:26:43.787297 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:43.788298 kubelet[3152]: E1106 00:26:43.788187 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:43.788298 kubelet[3152]: W1106 00:26:43.788197 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:43.788540 kubelet[3152]: E1106 00:26:43.788504 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:43.788911 kubelet[3152]: E1106 00:26:43.788871 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:43.789024 kubelet[3152]: W1106 00:26:43.788978 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:43.789024 kubelet[3152]: E1106 00:26:43.788993 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:43.789231 kubelet[3152]: E1106 00:26:43.789191 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:43.789231 kubelet[3152]: W1106 00:26:43.789199 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:43.789231 kubelet[3152]: E1106 00:26:43.789207 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:43.789441 kubelet[3152]: E1106 00:26:43.789395 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:43.789441 kubelet[3152]: W1106 00:26:43.789401 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:43.789441 kubelet[3152]: E1106 00:26:43.789408 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:26:43.789655 kubelet[3152]: E1106 00:26:43.789608 3152 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:26:43.789655 kubelet[3152]: W1106 00:26:43.789622 3152 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:26:43.789655 kubelet[3152]: E1106 00:26:43.789631 3152 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:26:43.791752 systemd[1]: cri-containerd-c0fc838682346e7ab2d448a20c804216819a0836da77fbffdefdecd45ed9d256.scope: Deactivated successfully. Nov 6 00:26:43.795931 containerd[1686]: time="2025-11-06T00:26:43.795822268Z" level=info msg="received exit event container_id:\"c0fc838682346e7ab2d448a20c804216819a0836da77fbffdefdecd45ed9d256\" id:\"c0fc838682346e7ab2d448a20c804216819a0836da77fbffdefdecd45ed9d256\" pid:3819 exited_at:{seconds:1762388803 nanos:795534591}" Nov 6 00:26:43.796113 containerd[1686]: time="2025-11-06T00:26:43.796085877Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c0fc838682346e7ab2d448a20c804216819a0836da77fbffdefdecd45ed9d256\" id:\"c0fc838682346e7ab2d448a20c804216819a0836da77fbffdefdecd45ed9d256\" pid:3819 exited_at:{seconds:1762388803 nanos:795534591}" Nov 6 00:26:43.811428 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c0fc838682346e7ab2d448a20c804216819a0836da77fbffdefdecd45ed9d256-rootfs.mount: Deactivated successfully. Nov 6 00:26:45.625829 kubelet[3152]: E1106 00:26:45.625777 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4j9vt" podUID="9c757a1d-95f3-4cbd-9adf-b65065b2eb8c" Nov 6 00:26:46.734771 containerd[1686]: time="2025-11-06T00:26:46.734733809Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 6 00:26:47.626501 kubelet[3152]: E1106 00:26:47.626460 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4j9vt" podUID="9c757a1d-95f3-4cbd-9adf-b65065b2eb8c" Nov 6 00:26:49.626521 kubelet[3152]: E1106 00:26:49.626477 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4j9vt" podUID="9c757a1d-95f3-4cbd-9adf-b65065b2eb8c" Nov 6 00:26:50.052379 containerd[1686]: time="2025-11-06T00:26:50.052341827Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:26:50.054570 containerd[1686]: time="2025-11-06T00:26:50.054489432Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 6 00:26:50.057400 containerd[1686]: time="2025-11-06T00:26:50.057375407Z" level=info msg="ImageCreate event 
name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:26:50.060912 containerd[1686]: time="2025-11-06T00:26:50.060612357Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:26:50.061115 containerd[1686]: time="2025-11-06T00:26:50.061092065Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.326315147s" Nov 6 00:26:50.061148 containerd[1686]: time="2025-11-06T00:26:50.061123676Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 6 00:26:50.072073 containerd[1686]: time="2025-11-06T00:26:50.072036982Z" level=info msg="CreateContainer within sandbox \"8b1adec105a69c348bf096f85a06931f873d391fa1fbfe13007cc6f67bff8e5c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 6 00:26:50.091920 containerd[1686]: time="2025-11-06T00:26:50.090614073Z" level=info msg="Container 0f0c2357bc56485bc3e7fd7a7738ee3af194d41e4dcfc1180dcccc284cbded74: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:26:50.092838 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3098021287.mount: Deactivated successfully. Nov 6 00:26:50.106164 containerd[1686]: time="2025-11-06T00:26:50.106141416Z" level=info msg="CreateContainer within sandbox \"8b1adec105a69c348bf096f85a06931f873d391fa1fbfe13007cc6f67bff8e5c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"0f0c2357bc56485bc3e7fd7a7738ee3af194d41e4dcfc1180dcccc284cbded74\"" Nov 6 00:26:50.107850 containerd[1686]: time="2025-11-06T00:26:50.107386354Z" level=info msg="StartContainer for \"0f0c2357bc56485bc3e7fd7a7738ee3af194d41e4dcfc1180dcccc284cbded74\"" Nov 6 00:26:50.108727 containerd[1686]: time="2025-11-06T00:26:50.108667691Z" level=info msg="connecting to shim 0f0c2357bc56485bc3e7fd7a7738ee3af194d41e4dcfc1180dcccc284cbded74" address="unix:///run/containerd/s/c54408927a53018b16022b3e57d59c5d7ef4fa776ae050523e4b2e4bf17ee413" protocol=ttrpc version=3 Nov 6 00:26:50.128028 systemd[1]: Started cri-containerd-0f0c2357bc56485bc3e7fd7a7738ee3af194d41e4dcfc1180dcccc284cbded74.scope - libcontainer container 0f0c2357bc56485bc3e7fd7a7738ee3af194d41e4dcfc1180dcccc284cbded74. Nov 6 00:26:50.169008 containerd[1686]: time="2025-11-06T00:26:50.168980902Z" level=info msg="StartContainer for \"0f0c2357bc56485bc3e7fd7a7738ee3af194d41e4dcfc1180dcccc284cbded74\" returns successfully" Nov 6 00:26:51.279721 containerd[1686]: time="2025-11-06T00:26:51.279656932Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 6 00:26:51.281378 systemd[1]: cri-containerd-0f0c2357bc56485bc3e7fd7a7738ee3af194d41e4dcfc1180dcccc284cbded74.scope: Deactivated successfully. 
Nov 6 00:26:51.281623 systemd[1]: cri-containerd-0f0c2357bc56485bc3e7fd7a7738ee3af194d41e4dcfc1180dcccc284cbded74.scope: Consumed 388ms CPU time, 194.5M memory peak, 171.3M written to disk. Nov 6 00:26:51.283067 containerd[1686]: time="2025-11-06T00:26:51.283005910Z" level=info msg="received exit event container_id:\"0f0c2357bc56485bc3e7fd7a7738ee3af194d41e4dcfc1180dcccc284cbded74\" id:\"0f0c2357bc56485bc3e7fd7a7738ee3af194d41e4dcfc1180dcccc284cbded74\" pid:3891 exited_at:{seconds:1762388811 nanos:282772949}" Nov 6 00:26:51.283313 containerd[1686]: time="2025-11-06T00:26:51.283292468Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0f0c2357bc56485bc3e7fd7a7738ee3af194d41e4dcfc1180dcccc284cbded74\" id:\"0f0c2357bc56485bc3e7fd7a7738ee3af194d41e4dcfc1180dcccc284cbded74\" pid:3891 exited_at:{seconds:1762388811 nanos:282772949}" Nov 6 00:26:51.300429 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0f0c2357bc56485bc3e7fd7a7738ee3af194d41e4dcfc1180dcccc284cbded74-rootfs.mount: Deactivated successfully. Nov 6 00:26:51.349402 kubelet[3152]: I1106 00:26:51.349373 3152 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Nov 6 00:26:51.559470 systemd[1]: Created slice kubepods-besteffort-podc9883439_5e85_428a_8c5e_1baa916caf76.slice - libcontainer container kubepods-besteffort-podc9883439_5e85_428a_8c5e_1baa916caf76.slice. Nov 6 00:26:51.728804 systemd[1]: Created slice kubepods-besteffort-podc86a3ffd_cf2f_4e08_9736_f5e39ae366f1.slice - libcontainer container kubepods-besteffort-podc86a3ffd_cf2f_4e08_9736_f5e39ae366f1.slice. Nov 6 00:26:51.733629 systemd[1]: Created slice kubepods-besteffort-pod9c757a1d_95f3_4cbd_9adf_b65065b2eb8c.slice - libcontainer container kubepods-besteffort-pod9c757a1d_95f3_4cbd_9adf_b65065b2eb8c.slice. 
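
The exited_at field in the TaskExit event above is a Unix timestamp. Converting it (a quick check with Go's time package, nothing more) confirms it lines up with the 00:26:51 wall-clock time of the surrounding entries:

// Convert the exited_at {seconds, nanos} pair from the containerd event
// above into an RFC 3339 timestamp.
package main

import (
	"fmt"
	"time"
)

func main() {
	exitedAt := time.Unix(1762388811, 282772949).UTC()
	fmt.Println(exitedAt.Format(time.RFC3339Nano)) // 2025-11-06T00:26:51.282772949Z
}
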
Nov 6 00:26:51.803502 kubelet[3152]: I1106 00:26:51.740852 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c9883439-5e85-428a-8c5e-1baa916caf76-calico-apiserver-certs\") pod \"calico-apiserver-8559b785f9-n2pht\" (UID: \"c9883439-5e85-428a-8c5e-1baa916caf76\") " pod="calico-apiserver/calico-apiserver-8559b785f9-n2pht" Nov 6 00:26:51.803502 kubelet[3152]: I1106 00:26:51.740877 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9fnv\" (UniqueName: \"kubernetes.io/projected/c9883439-5e85-428a-8c5e-1baa916caf76-kube-api-access-s9fnv\") pod \"calico-apiserver-8559b785f9-n2pht\" (UID: \"c9883439-5e85-428a-8c5e-1baa916caf76\") " pod="calico-apiserver/calico-apiserver-8559b785f9-n2pht" Nov 6 00:26:51.841940 kubelet[3152]: I1106 00:26:51.841272 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c86a3ffd-cf2f-4e08-9736-f5e39ae366f1-calico-apiserver-certs\") pod \"calico-apiserver-8559b785f9-fhvmw\" (UID: \"c86a3ffd-cf2f-4e08-9736-f5e39ae366f1\") " pod="calico-apiserver/calico-apiserver-8559b785f9-fhvmw" Nov 6 00:26:51.841940 kubelet[3152]: I1106 00:26:51.841347 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdvr7\" (UniqueName: \"kubernetes.io/projected/c86a3ffd-cf2f-4e08-9736-f5e39ae366f1-kube-api-access-rdvr7\") pod \"calico-apiserver-8559b785f9-fhvmw\" (UID: \"c86a3ffd-cf2f-4e08-9736-f5e39ae366f1\") " pod="calico-apiserver/calico-apiserver-8559b785f9-fhvmw" Nov 6 00:26:52.110849 containerd[1686]: time="2025-11-06T00:26:52.110403524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4j9vt,Uid:9c757a1d-95f3-4cbd-9adf-b65065b2eb8c,Namespace:calico-system,Attempt:0,}" Nov 6 00:26:52.127732 systemd[1]: Created slice kubepods-besteffort-pod3d2f27f2_55bd_484a_8c3a_4032ff9011c3.slice - libcontainer container kubepods-besteffort-pod3d2f27f2_55bd_484a_8c3a_4032ff9011c3.slice. Nov 6 00:26:52.166460 systemd[1]: Created slice kubepods-besteffort-pod4902c4e4_3977_4e0d_b87b_89acc6926de6.slice - libcontainer container kubepods-besteffort-pod4902c4e4_3977_4e0d_b87b_89acc6926de6.slice. Nov 6 00:26:52.210060 systemd[1]: Created slice kubepods-besteffort-pod797f66e1_c3e4_4d4b_8032_18c5d22ec25c.slice - libcontainer container kubepods-besteffort-pod797f66e1_c3e4_4d4b_8032_18c5d22ec25c.slice. Nov 6 00:26:52.228531 systemd[1]: Created slice kubepods-burstable-podfbb02403_83ba_4851_9c6d_3c3f92019d78.slice - libcontainer container kubepods-burstable-podfbb02403_83ba_4851_9c6d_3c3f92019d78.slice. 
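
The VerifyControllerAttachedVolume lines above correspond to volumes declared in the calico-apiserver pod spec. As a hedged sketch (only the volume and secret names come from the log; the enclosing pod spec is assumed), the secret-backed volume would be expressed with the Kubernetes Go API types roughly like this:

// Sketch of the "calico-apiserver-certs" volume the reconciler entries refer
// to: a Secret-backed volume as declared in a pod spec. Requires the
// k8s.io/api module.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "calico-apiserver-certs",
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{
				SecretName: "calico-apiserver-certs", // name taken from the log; assumed to match the Secret object
			},
		},
	}
	fmt.Printf("volume %q backed by secret %q\n", vol.Name, vol.VolumeSource.Secret.SecretName)
}
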
Nov 6 00:26:52.244531 kubelet[3152]: I1106 00:26:52.244489 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d2f27f2-55bd-484a-8c3a-4032ff9011c3-whisker-ca-bundle\") pod \"whisker-78f5784c78-9zfhr\" (UID: \"3d2f27f2-55bd-484a-8c3a-4032ff9011c3\") " pod="calico-system/whisker-78f5784c78-9zfhr" Nov 6 00:26:52.246431 kubelet[3152]: I1106 00:26:52.246262 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3d2f27f2-55bd-484a-8c3a-4032ff9011c3-whisker-backend-key-pair\") pod \"whisker-78f5784c78-9zfhr\" (UID: \"3d2f27f2-55bd-484a-8c3a-4032ff9011c3\") " pod="calico-system/whisker-78f5784c78-9zfhr" Nov 6 00:26:52.246431 kubelet[3152]: I1106 00:26:52.246315 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swg6m\" (UniqueName: \"kubernetes.io/projected/3d2f27f2-55bd-484a-8c3a-4032ff9011c3-kube-api-access-swg6m\") pod \"whisker-78f5784c78-9zfhr\" (UID: \"3d2f27f2-55bd-484a-8c3a-4032ff9011c3\") " pod="calico-system/whisker-78f5784c78-9zfhr" Nov 6 00:26:52.250573 systemd[1]: Created slice kubepods-burstable-pode48dba36_29c2_4eef_9ba0_2dd98198c6d2.slice - libcontainer container kubepods-burstable-pode48dba36_29c2_4eef_9ba0_2dd98198c6d2.slice. Nov 6 00:26:52.255705 systemd[1]: Created slice kubepods-besteffort-pod62834f18_0344_4626_bcdf_b650cdc6187d.slice - libcontainer container kubepods-besteffort-pod62834f18_0344_4626_bcdf_b650cdc6187d.slice. Nov 6 00:26:52.270024 containerd[1686]: time="2025-11-06T00:26:52.269987353Z" level=error msg="Failed to destroy network for sandbox \"21f96bb25a447df261aa85d825bf9549c4107c4d4cb3699549231a2b28384d33\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:26:52.273846 containerd[1686]: time="2025-11-06T00:26:52.273809354Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4j9vt,Uid:9c757a1d-95f3-4cbd-9adf-b65065b2eb8c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"21f96bb25a447df261aa85d825bf9549c4107c4d4cb3699549231a2b28384d33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:26:52.274136 kubelet[3152]: E1106 00:26:52.274098 3152 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21f96bb25a447df261aa85d825bf9549c4107c4d4cb3699549231a2b28384d33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:26:52.274194 kubelet[3152]: E1106 00:26:52.274160 3152 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21f96bb25a447df261aa85d825bf9549c4107c4d4cb3699549231a2b28384d33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4j9vt" Nov 6 
00:26:52.274194 kubelet[3152]: E1106 00:26:52.274179 3152 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21f96bb25a447df261aa85d825bf9549c4107c4d4cb3699549231a2b28384d33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4j9vt" Nov 6 00:26:52.274260 kubelet[3152]: E1106 00:26:52.274231 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4j9vt_calico-system(9c757a1d-95f3-4cbd-9adf-b65065b2eb8c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4j9vt_calico-system(9c757a1d-95f3-4cbd-9adf-b65065b2eb8c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"21f96bb25a447df261aa85d825bf9549c4107c4d4cb3699549231a2b28384d33\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4j9vt" podUID="9c757a1d-95f3-4cbd-9adf-b65065b2eb8c" Nov 6 00:26:52.314697 containerd[1686]: time="2025-11-06T00:26:52.314655957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8559b785f9-n2pht,Uid:c9883439-5e85-428a-8c5e-1baa916caf76,Namespace:calico-apiserver,Attempt:0,}" Nov 6 00:26:52.348907 kubelet[3152]: I1106 00:26:52.346911 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdz7w\" (UniqueName: \"kubernetes.io/projected/4902c4e4-3977-4e0d-b87b-89acc6926de6-kube-api-access-qdz7w\") pod \"goldmane-7c778bb748-2k7xr\" (UID: \"4902c4e4-3977-4e0d-b87b-89acc6926de6\") " pod="calico-system/goldmane-7c778bb748-2k7xr" Nov 6 00:26:52.348907 kubelet[3152]: I1106 00:26:52.346948 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9z75\" (UniqueName: \"kubernetes.io/projected/797f66e1-c3e4-4d4b-8032-18c5d22ec25c-kube-api-access-f9z75\") pod \"calico-apiserver-6546975659-nxnh9\" (UID: \"797f66e1-c3e4-4d4b-8032-18c5d22ec25c\") " pod="calico-apiserver/calico-apiserver-6546975659-nxnh9" Nov 6 00:26:52.348907 kubelet[3152]: I1106 00:26:52.346981 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fkc7\" (UniqueName: \"kubernetes.io/projected/e48dba36-29c2-4eef-9ba0-2dd98198c6d2-kube-api-access-9fkc7\") pod \"coredns-66bc5c9577-92pgk\" (UID: \"e48dba36-29c2-4eef-9ba0-2dd98198c6d2\") " pod="kube-system/coredns-66bc5c9577-92pgk" Nov 6 00:26:52.348907 kubelet[3152]: I1106 00:26:52.347015 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4902c4e4-3977-4e0d-b87b-89acc6926de6-config\") pod \"goldmane-7c778bb748-2k7xr\" (UID: \"4902c4e4-3977-4e0d-b87b-89acc6926de6\") " pod="calico-system/goldmane-7c778bb748-2k7xr" Nov 6 00:26:52.348907 kubelet[3152]: I1106 00:26:52.347033 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4902c4e4-3977-4e0d-b87b-89acc6926de6-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-2k7xr\" (UID: \"4902c4e4-3977-4e0d-b87b-89acc6926de6\") " 
pod="calico-system/goldmane-7c778bb748-2k7xr" Nov 6 00:26:52.349085 kubelet[3152]: I1106 00:26:52.347065 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62834f18-0344-4626-bcdf-b650cdc6187d-tigera-ca-bundle\") pod \"calico-kube-controllers-cdd9bb7bc-2b49s\" (UID: \"62834f18-0344-4626-bcdf-b650cdc6187d\") " pod="calico-system/calico-kube-controllers-cdd9bb7bc-2b49s" Nov 6 00:26:52.349085 kubelet[3152]: I1106 00:26:52.347085 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e48dba36-29c2-4eef-9ba0-2dd98198c6d2-config-volume\") pod \"coredns-66bc5c9577-92pgk\" (UID: \"e48dba36-29c2-4eef-9ba0-2dd98198c6d2\") " pod="kube-system/coredns-66bc5c9577-92pgk" Nov 6 00:26:52.349085 kubelet[3152]: I1106 00:26:52.347102 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/797f66e1-c3e4-4d4b-8032-18c5d22ec25c-calico-apiserver-certs\") pod \"calico-apiserver-6546975659-nxnh9\" (UID: \"797f66e1-c3e4-4d4b-8032-18c5d22ec25c\") " pod="calico-apiserver/calico-apiserver-6546975659-nxnh9" Nov 6 00:26:52.349085 kubelet[3152]: I1106 00:26:52.347131 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77hhl\" (UniqueName: \"kubernetes.io/projected/fbb02403-83ba-4851-9c6d-3c3f92019d78-kube-api-access-77hhl\") pod \"coredns-66bc5c9577-fhgxb\" (UID: \"fbb02403-83ba-4851-9c6d-3c3f92019d78\") " pod="kube-system/coredns-66bc5c9577-fhgxb" Nov 6 00:26:52.349085 kubelet[3152]: I1106 00:26:52.347151 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vljp\" (UniqueName: \"kubernetes.io/projected/62834f18-0344-4626-bcdf-b650cdc6187d-kube-api-access-8vljp\") pod \"calico-kube-controllers-cdd9bb7bc-2b49s\" (UID: \"62834f18-0344-4626-bcdf-b650cdc6187d\") " pod="calico-system/calico-kube-controllers-cdd9bb7bc-2b49s" Nov 6 00:26:52.349177 kubelet[3152]: I1106 00:26:52.347207 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/4902c4e4-3977-4e0d-b87b-89acc6926de6-goldmane-key-pair\") pod \"goldmane-7c778bb748-2k7xr\" (UID: \"4902c4e4-3977-4e0d-b87b-89acc6926de6\") " pod="calico-system/goldmane-7c778bb748-2k7xr" Nov 6 00:26:52.349177 kubelet[3152]: I1106 00:26:52.347233 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fbb02403-83ba-4851-9c6d-3c3f92019d78-config-volume\") pod \"coredns-66bc5c9577-fhgxb\" (UID: \"fbb02403-83ba-4851-9c6d-3c3f92019d78\") " pod="kube-system/coredns-66bc5c9577-fhgxb" Nov 6 00:26:52.374931 containerd[1686]: time="2025-11-06T00:26:52.374330300Z" level=error msg="Failed to destroy network for sandbox \"b9af942b11b94f940a85033c21ec5fb33261146eae6703132055856335a1ee39\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:26:52.376366 systemd[1]: run-netns-cni\x2db63e33df\x2d4bf9\x2d9ce3\x2d8645\x2d5e01f2d2859e.mount: Deactivated successfully. 
Nov 6 00:26:52.378589 containerd[1686]: time="2025-11-06T00:26:52.378535110Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8559b785f9-n2pht,Uid:c9883439-5e85-428a-8c5e-1baa916caf76,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9af942b11b94f940a85033c21ec5fb33261146eae6703132055856335a1ee39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:26:52.378765 kubelet[3152]: E1106 00:26:52.378727 3152 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9af942b11b94f940a85033c21ec5fb33261146eae6703132055856335a1ee39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:26:52.379498 kubelet[3152]: E1106 00:26:52.378781 3152 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9af942b11b94f940a85033c21ec5fb33261146eae6703132055856335a1ee39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8559b785f9-n2pht" Nov 6 00:26:52.379498 kubelet[3152]: E1106 00:26:52.378802 3152 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9af942b11b94f940a85033c21ec5fb33261146eae6703132055856335a1ee39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8559b785f9-n2pht" Nov 6 00:26:52.379498 kubelet[3152]: E1106 00:26:52.378852 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8559b785f9-n2pht_calico-apiserver(c9883439-5e85-428a-8c5e-1baa916caf76)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8559b785f9-n2pht_calico-apiserver(c9883439-5e85-428a-8c5e-1baa916caf76)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b9af942b11b94f940a85033c21ec5fb33261146eae6703132055856335a1ee39\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8559b785f9-n2pht" podUID="c9883439-5e85-428a-8c5e-1baa916caf76" Nov 6 00:26:52.408564 containerd[1686]: time="2025-11-06T00:26:52.408540088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8559b785f9-fhvmw,Uid:c86a3ffd-cf2f-4e08-9736-f5e39ae366f1,Namespace:calico-apiserver,Attempt:0,}" Nov 6 00:26:52.442056 containerd[1686]: time="2025-11-06T00:26:52.441923736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-78f5784c78-9zfhr,Uid:3d2f27f2-55bd-484a-8c3a-4032ff9011c3,Namespace:calico-system,Attempt:0,}" Nov 6 00:26:52.480005 containerd[1686]: time="2025-11-06T00:26:52.479974150Z" level=error msg="Failed to destroy network for sandbox 
\"d30f93dd825d44161a0d930e57e7bfc372abad3bddb06763bd46b2f8bb649ae8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:26:52.483736 containerd[1686]: time="2025-11-06T00:26:52.483623871Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8559b785f9-fhvmw,Uid:c86a3ffd-cf2f-4e08-9736-f5e39ae366f1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d30f93dd825d44161a0d930e57e7bfc372abad3bddb06763bd46b2f8bb649ae8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:26:52.483863 kubelet[3152]: E1106 00:26:52.483828 3152 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d30f93dd825d44161a0d930e57e7bfc372abad3bddb06763bd46b2f8bb649ae8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:26:52.485203 kubelet[3152]: E1106 00:26:52.483870 3152 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d30f93dd825d44161a0d930e57e7bfc372abad3bddb06763bd46b2f8bb649ae8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8559b785f9-fhvmw" Nov 6 00:26:52.485203 kubelet[3152]: E1106 00:26:52.483979 3152 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d30f93dd825d44161a0d930e57e7bfc372abad3bddb06763bd46b2f8bb649ae8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8559b785f9-fhvmw" Nov 6 00:26:52.485203 kubelet[3152]: E1106 00:26:52.484032 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8559b785f9-fhvmw_calico-apiserver(c86a3ffd-cf2f-4e08-9736-f5e39ae366f1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8559b785f9-fhvmw_calico-apiserver(c86a3ffd-cf2f-4e08-9736-f5e39ae366f1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d30f93dd825d44161a0d930e57e7bfc372abad3bddb06763bd46b2f8bb649ae8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8559b785f9-fhvmw" podUID="c86a3ffd-cf2f-4e08-9736-f5e39ae366f1" Nov 6 00:26:52.501577 containerd[1686]: time="2025-11-06T00:26:52.501541312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-2k7xr,Uid:4902c4e4-3977-4e0d-b87b-89acc6926de6,Namespace:calico-system,Attempt:0,}" Nov 6 00:26:52.507110 containerd[1686]: time="2025-11-06T00:26:52.507079622Z" level=error msg="Failed to destroy network for sandbox 
\"1af4323b8ef8619c2aa87db2aa318ca19d6e4d3b7c5c27f35df92a90cdd58d82\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:26:52.509907 containerd[1686]: time="2025-11-06T00:26:52.509734825Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-78f5784c78-9zfhr,Uid:3d2f27f2-55bd-484a-8c3a-4032ff9011c3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1af4323b8ef8619c2aa87db2aa318ca19d6e4d3b7c5c27f35df92a90cdd58d82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:26:52.510277 kubelet[3152]: E1106 00:26:52.510177 3152 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1af4323b8ef8619c2aa87db2aa318ca19d6e4d3b7c5c27f35df92a90cdd58d82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:26:52.510277 kubelet[3152]: E1106 00:26:52.510230 3152 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1af4323b8ef8619c2aa87db2aa318ca19d6e4d3b7c5c27f35df92a90cdd58d82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-78f5784c78-9zfhr" Nov 6 00:26:52.510277 kubelet[3152]: E1106 00:26:52.510249 3152 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1af4323b8ef8619c2aa87db2aa318ca19d6e4d3b7c5c27f35df92a90cdd58d82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-78f5784c78-9zfhr" Nov 6 00:26:52.510572 kubelet[3152]: E1106 00:26:52.510420 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-78f5784c78-9zfhr_calico-system(3d2f27f2-55bd-484a-8c3a-4032ff9011c3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-78f5784c78-9zfhr_calico-system(3d2f27f2-55bd-484a-8c3a-4032ff9011c3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1af4323b8ef8619c2aa87db2aa318ca19d6e4d3b7c5c27f35df92a90cdd58d82\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-78f5784c78-9zfhr" podUID="3d2f27f2-55bd-484a-8c3a-4032ff9011c3" Nov 6 00:26:52.530901 containerd[1686]: time="2025-11-06T00:26:52.529362467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6546975659-nxnh9,Uid:797f66e1-c3e4-4d4b-8032-18c5d22ec25c,Namespace:calico-apiserver,Attempt:0,}" Nov 6 00:26:52.541908 containerd[1686]: time="2025-11-06T00:26:52.541866708Z" level=error msg="Failed to destroy network for sandbox \"a4a12a22aef5aa2589fbf6a4c99d80128408dd0b00c79344e391abf46f674a38\"" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:26:52.545501 containerd[1686]: time="2025-11-06T00:26:52.545422138Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-2k7xr,Uid:4902c4e4-3977-4e0d-b87b-89acc6926de6,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4a12a22aef5aa2589fbf6a4c99d80128408dd0b00c79344e391abf46f674a38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:26:52.546387 kubelet[3152]: E1106 00:26:52.545726 3152 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4a12a22aef5aa2589fbf6a4c99d80128408dd0b00c79344e391abf46f674a38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:26:52.546387 kubelet[3152]: E1106 00:26:52.545769 3152 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4a12a22aef5aa2589fbf6a4c99d80128408dd0b00c79344e391abf46f674a38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-2k7xr" Nov 6 00:26:52.546387 kubelet[3152]: E1106 00:26:52.545789 3152 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4a12a22aef5aa2589fbf6a4c99d80128408dd0b00c79344e391abf46f674a38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-2k7xr" Nov 6 00:26:52.546527 kubelet[3152]: E1106 00:26:52.545836 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-2k7xr_calico-system(4902c4e4-3977-4e0d-b87b-89acc6926de6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-2k7xr_calico-system(4902c4e4-3977-4e0d-b87b-89acc6926de6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a4a12a22aef5aa2589fbf6a4c99d80128408dd0b00c79344e391abf46f674a38\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-2k7xr" podUID="4902c4e4-3977-4e0d-b87b-89acc6926de6" Nov 6 00:26:52.548314 containerd[1686]: time="2025-11-06T00:26:52.548282185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-fhgxb,Uid:fbb02403-83ba-4851-9c6d-3c3f92019d78,Namespace:kube-system,Attempt:0,}" Nov 6 00:26:52.563460 containerd[1686]: time="2025-11-06T00:26:52.563268264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-92pgk,Uid:e48dba36-29c2-4eef-9ba0-2dd98198c6d2,Namespace:kube-system,Attempt:0,}" Nov 6 00:26:52.566568 containerd[1686]: time="2025-11-06T00:26:52.566543076Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-cdd9bb7bc-2b49s,Uid:62834f18-0344-4626-bcdf-b650cdc6187d,Namespace:calico-system,Attempt:0,}" Nov 6 00:26:52.626778 containerd[1686]: time="2025-11-06T00:26:52.626010314Z" level=error msg="Failed to destroy network for sandbox \"2cf81a179b21f2b0add0c66f9d15d2616d626c487f4cc91d9c1581771d115bcb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:26:52.630685 containerd[1686]: time="2025-11-06T00:26:52.630657071Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6546975659-nxnh9,Uid:797f66e1-c3e4-4d4b-8032-18c5d22ec25c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2cf81a179b21f2b0add0c66f9d15d2616d626c487f4cc91d9c1581771d115bcb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:26:52.630954 kubelet[3152]: E1106 00:26:52.630925 3152 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2cf81a179b21f2b0add0c66f9d15d2616d626c487f4cc91d9c1581771d115bcb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:26:52.631018 kubelet[3152]: E1106 00:26:52.630972 3152 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2cf81a179b21f2b0add0c66f9d15d2616d626c487f4cc91d9c1581771d115bcb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6546975659-nxnh9" Nov 6 00:26:52.631018 kubelet[3152]: E1106 00:26:52.630992 3152 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2cf81a179b21f2b0add0c66f9d15d2616d626c487f4cc91d9c1581771d115bcb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6546975659-nxnh9" Nov 6 00:26:52.631068 kubelet[3152]: E1106 00:26:52.631040 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6546975659-nxnh9_calico-apiserver(797f66e1-c3e4-4d4b-8032-18c5d22ec25c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6546975659-nxnh9_calico-apiserver(797f66e1-c3e4-4d4b-8032-18c5d22ec25c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2cf81a179b21f2b0add0c66f9d15d2616d626c487f4cc91d9c1581771d115bcb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6546975659-nxnh9" podUID="797f66e1-c3e4-4d4b-8032-18c5d22ec25c" Nov 6 00:26:52.637003 containerd[1686]: time="2025-11-06T00:26:52.636969163Z" level=error msg="Failed to destroy network for sandbox 
\"7881f2f16069e1a26b2a415ce15d18bb6a5dfe396b9243cea411daec2552010a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:26:52.641405 containerd[1686]: time="2025-11-06T00:26:52.641368831Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-92pgk,Uid:e48dba36-29c2-4eef-9ba0-2dd98198c6d2,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7881f2f16069e1a26b2a415ce15d18bb6a5dfe396b9243cea411daec2552010a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:26:52.641800 kubelet[3152]: E1106 00:26:52.641771 3152 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7881f2f16069e1a26b2a415ce15d18bb6a5dfe396b9243cea411daec2552010a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:26:52.641877 kubelet[3152]: E1106 00:26:52.641819 3152 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7881f2f16069e1a26b2a415ce15d18bb6a5dfe396b9243cea411daec2552010a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-92pgk" Nov 6 00:26:52.641877 kubelet[3152]: E1106 00:26:52.641840 3152 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7881f2f16069e1a26b2a415ce15d18bb6a5dfe396b9243cea411daec2552010a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-92pgk" Nov 6 00:26:52.642466 kubelet[3152]: E1106 00:26:52.642117 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-92pgk_kube-system(e48dba36-29c2-4eef-9ba0-2dd98198c6d2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-92pgk_kube-system(e48dba36-29c2-4eef-9ba0-2dd98198c6d2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7881f2f16069e1a26b2a415ce15d18bb6a5dfe396b9243cea411daec2552010a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-92pgk" podUID="e48dba36-29c2-4eef-9ba0-2dd98198c6d2" Nov 6 00:26:52.643985 containerd[1686]: time="2025-11-06T00:26:52.643952551Z" level=error msg="Failed to destroy network for sandbox \"e7b85908acf37b9ef1365afedfe7283896df54608c58ce5ba564c1579ca30519\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:26:52.646559 containerd[1686]: time="2025-11-06T00:26:52.646515353Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-66bc5c9577-fhgxb,Uid:fbb02403-83ba-4851-9c6d-3c3f92019d78,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7b85908acf37b9ef1365afedfe7283896df54608c58ce5ba564c1579ca30519\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:26:52.646825 kubelet[3152]: E1106 00:26:52.646799 3152 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7b85908acf37b9ef1365afedfe7283896df54608c58ce5ba564c1579ca30519\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:26:52.646901 kubelet[3152]: E1106 00:26:52.646839 3152 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7b85908acf37b9ef1365afedfe7283896df54608c58ce5ba564c1579ca30519\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-fhgxb" Nov 6 00:26:52.646901 kubelet[3152]: E1106 00:26:52.646856 3152 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7b85908acf37b9ef1365afedfe7283896df54608c58ce5ba564c1579ca30519\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-fhgxb" Nov 6 00:26:52.647975 kubelet[3152]: E1106 00:26:52.647941 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-fhgxb_kube-system(fbb02403-83ba-4851-9c6d-3c3f92019d78)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-fhgxb_kube-system(fbb02403-83ba-4851-9c6d-3c3f92019d78)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e7b85908acf37b9ef1365afedfe7283896df54608c58ce5ba564c1579ca30519\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-fhgxb" podUID="fbb02403-83ba-4851-9c6d-3c3f92019d78" Nov 6 00:26:52.659709 containerd[1686]: time="2025-11-06T00:26:52.659670239Z" level=error msg="Failed to destroy network for sandbox \"b7ed3e06e77359e863ea7360b222be996d86456bb5aa29037449aedcd443b1c4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:26:52.662416 containerd[1686]: time="2025-11-06T00:26:52.662385528Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cdd9bb7bc-2b49s,Uid:62834f18-0344-4626-bcdf-b650cdc6187d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7ed3e06e77359e863ea7360b222be996d86456bb5aa29037449aedcd443b1c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:26:52.662592 kubelet[3152]: E1106 00:26:52.662571 3152 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7ed3e06e77359e863ea7360b222be996d86456bb5aa29037449aedcd443b1c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:26:52.662646 kubelet[3152]: E1106 00:26:52.662605 3152 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7ed3e06e77359e863ea7360b222be996d86456bb5aa29037449aedcd443b1c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-cdd9bb7bc-2b49s" Nov 6 00:26:52.662646 kubelet[3152]: E1106 00:26:52.662625 3152 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7ed3e06e77359e863ea7360b222be996d86456bb5aa29037449aedcd443b1c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-cdd9bb7bc-2b49s" Nov 6 00:26:52.662699 kubelet[3152]: E1106 00:26:52.662670 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-cdd9bb7bc-2b49s_calico-system(62834f18-0344-4626-bcdf-b650cdc6187d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-cdd9bb7bc-2b49s_calico-system(62834f18-0344-4626-bcdf-b650cdc6187d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b7ed3e06e77359e863ea7360b222be996d86456bb5aa29037449aedcd443b1c4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-cdd9bb7bc-2b49s" podUID="62834f18-0344-4626-bcdf-b650cdc6187d" Nov 6 00:26:52.754934 containerd[1686]: time="2025-11-06T00:26:52.754596464Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 6 00:26:57.137700 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2998068838.mount: Deactivated successfully. 
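
The repeated CreatePodSandbox failures above all trace back to one condition: the Calico CNI plugin refuses to add or delete pod networks until calico/node has started and written /var/lib/calico/nodename, which is why every error ends with the same hint about that container. A minimal sketch of that gate, assuming only what the error text itself states (the real check lives in the calico CNI binary, not here); once the calico/node image pull that follows completes and the container starts, the file appears and the RunPodSandbox calls further down succeed.

package main

import (
	"fmt"
	"os"
	"strings"
)

// nodenameFile is the path the errors above are stat()ing; calico/node is
// expected to create it once it is running with /var/lib/calico/ mounted.
const nodenameFile = "/var/lib/calico/nodename"

func calicoReady() (string, error) {
	b, err := os.ReadFile(nodenameFile)
	if os.IsNotExist(err) {
		return "", fmt.Errorf("stat %s: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile)
	}
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(b)), nil
}

func main() {
	node, err := calicoReady()
	if err != nil {
		fmt.Fprintln(os.Stderr, err) // the condition every sandbox failure above reports
		os.Exit(1)
	}
	fmt.Println("calico node name:", node)
}
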
Nov 6 00:26:57.166422 containerd[1686]: time="2025-11-06T00:26:57.166382770Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:26:57.168365 containerd[1686]: time="2025-11-06T00:26:57.168340522Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 6 00:26:57.170784 containerd[1686]: time="2025-11-06T00:26:57.170746634Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:26:57.173824 containerd[1686]: time="2025-11-06T00:26:57.173785430Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:26:57.174066 containerd[1686]: time="2025-11-06T00:26:57.174045847Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 4.41941433s" Nov 6 00:26:57.174140 containerd[1686]: time="2025-11-06T00:26:57.174130081Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 6 00:26:57.190252 containerd[1686]: time="2025-11-06T00:26:57.190223524Z" level=info msg="CreateContainer within sandbox \"8b1adec105a69c348bf096f85a06931f873d391fa1fbfe13007cc6f67bff8e5c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 6 00:26:57.208254 containerd[1686]: time="2025-11-06T00:26:57.206640601Z" level=info msg="Container 80581d922573d0643232f6b1aec8e4023217d6ef06e716dd6684238cc8d83981: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:26:57.222714 containerd[1686]: time="2025-11-06T00:26:57.222685087Z" level=info msg="CreateContainer within sandbox \"8b1adec105a69c348bf096f85a06931f873d391fa1fbfe13007cc6f67bff8e5c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"80581d922573d0643232f6b1aec8e4023217d6ef06e716dd6684238cc8d83981\"" Nov 6 00:26:57.222994 containerd[1686]: time="2025-11-06T00:26:57.222980572Z" level=info msg="StartContainer for \"80581d922573d0643232f6b1aec8e4023217d6ef06e716dd6684238cc8d83981\"" Nov 6 00:26:57.224670 containerd[1686]: time="2025-11-06T00:26:57.224643377Z" level=info msg="connecting to shim 80581d922573d0643232f6b1aec8e4023217d6ef06e716dd6684238cc8d83981" address="unix:///run/containerd/s/c54408927a53018b16022b3e57d59c5d7ef4fa776ae050523e4b2e4bf17ee413" protocol=ttrpc version=3 Nov 6 00:26:57.244029 systemd[1]: Started cri-containerd-80581d922573d0643232f6b1aec8e4023217d6ef06e716dd6684238cc8d83981.scope - libcontainer container 80581d922573d0643232f6b1aec8e4023217d6ef06e716dd6684238cc8d83981. Nov 6 00:26:57.273254 containerd[1686]: time="2025-11-06T00:26:57.273232862Z" level=info msg="StartContainer for \"80581d922573d0643232f6b1aec8e4023217d6ef06e716dd6684238cc8d83981\" returns successfully" Nov 6 00:26:57.703976 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 6 00:26:57.704234 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . 
All Rights Reserved. Nov 6 00:26:57.799721 kubelet[3152]: I1106 00:26:57.799191 3152 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-hrgz8" podStartSLOduration=1.969212365 podStartE2EDuration="18.799174585s" podCreationTimestamp="2025-11-06 00:26:39 +0000 UTC" firstStartedPulling="2025-11-06 00:26:40.344745388 +0000 UTC m=+19.805972767" lastFinishedPulling="2025-11-06 00:26:57.17470761 +0000 UTC m=+36.635934987" observedRunningTime="2025-11-06 00:26:57.796515015 +0000 UTC m=+37.257742397" watchObservedRunningTime="2025-11-06 00:26:57.799174585 +0000 UTC m=+37.260401969" Nov 6 00:26:57.980959 kubelet[3152]: I1106 00:26:57.980818 3152 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d2f27f2-55bd-484a-8c3a-4032ff9011c3-whisker-ca-bundle\") pod \"3d2f27f2-55bd-484a-8c3a-4032ff9011c3\" (UID: \"3d2f27f2-55bd-484a-8c3a-4032ff9011c3\") " Nov 6 00:26:57.980959 kubelet[3152]: I1106 00:26:57.980861 3152 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3d2f27f2-55bd-484a-8c3a-4032ff9011c3-whisker-backend-key-pair\") pod \"3d2f27f2-55bd-484a-8c3a-4032ff9011c3\" (UID: \"3d2f27f2-55bd-484a-8c3a-4032ff9011c3\") " Nov 6 00:26:57.981347 kubelet[3152]: I1106 00:26:57.981332 3152 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d2f27f2-55bd-484a-8c3a-4032ff9011c3-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "3d2f27f2-55bd-484a-8c3a-4032ff9011c3" (UID: "3d2f27f2-55bd-484a-8c3a-4032ff9011c3"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 6 00:26:57.981594 kubelet[3152]: I1106 00:26:57.981392 3152 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swg6m\" (UniqueName: \"kubernetes.io/projected/3d2f27f2-55bd-484a-8c3a-4032ff9011c3-kube-api-access-swg6m\") pod \"3d2f27f2-55bd-484a-8c3a-4032ff9011c3\" (UID: \"3d2f27f2-55bd-484a-8c3a-4032ff9011c3\") " Nov 6 00:26:57.981706 kubelet[3152]: I1106 00:26:57.981697 3152 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d2f27f2-55bd-484a-8c3a-4032ff9011c3-whisker-ca-bundle\") on node \"ci-4459.1.0-n-3bced53249\" DevicePath \"\"" Nov 6 00:26:57.985330 kubelet[3152]: I1106 00:26:57.985305 3152 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d2f27f2-55bd-484a-8c3a-4032ff9011c3-kube-api-access-swg6m" (OuterVolumeSpecName: "kube-api-access-swg6m") pod "3d2f27f2-55bd-484a-8c3a-4032ff9011c3" (UID: "3d2f27f2-55bd-484a-8c3a-4032ff9011c3"). InnerVolumeSpecName "kube-api-access-swg6m". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 6 00:26:57.985765 kubelet[3152]: I1106 00:26:57.985735 3152 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d2f27f2-55bd-484a-8c3a-4032ff9011c3-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "3d2f27f2-55bd-484a-8c3a-4032ff9011c3" (UID: "3d2f27f2-55bd-484a-8c3a-4032ff9011c3"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 6 00:26:57.991559 containerd[1686]: time="2025-11-06T00:26:57.991528476Z" level=info msg="TaskExit event in podsandbox handler container_id:\"80581d922573d0643232f6b1aec8e4023217d6ef06e716dd6684238cc8d83981\" id:\"55a131ec78d2ac88347c9cace1e4bd543ec629009f011a56e816fcb082fdea1a\" pid:4238 exit_status:1 exited_at:{seconds:1762388817 nanos:991104187}" Nov 6 00:26:58.082721 kubelet[3152]: I1106 00:26:58.082699 3152 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3d2f27f2-55bd-484a-8c3a-4032ff9011c3-whisker-backend-key-pair\") on node \"ci-4459.1.0-n-3bced53249\" DevicePath \"\"" Nov 6 00:26:58.082782 kubelet[3152]: I1106 00:26:58.082732 3152 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-swg6m\" (UniqueName: \"kubernetes.io/projected/3d2f27f2-55bd-484a-8c3a-4032ff9011c3-kube-api-access-swg6m\") on node \"ci-4459.1.0-n-3bced53249\" DevicePath \"\"" Nov 6 00:26:58.135675 systemd[1]: var-lib-kubelet-pods-3d2f27f2\x2d55bd\x2d484a\x2d8c3a\x2d4032ff9011c3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dswg6m.mount: Deactivated successfully. Nov 6 00:26:58.135782 systemd[1]: var-lib-kubelet-pods-3d2f27f2\x2d55bd\x2d484a\x2d8c3a\x2d4032ff9011c3-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 6 00:26:58.631872 systemd[1]: Removed slice kubepods-besteffort-pod3d2f27f2_55bd_484a_8c3a_4032ff9011c3.slice - libcontainer container kubepods-besteffort-pod3d2f27f2_55bd_484a_8c3a_4032ff9011c3.slice. Nov 6 00:26:58.853807 systemd[1]: Created slice kubepods-besteffort-pod5f9595dd_def0_470b_b230_616c1ccc6ebf.slice - libcontainer container kubepods-besteffort-pod5f9595dd_def0_470b_b230_616c1ccc6ebf.slice. 
Nov 6 00:26:58.892006 containerd[1686]: time="2025-11-06T00:26:58.891828729Z" level=info msg="TaskExit event in podsandbox handler container_id:\"80581d922573d0643232f6b1aec8e4023217d6ef06e716dd6684238cc8d83981\" id:\"2a52ddbb99d4123dda38446db404a6800b2ac22a911acf925844a6c344e9bac0\" pid:4286 exit_status:1 exited_at:{seconds:1762388818 nanos:891622711}" Nov 6 00:26:58.987054 kubelet[3152]: I1106 00:26:58.987026 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5f9595dd-def0-470b-b230-616c1ccc6ebf-whisker-backend-key-pair\") pod \"whisker-6455d868-gqpdl\" (UID: \"5f9595dd-def0-470b-b230-616c1ccc6ebf\") " pod="calico-system/whisker-6455d868-gqpdl" Nov 6 00:26:58.987297 kubelet[3152]: I1106 00:26:58.987063 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drskh\" (UniqueName: \"kubernetes.io/projected/5f9595dd-def0-470b-b230-616c1ccc6ebf-kube-api-access-drskh\") pod \"whisker-6455d868-gqpdl\" (UID: \"5f9595dd-def0-470b-b230-616c1ccc6ebf\") " pod="calico-system/whisker-6455d868-gqpdl" Nov 6 00:26:58.987297 kubelet[3152]: I1106 00:26:58.987082 3152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f9595dd-def0-470b-b230-616c1ccc6ebf-whisker-ca-bundle\") pod \"whisker-6455d868-gqpdl\" (UID: \"5f9595dd-def0-470b-b230-616c1ccc6ebf\") " pod="calico-system/whisker-6455d868-gqpdl" Nov 6 00:26:59.163789 containerd[1686]: time="2025-11-06T00:26:59.163131538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6455d868-gqpdl,Uid:5f9595dd-def0-470b-b230-616c1ccc6ebf,Namespace:calico-system,Attempt:0,}" Nov 6 00:26:59.348444 systemd-networkd[1488]: califf9525038cb: Link UP Nov 6 00:26:59.349830 systemd-networkd[1488]: califf9525038cb: Gained carrier Nov 6 00:26:59.366899 containerd[1686]: 2025-11-06 00:26:59.206 [INFO][4385] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 6 00:26:59.366899 containerd[1686]: 2025-11-06 00:26:59.218 [INFO][4385] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--n--3bced53249-k8s-whisker--6455d868--gqpdl-eth0 whisker-6455d868- calico-system 5f9595dd-def0-470b-b230-616c1ccc6ebf 892 0 2025-11-06 00:26:58 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6455d868 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4459.1.0-n-3bced53249 whisker-6455d868-gqpdl eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] califf9525038cb [] [] }} ContainerID="a4f7588804eae37323e75e42e207c62ab5fd813dec873998d6ddebb2386f3808" Namespace="calico-system" Pod="whisker-6455d868-gqpdl" WorkloadEndpoint="ci--4459.1.0--n--3bced53249-k8s-whisker--6455d868--gqpdl-" Nov 6 00:26:59.366899 containerd[1686]: 2025-11-06 00:26:59.218 [INFO][4385] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a4f7588804eae37323e75e42e207c62ab5fd813dec873998d6ddebb2386f3808" Namespace="calico-system" Pod="whisker-6455d868-gqpdl" WorkloadEndpoint="ci--4459.1.0--n--3bced53249-k8s-whisker--6455d868--gqpdl-eth0" Nov 6 00:26:59.366899 containerd[1686]: 2025-11-06 00:26:59.258 [INFO][4396] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="a4f7588804eae37323e75e42e207c62ab5fd813dec873998d6ddebb2386f3808" HandleID="k8s-pod-network.a4f7588804eae37323e75e42e207c62ab5fd813dec873998d6ddebb2386f3808" Workload="ci--4459.1.0--n--3bced53249-k8s-whisker--6455d868--gqpdl-eth0" Nov 6 00:26:59.367103 containerd[1686]: 2025-11-06 00:26:59.258 [INFO][4396] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a4f7588804eae37323e75e42e207c62ab5fd813dec873998d6ddebb2386f3808" HandleID="k8s-pod-network.a4f7588804eae37323e75e42e207c62ab5fd813dec873998d6ddebb2386f3808" Workload="ci--4459.1.0--n--3bced53249-k8s-whisker--6455d868--gqpdl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad3a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.1.0-n-3bced53249", "pod":"whisker-6455d868-gqpdl", "timestamp":"2025-11-06 00:26:59.258825155 +0000 UTC"}, Hostname:"ci-4459.1.0-n-3bced53249", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:26:59.367103 containerd[1686]: 2025-11-06 00:26:59.258 [INFO][4396] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:26:59.367103 containerd[1686]: 2025-11-06 00:26:59.259 [INFO][4396] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 6 00:26:59.367103 containerd[1686]: 2025-11-06 00:26:59.259 [INFO][4396] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-n-3bced53249' Nov 6 00:26:59.367103 containerd[1686]: 2025-11-06 00:26:59.265 [INFO][4396] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a4f7588804eae37323e75e42e207c62ab5fd813dec873998d6ddebb2386f3808" host="ci-4459.1.0-n-3bced53249" Nov 6 00:26:59.367103 containerd[1686]: 2025-11-06 00:26:59.269 [INFO][4396] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-n-3bced53249" Nov 6 00:26:59.367103 containerd[1686]: 2025-11-06 00:26:59.274 [INFO][4396] ipam/ipam.go 511: Trying affinity for 192.168.35.0/26 host="ci-4459.1.0-n-3bced53249" Nov 6 00:26:59.367103 containerd[1686]: 2025-11-06 00:26:59.275 [INFO][4396] ipam/ipam.go 158: Attempting to load block cidr=192.168.35.0/26 host="ci-4459.1.0-n-3bced53249" Nov 6 00:26:59.367103 containerd[1686]: 2025-11-06 00:26:59.277 [INFO][4396] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.35.0/26 host="ci-4459.1.0-n-3bced53249" Nov 6 00:26:59.367296 containerd[1686]: 2025-11-06 00:26:59.277 [INFO][4396] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.35.0/26 handle="k8s-pod-network.a4f7588804eae37323e75e42e207c62ab5fd813dec873998d6ddebb2386f3808" host="ci-4459.1.0-n-3bced53249" Nov 6 00:26:59.367296 containerd[1686]: 2025-11-06 00:26:59.278 [INFO][4396] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a4f7588804eae37323e75e42e207c62ab5fd813dec873998d6ddebb2386f3808 Nov 6 00:26:59.367296 containerd[1686]: 2025-11-06 00:26:59.298 [INFO][4396] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.35.0/26 handle="k8s-pod-network.a4f7588804eae37323e75e42e207c62ab5fd813dec873998d6ddebb2386f3808" host="ci-4459.1.0-n-3bced53249" Nov 6 00:26:59.367296 containerd[1686]: 2025-11-06 00:26:59.306 [INFO][4396] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.35.1/26] block=192.168.35.0/26 handle="k8s-pod-network.a4f7588804eae37323e75e42e207c62ab5fd813dec873998d6ddebb2386f3808" host="ci-4459.1.0-n-3bced53249" Nov 6 
00:26:59.367296 containerd[1686]: 2025-11-06 00:26:59.306 [INFO][4396] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.35.1/26] handle="k8s-pod-network.a4f7588804eae37323e75e42e207c62ab5fd813dec873998d6ddebb2386f3808" host="ci-4459.1.0-n-3bced53249" Nov 6 00:26:59.367296 containerd[1686]: 2025-11-06 00:26:59.306 [INFO][4396] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 6 00:26:59.367296 containerd[1686]: 2025-11-06 00:26:59.306 [INFO][4396] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.35.1/26] IPv6=[] ContainerID="a4f7588804eae37323e75e42e207c62ab5fd813dec873998d6ddebb2386f3808" HandleID="k8s-pod-network.a4f7588804eae37323e75e42e207c62ab5fd813dec873998d6ddebb2386f3808" Workload="ci--4459.1.0--n--3bced53249-k8s-whisker--6455d868--gqpdl-eth0" Nov 6 00:26:59.367440 containerd[1686]: 2025-11-06 00:26:59.311 [INFO][4385] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a4f7588804eae37323e75e42e207c62ab5fd813dec873998d6ddebb2386f3808" Namespace="calico-system" Pod="whisker-6455d868-gqpdl" WorkloadEndpoint="ci--4459.1.0--n--3bced53249-k8s-whisker--6455d868--gqpdl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--3bced53249-k8s-whisker--6455d868--gqpdl-eth0", GenerateName:"whisker-6455d868-", Namespace:"calico-system", SelfLink:"", UID:"5f9595dd-def0-470b-b230-616c1ccc6ebf", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 26, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6455d868", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-3bced53249", ContainerID:"", Pod:"whisker-6455d868-gqpdl", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.35.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"califf9525038cb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:26:59.367440 containerd[1686]: 2025-11-06 00:26:59.311 [INFO][4385] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.35.1/32] ContainerID="a4f7588804eae37323e75e42e207c62ab5fd813dec873998d6ddebb2386f3808" Namespace="calico-system" Pod="whisker-6455d868-gqpdl" WorkloadEndpoint="ci--4459.1.0--n--3bced53249-k8s-whisker--6455d868--gqpdl-eth0" Nov 6 00:26:59.367521 containerd[1686]: 2025-11-06 00:26:59.311 [INFO][4385] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califf9525038cb ContainerID="a4f7588804eae37323e75e42e207c62ab5fd813dec873998d6ddebb2386f3808" Namespace="calico-system" Pod="whisker-6455d868-gqpdl" WorkloadEndpoint="ci--4459.1.0--n--3bced53249-k8s-whisker--6455d868--gqpdl-eth0" Nov 6 00:26:59.367521 containerd[1686]: 2025-11-06 00:26:59.350 [INFO][4385] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a4f7588804eae37323e75e42e207c62ab5fd813dec873998d6ddebb2386f3808" Namespace="calico-system" Pod="whisker-6455d868-gqpdl" 
WorkloadEndpoint="ci--4459.1.0--n--3bced53249-k8s-whisker--6455d868--gqpdl-eth0" Nov 6 00:26:59.367565 containerd[1686]: 2025-11-06 00:26:59.351 [INFO][4385] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a4f7588804eae37323e75e42e207c62ab5fd813dec873998d6ddebb2386f3808" Namespace="calico-system" Pod="whisker-6455d868-gqpdl" WorkloadEndpoint="ci--4459.1.0--n--3bced53249-k8s-whisker--6455d868--gqpdl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--3bced53249-k8s-whisker--6455d868--gqpdl-eth0", GenerateName:"whisker-6455d868-", Namespace:"calico-system", SelfLink:"", UID:"5f9595dd-def0-470b-b230-616c1ccc6ebf", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 26, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6455d868", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-3bced53249", ContainerID:"a4f7588804eae37323e75e42e207c62ab5fd813dec873998d6ddebb2386f3808", Pod:"whisker-6455d868-gqpdl", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.35.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"califf9525038cb", MAC:"f2:97:f1:60:5f:36", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:26:59.367618 containerd[1686]: 2025-11-06 00:26:59.362 [INFO][4385] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a4f7588804eae37323e75e42e207c62ab5fd813dec873998d6ddebb2386f3808" Namespace="calico-system" Pod="whisker-6455d868-gqpdl" WorkloadEndpoint="ci--4459.1.0--n--3bced53249-k8s-whisker--6455d868--gqpdl-eth0" Nov 6 00:26:59.399762 containerd[1686]: time="2025-11-06T00:26:59.399715151Z" level=info msg="connecting to shim a4f7588804eae37323e75e42e207c62ab5fd813dec873998d6ddebb2386f3808" address="unix:///run/containerd/s/6be60f21525352a4eca083857cb5fe8dea95da200feff60e2f568a1214a82bd8" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:26:59.419992 systemd[1]: Started cri-containerd-a4f7588804eae37323e75e42e207c62ab5fd813dec873998d6ddebb2386f3808.scope - libcontainer container a4f7588804eae37323e75e42e207c62ab5fd813dec873998d6ddebb2386f3808. 
Nov 6 00:26:59.455139 containerd[1686]: time="2025-11-06T00:26:59.455085392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6455d868-gqpdl,Uid:5f9595dd-def0-470b-b230-616c1ccc6ebf,Namespace:calico-system,Attempt:0,} returns sandbox id \"a4f7588804eae37323e75e42e207c62ab5fd813dec873998d6ddebb2386f3808\"" Nov 6 00:26:59.456942 containerd[1686]: time="2025-11-06T00:26:59.456922142Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 6 00:26:59.697290 containerd[1686]: time="2025-11-06T00:26:59.697237001Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:26:59.700473 containerd[1686]: time="2025-11-06T00:26:59.700445716Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 6 00:26:59.700553 containerd[1686]: time="2025-11-06T00:26:59.700449025Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 6 00:26:59.700685 kubelet[3152]: E1106 00:26:59.700654 3152 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 00:26:59.700750 kubelet[3152]: E1106 00:26:59.700701 3152 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 00:26:59.700799 kubelet[3152]: E1106 00:26:59.700778 3152 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-6455d868-gqpdl_calico-system(5f9595dd-def0-470b-b230-616c1ccc6ebf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 6 00:26:59.702121 containerd[1686]: time="2025-11-06T00:26:59.702091134Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 6 00:26:59.945039 containerd[1686]: time="2025-11-06T00:26:59.945002529Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:26:59.947631 containerd[1686]: time="2025-11-06T00:26:59.947318341Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 6 00:26:59.947631 containerd[1686]: time="2025-11-06T00:26:59.947348888Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 6 00:26:59.947730 kubelet[3152]: E1106 00:26:59.947499 3152 log.go:32] "PullImage from image service failed" err="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 00:26:59.947730 kubelet[3152]: E1106 00:26:59.947537 3152 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 00:26:59.947730 kubelet[3152]: E1106 00:26:59.947606 3152 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-6455d868-gqpdl_calico-system(5f9595dd-def0-470b-b230-616c1ccc6ebf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 6 00:26:59.947812 kubelet[3152]: E1106 00:26:59.947648 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6455d868-gqpdl" podUID="5f9595dd-def0-470b-b230-616c1ccc6ebf" Nov 6 00:26:59.968822 kubelet[3152]: I1106 00:26:59.968771 3152 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 6 00:27:00.628759 kubelet[3152]: I1106 00:27:00.628726 3152 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d2f27f2-55bd-484a-8c3a-4032ff9011c3" path="/var/lib/kubelet/pods/3d2f27f2-55bd-484a-8c3a-4032ff9011c3/volumes" Nov 6 00:27:00.779784 kubelet[3152]: E1106 00:27:00.779746 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6455d868-gqpdl" 
podUID="5f9595dd-def0-470b-b230-616c1ccc6ebf" Nov 6 00:27:00.836166 systemd-networkd[1488]: vxlan.calico: Link UP Nov 6 00:27:00.836176 systemd-networkd[1488]: vxlan.calico: Gained carrier Nov 6 00:27:01.249009 systemd-networkd[1488]: califf9525038cb: Gained IPv6LL Nov 6 00:27:01.889056 systemd-networkd[1488]: vxlan.calico: Gained IPv6LL Nov 6 00:27:02.632402 containerd[1686]: time="2025-11-06T00:27:02.632354455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8559b785f9-n2pht,Uid:c9883439-5e85-428a-8c5e-1baa916caf76,Namespace:calico-apiserver,Attempt:0,}" Nov 6 00:27:02.720461 systemd-networkd[1488]: cali2e08c946466: Link UP Nov 6 00:27:02.720617 systemd-networkd[1488]: cali2e08c946466: Gained carrier Nov 6 00:27:02.733340 containerd[1686]: 2025-11-06 00:27:02.665 [INFO][4595] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--n--3bced53249-k8s-calico--apiserver--8559b785f9--n2pht-eth0 calico-apiserver-8559b785f9- calico-apiserver c9883439-5e85-428a-8c5e-1baa916caf76 822 0 2025-11-06 00:26:36 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8559b785f9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.1.0-n-3bced53249 calico-apiserver-8559b785f9-n2pht eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2e08c946466 [] [] }} ContainerID="3146bce8e59950bc3d337d74d06f45d3c5918b7c9e2be2a019795b9b66458b51" Namespace="calico-apiserver" Pod="calico-apiserver-8559b785f9-n2pht" WorkloadEndpoint="ci--4459.1.0--n--3bced53249-k8s-calico--apiserver--8559b785f9--n2pht-" Nov 6 00:27:02.733340 containerd[1686]: 2025-11-06 00:27:02.665 [INFO][4595] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3146bce8e59950bc3d337d74d06f45d3c5918b7c9e2be2a019795b9b66458b51" Namespace="calico-apiserver" Pod="calico-apiserver-8559b785f9-n2pht" WorkloadEndpoint="ci--4459.1.0--n--3bced53249-k8s-calico--apiserver--8559b785f9--n2pht-eth0" Nov 6 00:27:02.733340 containerd[1686]: 2025-11-06 00:27:02.685 [INFO][4608] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3146bce8e59950bc3d337d74d06f45d3c5918b7c9e2be2a019795b9b66458b51" HandleID="k8s-pod-network.3146bce8e59950bc3d337d74d06f45d3c5918b7c9e2be2a019795b9b66458b51" Workload="ci--4459.1.0--n--3bced53249-k8s-calico--apiserver--8559b785f9--n2pht-eth0" Nov 6 00:27:02.733524 containerd[1686]: 2025-11-06 00:27:02.685 [INFO][4608] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3146bce8e59950bc3d337d74d06f45d3c5918b7c9e2be2a019795b9b66458b51" HandleID="k8s-pod-network.3146bce8e59950bc3d337d74d06f45d3c5918b7c9e2be2a019795b9b66458b51" Workload="ci--4459.1.0--n--3bced53249-k8s-calico--apiserver--8559b785f9--n2pht-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024efe0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.1.0-n-3bced53249", "pod":"calico-apiserver-8559b785f9-n2pht", "timestamp":"2025-11-06 00:27:02.685105491 +0000 UTC"}, Hostname:"ci-4459.1.0-n-3bced53249", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:27:02.733524 containerd[1686]: 2025-11-06 00:27:02.685 [INFO][4608] ipam/ipam_plugin.go 377: 
About to acquire host-wide IPAM lock. Nov 6 00:27:02.733524 containerd[1686]: 2025-11-06 00:27:02.685 [INFO][4608] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 6 00:27:02.733524 containerd[1686]: 2025-11-06 00:27:02.685 [INFO][4608] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-n-3bced53249' Nov 6 00:27:02.733524 containerd[1686]: 2025-11-06 00:27:02.690 [INFO][4608] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3146bce8e59950bc3d337d74d06f45d3c5918b7c9e2be2a019795b9b66458b51" host="ci-4459.1.0-n-3bced53249" Nov 6 00:27:02.733524 containerd[1686]: 2025-11-06 00:27:02.694 [INFO][4608] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-n-3bced53249" Nov 6 00:27:02.733524 containerd[1686]: 2025-11-06 00:27:02.698 [INFO][4608] ipam/ipam.go 511: Trying affinity for 192.168.35.0/26 host="ci-4459.1.0-n-3bced53249" Nov 6 00:27:02.733524 containerd[1686]: 2025-11-06 00:27:02.699 [INFO][4608] ipam/ipam.go 158: Attempting to load block cidr=192.168.35.0/26 host="ci-4459.1.0-n-3bced53249" Nov 6 00:27:02.733524 containerd[1686]: 2025-11-06 00:27:02.701 [INFO][4608] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.35.0/26 host="ci-4459.1.0-n-3bced53249" Nov 6 00:27:02.734042 containerd[1686]: 2025-11-06 00:27:02.701 [INFO][4608] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.35.0/26 handle="k8s-pod-network.3146bce8e59950bc3d337d74d06f45d3c5918b7c9e2be2a019795b9b66458b51" host="ci-4459.1.0-n-3bced53249" Nov 6 00:27:02.734042 containerd[1686]: 2025-11-06 00:27:02.702 [INFO][4608] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3146bce8e59950bc3d337d74d06f45d3c5918b7c9e2be2a019795b9b66458b51 Nov 6 00:27:02.734042 containerd[1686]: 2025-11-06 00:27:02.706 [INFO][4608] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.35.0/26 handle="k8s-pod-network.3146bce8e59950bc3d337d74d06f45d3c5918b7c9e2be2a019795b9b66458b51" host="ci-4459.1.0-n-3bced53249" Nov 6 00:27:02.734042 containerd[1686]: 2025-11-06 00:27:02.716 [INFO][4608] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.35.2/26] block=192.168.35.0/26 handle="k8s-pod-network.3146bce8e59950bc3d337d74d06f45d3c5918b7c9e2be2a019795b9b66458b51" host="ci-4459.1.0-n-3bced53249" Nov 6 00:27:02.734042 containerd[1686]: 2025-11-06 00:27:02.716 [INFO][4608] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.35.2/26] handle="k8s-pod-network.3146bce8e59950bc3d337d74d06f45d3c5918b7c9e2be2a019795b9b66458b51" host="ci-4459.1.0-n-3bced53249" Nov 6 00:27:02.734042 containerd[1686]: 2025-11-06 00:27:02.716 [INFO][4608] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
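
Stepping back to the whisker image failures a few entries above: ghcr.io answers 404 Not Found for both whisker tags, containerd surfaces that as a NotFound pull error, and kubelet first records ErrImagePull and then, on the next sync, ImagePullBackOff (the 00:27:00.779 entry). The retry shape is an exponential backoff between pull attempts; the 10 s initial delay and 5 m cap below are illustrative assumptions, not values taken from this log.

package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 10 * time.Second   // assumed initial backoff, for illustration only
	maxDelay := 5 * time.Minute // assumed cap
	for attempt := 1; attempt <= 6; attempt++ {
		fmt.Printf("attempt %d: pull ghcr.io/flatcar/calico/whisker:v3.30.4 -> 404, next retry in %s\n", attempt, delay)
		delay *= 2 // doubling with a ceiling
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
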
Nov 6 00:27:02.734042 containerd[1686]: 2025-11-06 00:27:02.716 [INFO][4608] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.35.2/26] IPv6=[] ContainerID="3146bce8e59950bc3d337d74d06f45d3c5918b7c9e2be2a019795b9b66458b51" HandleID="k8s-pod-network.3146bce8e59950bc3d337d74d06f45d3c5918b7c9e2be2a019795b9b66458b51" Workload="ci--4459.1.0--n--3bced53249-k8s-calico--apiserver--8559b785f9--n2pht-eth0" Nov 6 00:27:02.734306 containerd[1686]: 2025-11-06 00:27:02.717 [INFO][4595] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3146bce8e59950bc3d337d74d06f45d3c5918b7c9e2be2a019795b9b66458b51" Namespace="calico-apiserver" Pod="calico-apiserver-8559b785f9-n2pht" WorkloadEndpoint="ci--4459.1.0--n--3bced53249-k8s-calico--apiserver--8559b785f9--n2pht-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--3bced53249-k8s-calico--apiserver--8559b785f9--n2pht-eth0", GenerateName:"calico-apiserver-8559b785f9-", Namespace:"calico-apiserver", SelfLink:"", UID:"c9883439-5e85-428a-8c5e-1baa916caf76", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 26, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8559b785f9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-3bced53249", ContainerID:"", Pod:"calico-apiserver-8559b785f9-n2pht", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.35.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2e08c946466", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:27:02.734399 containerd[1686]: 2025-11-06 00:27:02.718 [INFO][4595] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.35.2/32] ContainerID="3146bce8e59950bc3d337d74d06f45d3c5918b7c9e2be2a019795b9b66458b51" Namespace="calico-apiserver" Pod="calico-apiserver-8559b785f9-n2pht" WorkloadEndpoint="ci--4459.1.0--n--3bced53249-k8s-calico--apiserver--8559b785f9--n2pht-eth0" Nov 6 00:27:02.734399 containerd[1686]: 2025-11-06 00:27:02.718 [INFO][4595] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2e08c946466 ContainerID="3146bce8e59950bc3d337d74d06f45d3c5918b7c9e2be2a019795b9b66458b51" Namespace="calico-apiserver" Pod="calico-apiserver-8559b785f9-n2pht" WorkloadEndpoint="ci--4459.1.0--n--3bced53249-k8s-calico--apiserver--8559b785f9--n2pht-eth0" Nov 6 00:27:02.734399 containerd[1686]: 2025-11-06 00:27:02.719 [INFO][4595] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3146bce8e59950bc3d337d74d06f45d3c5918b7c9e2be2a019795b9b66458b51" Namespace="calico-apiserver" Pod="calico-apiserver-8559b785f9-n2pht" WorkloadEndpoint="ci--4459.1.0--n--3bced53249-k8s-calico--apiserver--8559b785f9--n2pht-eth0" Nov 6 00:27:02.734663 containerd[1686]: 2025-11-06 00:27:02.720 [INFO][4595] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3146bce8e59950bc3d337d74d06f45d3c5918b7c9e2be2a019795b9b66458b51" Namespace="calico-apiserver" Pod="calico-apiserver-8559b785f9-n2pht" WorkloadEndpoint="ci--4459.1.0--n--3bced53249-k8s-calico--apiserver--8559b785f9--n2pht-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--3bced53249-k8s-calico--apiserver--8559b785f9--n2pht-eth0", GenerateName:"calico-apiserver-8559b785f9-", Namespace:"calico-apiserver", SelfLink:"", UID:"c9883439-5e85-428a-8c5e-1baa916caf76", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 26, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8559b785f9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-3bced53249", ContainerID:"3146bce8e59950bc3d337d74d06f45d3c5918b7c9e2be2a019795b9b66458b51", Pod:"calico-apiserver-8559b785f9-n2pht", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.35.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2e08c946466", MAC:"8a:85:e9:3a:69:c2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:27:02.734756 containerd[1686]: 2025-11-06 00:27:02.730 [INFO][4595] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3146bce8e59950bc3d337d74d06f45d3c5918b7c9e2be2a019795b9b66458b51" Namespace="calico-apiserver" Pod="calico-apiserver-8559b785f9-n2pht" WorkloadEndpoint="ci--4459.1.0--n--3bced53249-k8s-calico--apiserver--8559b785f9--n2pht-eth0" Nov 6 00:27:02.779091 containerd[1686]: time="2025-11-06T00:27:02.779057989Z" level=info msg="connecting to shim 3146bce8e59950bc3d337d74d06f45d3c5918b7c9e2be2a019795b9b66458b51" address="unix:///run/containerd/s/5cb07200fa975b50d42d9a8661b03d249393e1fc619a0651ff82d753ab6aed4e" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:27:02.805022 systemd[1]: Started cri-containerd-3146bce8e59950bc3d337d74d06f45d3c5918b7c9e2be2a019795b9b66458b51.scope - libcontainer container 3146bce8e59950bc3d337d74d06f45d3c5918b7c9e2be2a019795b9b66458b51. 
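
The "connecting to shim ... protocol=ttrpc version=3" entries describe containerd dialing the per-container shim over a unix socket and then speaking ttrpc on that connection. At the transport level that is just a unix-socket dial; a sketch of that step only (the ttrpc client itself is omitted, and the socket path exists only on the node):

package main

import (
	"fmt"
	"net"
	"strings"
)

// dialShim strips the unix:// scheme from the address logged by containerd
// and opens the socket. Everything after this point (the ttrpc handshake and
// task API calls) is left out of this sketch.
func dialShim(address string) (net.Conn, error) {
	return net.Dial("unix", strings.TrimPrefix(address, "unix://"))
}

func main() {
	conn, err := dialShim("unix:///run/containerd/s/5cb07200fa975b50d42d9a8661b03d249393e1fc619a0651ff82d753ab6aed4e")
	if err != nil {
		fmt.Println("dial failed (expected anywhere but on the node itself):", err)
		return
	}
	defer conn.Close()
	fmt.Println("shim socket reachable")
}
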
Nov 6 00:27:02.841163 containerd[1686]: time="2025-11-06T00:27:02.841128064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8559b785f9-n2pht,Uid:c9883439-5e85-428a-8c5e-1baa916caf76,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"3146bce8e59950bc3d337d74d06f45d3c5918b7c9e2be2a019795b9b66458b51\"" Nov 6 00:27:02.842532 containerd[1686]: time="2025-11-06T00:27:02.842352881Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:27:03.080306 containerd[1686]: time="2025-11-06T00:27:03.080250057Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:27:03.082754 containerd[1686]: time="2025-11-06T00:27:03.082726086Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:27:03.082825 containerd[1686]: time="2025-11-06T00:27:03.082802641Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:27:03.082978 kubelet[3152]: E1106 00:27:03.082943 3152 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:27:03.083272 kubelet[3152]: E1106 00:27:03.082990 3152 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:27:03.083272 kubelet[3152]: E1106 00:27:03.083070 3152 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-8559b785f9-n2pht_calico-apiserver(c9883439-5e85-428a-8c5e-1baa916caf76): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:27:03.083272 kubelet[3152]: E1106 00:27:03.083102 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8559b785f9-n2pht" podUID="c9883439-5e85-428a-8c5e-1baa916caf76" Nov 6 00:27:03.632475 containerd[1686]: time="2025-11-06T00:27:03.632437698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cdd9bb7bc-2b49s,Uid:62834f18-0344-4626-bcdf-b650cdc6187d,Namespace:calico-system,Attempt:0,}" Nov 6 00:27:03.727354 systemd-networkd[1488]: calic3fbb47751d: Link UP Nov 6 00:27:03.729187 systemd-networkd[1488]: calic3fbb47751d: Gained carrier Nov 6 00:27:03.746025 
containerd[1686]: 2025-11-06 00:27:03.672 [INFO][4669] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--n--3bced53249-k8s-calico--kube--controllers--cdd9bb7bc--2b49s-eth0 calico-kube-controllers-cdd9bb7bc- calico-system 62834f18-0344-4626-bcdf-b650cdc6187d 831 0 2025-11-06 00:26:40 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:cdd9bb7bc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4459.1.0-n-3bced53249 calico-kube-controllers-cdd9bb7bc-2b49s eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic3fbb47751d [] [] }} ContainerID="b323821efb3adbcc77dbd0cdc4156b02d06ee572e119ad88e1dda4d3fe0be4fa" Namespace="calico-system" Pod="calico-kube-controllers-cdd9bb7bc-2b49s" WorkloadEndpoint="ci--4459.1.0--n--3bced53249-k8s-calico--kube--controllers--cdd9bb7bc--2b49s-" Nov 6 00:27:03.746025 containerd[1686]: 2025-11-06 00:27:03.672 [INFO][4669] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b323821efb3adbcc77dbd0cdc4156b02d06ee572e119ad88e1dda4d3fe0be4fa" Namespace="calico-system" Pod="calico-kube-controllers-cdd9bb7bc-2b49s" WorkloadEndpoint="ci--4459.1.0--n--3bced53249-k8s-calico--kube--controllers--cdd9bb7bc--2b49s-eth0" Nov 6 00:27:03.746025 containerd[1686]: 2025-11-06 00:27:03.694 [INFO][4681] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b323821efb3adbcc77dbd0cdc4156b02d06ee572e119ad88e1dda4d3fe0be4fa" HandleID="k8s-pod-network.b323821efb3adbcc77dbd0cdc4156b02d06ee572e119ad88e1dda4d3fe0be4fa" Workload="ci--4459.1.0--n--3bced53249-k8s-calico--kube--controllers--cdd9bb7bc--2b49s-eth0" Nov 6 00:27:03.746216 containerd[1686]: 2025-11-06 00:27:03.694 [INFO][4681] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b323821efb3adbcc77dbd0cdc4156b02d06ee572e119ad88e1dda4d3fe0be4fa" HandleID="k8s-pod-network.b323821efb3adbcc77dbd0cdc4156b02d06ee572e119ad88e1dda4d3fe0be4fa" Workload="ci--4459.1.0--n--3bced53249-k8s-calico--kube--controllers--cdd9bb7bc--2b49s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f0f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.1.0-n-3bced53249", "pod":"calico-kube-controllers-cdd9bb7bc-2b49s", "timestamp":"2025-11-06 00:27:03.694115368 +0000 UTC"}, Hostname:"ci-4459.1.0-n-3bced53249", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:27:03.746216 containerd[1686]: 2025-11-06 00:27:03.694 [INFO][4681] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:27:03.746216 containerd[1686]: 2025-11-06 00:27:03.694 [INFO][4681] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 6 00:27:03.746216 containerd[1686]: 2025-11-06 00:27:03.694 [INFO][4681] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-n-3bced53249' Nov 6 00:27:03.746216 containerd[1686]: 2025-11-06 00:27:03.699 [INFO][4681] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b323821efb3adbcc77dbd0cdc4156b02d06ee572e119ad88e1dda4d3fe0be4fa" host="ci-4459.1.0-n-3bced53249" Nov 6 00:27:03.746216 containerd[1686]: 2025-11-06 00:27:03.702 [INFO][4681] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-n-3bced53249" Nov 6 00:27:03.746216 containerd[1686]: 2025-11-06 00:27:03.705 [INFO][4681] ipam/ipam.go 511: Trying affinity for 192.168.35.0/26 host="ci-4459.1.0-n-3bced53249" Nov 6 00:27:03.746216 containerd[1686]: 2025-11-06 00:27:03.706 [INFO][4681] ipam/ipam.go 158: Attempting to load block cidr=192.168.35.0/26 host="ci-4459.1.0-n-3bced53249" Nov 6 00:27:03.746216 containerd[1686]: 2025-11-06 00:27:03.708 [INFO][4681] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.35.0/26 host="ci-4459.1.0-n-3bced53249" Nov 6 00:27:03.746406 containerd[1686]: 2025-11-06 00:27:03.708 [INFO][4681] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.35.0/26 handle="k8s-pod-network.b323821efb3adbcc77dbd0cdc4156b02d06ee572e119ad88e1dda4d3fe0be4fa" host="ci-4459.1.0-n-3bced53249" Nov 6 00:27:03.746406 containerd[1686]: 2025-11-06 00:27:03.709 [INFO][4681] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b323821efb3adbcc77dbd0cdc4156b02d06ee572e119ad88e1dda4d3fe0be4fa Nov 6 00:27:03.746406 containerd[1686]: 2025-11-06 00:27:03.714 [INFO][4681] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.35.0/26 handle="k8s-pod-network.b323821efb3adbcc77dbd0cdc4156b02d06ee572e119ad88e1dda4d3fe0be4fa" host="ci-4459.1.0-n-3bced53249" Nov 6 00:27:03.746406 containerd[1686]: 2025-11-06 00:27:03.722 [INFO][4681] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.35.3/26] block=192.168.35.0/26 handle="k8s-pod-network.b323821efb3adbcc77dbd0cdc4156b02d06ee572e119ad88e1dda4d3fe0be4fa" host="ci-4459.1.0-n-3bced53249" Nov 6 00:27:03.746406 containerd[1686]: 2025-11-06 00:27:03.722 [INFO][4681] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.35.3/26] handle="k8s-pod-network.b323821efb3adbcc77dbd0cdc4156b02d06ee572e119ad88e1dda4d3fe0be4fa" host="ci-4459.1.0-n-3bced53249" Nov 6 00:27:03.746406 containerd[1686]: 2025-11-06 00:27:03.722 [INFO][4681] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
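
Each assignment above is bracketed by "About to acquire / Acquired / Released host-wide IPAM lock": on a given node only one CNI ADD hands out an address at a time, so concurrent sandbox setups cannot claim the same IP from the block. A minimal sketch of that serialization, with an in-process mutex standing in for whatever node-wide lock the real allocator uses:

package main

import (
	"fmt"
	"sync"
)

type allocator struct {
	mu   sync.Mutex // plays the role of the "host-wide IPAM lock" in the log
	next int        // last host octet handed out from 192.168.35.0/26
}

func (a *allocator) assign(pod string) string {
	a.mu.Lock()         // "Acquired host-wide IPAM lock."
	defer a.mu.Unlock() // "Released host-wide IPAM lock."
	a.next++
	return fmt.Sprintf("192.168.35.%d/32 -> %s", a.next, pod)
}

func main() {
	a := &allocator{}
	var wg sync.WaitGroup
	for _, pod := range []string{"whisker-6455d868-gqpdl", "calico-apiserver-8559b785f9-n2pht", "calico-kube-controllers-cdd9bb7bc-2b49s"} {
		wg.Add(1)
		go func(p string) {
			defer wg.Done()
			fmt.Println(a.assign(p)) // serialized, so no duplicate addresses
		}(pod)
	}
	wg.Wait()
}
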
Nov 6 00:27:03.746406 containerd[1686]: 2025-11-06 00:27:03.722 [INFO][4681] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.35.3/26] IPv6=[] ContainerID="b323821efb3adbcc77dbd0cdc4156b02d06ee572e119ad88e1dda4d3fe0be4fa" HandleID="k8s-pod-network.b323821efb3adbcc77dbd0cdc4156b02d06ee572e119ad88e1dda4d3fe0be4fa" Workload="ci--4459.1.0--n--3bced53249-k8s-calico--kube--controllers--cdd9bb7bc--2b49s-eth0" Nov 6 00:27:03.746538 containerd[1686]: 2025-11-06 00:27:03.724 [INFO][4669] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b323821efb3adbcc77dbd0cdc4156b02d06ee572e119ad88e1dda4d3fe0be4fa" Namespace="calico-system" Pod="calico-kube-controllers-cdd9bb7bc-2b49s" WorkloadEndpoint="ci--4459.1.0--n--3bced53249-k8s-calico--kube--controllers--cdd9bb7bc--2b49s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--3bced53249-k8s-calico--kube--controllers--cdd9bb7bc--2b49s-eth0", GenerateName:"calico-kube-controllers-cdd9bb7bc-", Namespace:"calico-system", SelfLink:"", UID:"62834f18-0344-4626-bcdf-b650cdc6187d", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 26, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"cdd9bb7bc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-3bced53249", ContainerID:"", Pod:"calico-kube-controllers-cdd9bb7bc-2b49s", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.35.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic3fbb47751d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:27:03.746594 containerd[1686]: 2025-11-06 00:27:03.724 [INFO][4669] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.35.3/32] ContainerID="b323821efb3adbcc77dbd0cdc4156b02d06ee572e119ad88e1dda4d3fe0be4fa" Namespace="calico-system" Pod="calico-kube-controllers-cdd9bb7bc-2b49s" WorkloadEndpoint="ci--4459.1.0--n--3bced53249-k8s-calico--kube--controllers--cdd9bb7bc--2b49s-eth0" Nov 6 00:27:03.746594 containerd[1686]: 2025-11-06 00:27:03.724 [INFO][4669] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic3fbb47751d ContainerID="b323821efb3adbcc77dbd0cdc4156b02d06ee572e119ad88e1dda4d3fe0be4fa" Namespace="calico-system" Pod="calico-kube-controllers-cdd9bb7bc-2b49s" WorkloadEndpoint="ci--4459.1.0--n--3bced53249-k8s-calico--kube--controllers--cdd9bb7bc--2b49s-eth0" Nov 6 00:27:03.746594 containerd[1686]: 2025-11-06 00:27:03.728 [INFO][4669] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b323821efb3adbcc77dbd0cdc4156b02d06ee572e119ad88e1dda4d3fe0be4fa" Namespace="calico-system" Pod="calico-kube-controllers-cdd9bb7bc-2b49s" WorkloadEndpoint="ci--4459.1.0--n--3bced53249-k8s-calico--kube--controllers--cdd9bb7bc--2b49s-eth0" Nov 6 
00:27:03.746655 containerd[1686]: 2025-11-06 00:27:03.728 [INFO][4669] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b323821efb3adbcc77dbd0cdc4156b02d06ee572e119ad88e1dda4d3fe0be4fa" Namespace="calico-system" Pod="calico-kube-controllers-cdd9bb7bc-2b49s" WorkloadEndpoint="ci--4459.1.0--n--3bced53249-k8s-calico--kube--controllers--cdd9bb7bc--2b49s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--3bced53249-k8s-calico--kube--controllers--cdd9bb7bc--2b49s-eth0", GenerateName:"calico-kube-controllers-cdd9bb7bc-", Namespace:"calico-system", SelfLink:"", UID:"62834f18-0344-4626-bcdf-b650cdc6187d", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 26, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"cdd9bb7bc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-3bced53249", ContainerID:"b323821efb3adbcc77dbd0cdc4156b02d06ee572e119ad88e1dda4d3fe0be4fa", Pod:"calico-kube-controllers-cdd9bb7bc-2b49s", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.35.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic3fbb47751d", MAC:"8e:11:ea:31:0c:af", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:27:03.746706 containerd[1686]: 2025-11-06 00:27:03.743 [INFO][4669] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b323821efb3adbcc77dbd0cdc4156b02d06ee572e119ad88e1dda4d3fe0be4fa" Namespace="calico-system" Pod="calico-kube-controllers-cdd9bb7bc-2b49s" WorkloadEndpoint="ci--4459.1.0--n--3bced53249-k8s-calico--kube--controllers--cdd9bb7bc--2b49s-eth0" Nov 6 00:27:03.784312 kubelet[3152]: E1106 00:27:03.784084 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8559b785f9-n2pht" podUID="c9883439-5e85-428a-8c5e-1baa916caf76" Nov 6 00:27:03.814593 containerd[1686]: time="2025-11-06T00:27:03.813984388Z" level=info msg="connecting to shim b323821efb3adbcc77dbd0cdc4156b02d06ee572e119ad88e1dda4d3fe0be4fa" address="unix:///run/containerd/s/89fb907b8511b8ee32d57d94038b7b0619a1d01860c91e62e82909ef62a92fd1" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:27:03.843089 systemd[1]: Started cri-containerd-b323821efb3adbcc77dbd0cdc4156b02d06ee572e119ad88e1dda4d3fe0be4fa.scope - libcontainer container 
b323821efb3adbcc77dbd0cdc4156b02d06ee572e119ad88e1dda4d3fe0be4fa. Nov 6 00:27:03.882039 containerd[1686]: time="2025-11-06T00:27:03.882016505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cdd9bb7bc-2b49s,Uid:62834f18-0344-4626-bcdf-b650cdc6187d,Namespace:calico-system,Attempt:0,} returns sandbox id \"b323821efb3adbcc77dbd0cdc4156b02d06ee572e119ad88e1dda4d3fe0be4fa\"" Nov 6 00:27:03.883436 containerd[1686]: time="2025-11-06T00:27:03.883235717Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 6 00:27:04.121134 containerd[1686]: time="2025-11-06T00:27:04.121089106Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:27:04.126069 containerd[1686]: time="2025-11-06T00:27:04.126043229Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 6 00:27:04.126137 containerd[1686]: time="2025-11-06T00:27:04.126117433Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 6 00:27:04.126313 kubelet[3152]: E1106 00:27:04.126272 3152 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 00:27:04.126609 kubelet[3152]: E1106 00:27:04.126318 3152 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 00:27:04.126609 kubelet[3152]: E1106 00:27:04.126394 3152 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-cdd9bb7bc-2b49s_calico-system(62834f18-0344-4626-bcdf-b650cdc6187d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 6 00:27:04.126609 kubelet[3152]: E1106 00:27:04.126427 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-cdd9bb7bc-2b49s" podUID="62834f18-0344-4626-bcdf-b650cdc6187d" Nov 6 00:27:04.257086 systemd-networkd[1488]: cali2e08c946466: Gained IPv6LL Nov 6 00:27:04.632437 containerd[1686]: time="2025-11-06T00:27:04.632332549Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-66bc5c9577-92pgk,Uid:e48dba36-29c2-4eef-9ba0-2dd98198c6d2,Namespace:kube-system,Attempt:0,}" Nov 6 00:27:04.639239 containerd[1686]: time="2025-11-06T00:27:04.639036603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4j9vt,Uid:9c757a1d-95f3-4cbd-9adf-b65065b2eb8c,Namespace:calico-system,Attempt:0,}" Nov 6 00:27:04.753540 systemd-networkd[1488]: cali60c6df76462: Link UP Nov 6 00:27:04.753660 systemd-networkd[1488]: cali60c6df76462: Gained carrier Nov 6 00:27:04.767494 containerd[1686]: 2025-11-06 00:27:04.686 [INFO][4742] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--n--3bced53249-k8s-coredns--66bc5c9577--92pgk-eth0 coredns-66bc5c9577- kube-system e48dba36-29c2-4eef-9ba0-2dd98198c6d2 828 0 2025-11-06 00:26:26 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.1.0-n-3bced53249 coredns-66bc5c9577-92pgk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali60c6df76462 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="9dcef4393b40b49cb27d815936f21bf21e97f6cb467e5ee2a911d335379a96eb" Namespace="kube-system" Pod="coredns-66bc5c9577-92pgk" WorkloadEndpoint="ci--4459.1.0--n--3bced53249-k8s-coredns--66bc5c9577--92pgk-" Nov 6 00:27:04.767494 containerd[1686]: 2025-11-06 00:27:04.686 [INFO][4742] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9dcef4393b40b49cb27d815936f21bf21e97f6cb467e5ee2a911d335379a96eb" Namespace="kube-system" Pod="coredns-66bc5c9577-92pgk" WorkloadEndpoint="ci--4459.1.0--n--3bced53249-k8s-coredns--66bc5c9577--92pgk-eth0" Nov 6 00:27:04.767494 containerd[1686]: 2025-11-06 00:27:04.717 [INFO][4769] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9dcef4393b40b49cb27d815936f21bf21e97f6cb467e5ee2a911d335379a96eb" HandleID="k8s-pod-network.9dcef4393b40b49cb27d815936f21bf21e97f6cb467e5ee2a911d335379a96eb" Workload="ci--4459.1.0--n--3bced53249-k8s-coredns--66bc5c9577--92pgk-eth0" Nov 6 00:27:04.768233 containerd[1686]: 2025-11-06 00:27:04.719 [INFO][4769] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9dcef4393b40b49cb27d815936f21bf21e97f6cb467e5ee2a911d335379a96eb" HandleID="k8s-pod-network.9dcef4393b40b49cb27d815936f21bf21e97f6cb467e5ee2a911d335379a96eb" Workload="ci--4459.1.0--n--3bced53249-k8s-coredns--66bc5c9577--92pgk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f160), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.1.0-n-3bced53249", "pod":"coredns-66bc5c9577-92pgk", "timestamp":"2025-11-06 00:27:04.717440915 +0000 UTC"}, Hostname:"ci-4459.1.0-n-3bced53249", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:27:04.768233 containerd[1686]: 2025-11-06 00:27:04.720 [INFO][4769] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:27:04.768233 containerd[1686]: 2025-11-06 00:27:04.720 [INFO][4769] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 6 00:27:04.768233 containerd[1686]: 2025-11-06 00:27:04.720 [INFO][4769] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-n-3bced53249' Nov 6 00:27:04.768233 containerd[1686]: 2025-11-06 00:27:04.724 [INFO][4769] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9dcef4393b40b49cb27d815936f21bf21e97f6cb467e5ee2a911d335379a96eb" host="ci-4459.1.0-n-3bced53249" Nov 6 00:27:04.768233 containerd[1686]: 2025-11-06 00:27:04.727 [INFO][4769] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-n-3bced53249" Nov 6 00:27:04.768233 containerd[1686]: 2025-11-06 00:27:04.729 [INFO][4769] ipam/ipam.go 511: Trying affinity for 192.168.35.0/26 host="ci-4459.1.0-n-3bced53249" Nov 6 00:27:04.768233 containerd[1686]: 2025-11-06 00:27:04.731 [INFO][4769] ipam/ipam.go 158: Attempting to load block cidr=192.168.35.0/26 host="ci-4459.1.0-n-3bced53249" Nov 6 00:27:04.768233 containerd[1686]: 2025-11-06 00:27:04.732 [INFO][4769] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.35.0/26 host="ci-4459.1.0-n-3bced53249" Nov 6 00:27:04.768551 containerd[1686]: 2025-11-06 00:27:04.732 [INFO][4769] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.35.0/26 handle="k8s-pod-network.9dcef4393b40b49cb27d815936f21bf21e97f6cb467e5ee2a911d335379a96eb" host="ci-4459.1.0-n-3bced53249" Nov 6 00:27:04.768551 containerd[1686]: 2025-11-06 00:27:04.733 [INFO][4769] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9dcef4393b40b49cb27d815936f21bf21e97f6cb467e5ee2a911d335379a96eb Nov 6 00:27:04.768551 containerd[1686]: 2025-11-06 00:27:04.742 [INFO][4769] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.35.0/26 handle="k8s-pod-network.9dcef4393b40b49cb27d815936f21bf21e97f6cb467e5ee2a911d335379a96eb" host="ci-4459.1.0-n-3bced53249" Nov 6 00:27:04.768551 containerd[1686]: 2025-11-06 00:27:04.749 [INFO][4769] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.35.4/26] block=192.168.35.0/26 handle="k8s-pod-network.9dcef4393b40b49cb27d815936f21bf21e97f6cb467e5ee2a911d335379a96eb" host="ci-4459.1.0-n-3bced53249" Nov 6 00:27:04.768551 containerd[1686]: 2025-11-06 00:27:04.749 [INFO][4769] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.35.4/26] handle="k8s-pod-network.9dcef4393b40b49cb27d815936f21bf21e97f6cb467e5ee2a911d335379a96eb" host="ci-4459.1.0-n-3bced53249" Nov 6 00:27:04.768551 containerd[1686]: 2025-11-06 00:27:04.749 [INFO][4769] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 6 00:27:04.768551 containerd[1686]: 2025-11-06 00:27:04.749 [INFO][4769] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.35.4/26] IPv6=[] ContainerID="9dcef4393b40b49cb27d815936f21bf21e97f6cb467e5ee2a911d335379a96eb" HandleID="k8s-pod-network.9dcef4393b40b49cb27d815936f21bf21e97f6cb467e5ee2a911d335379a96eb" Workload="ci--4459.1.0--n--3bced53249-k8s-coredns--66bc5c9577--92pgk-eth0" Nov 6 00:27:04.768658 containerd[1686]: 2025-11-06 00:27:04.751 [INFO][4742] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9dcef4393b40b49cb27d815936f21bf21e97f6cb467e5ee2a911d335379a96eb" Namespace="kube-system" Pod="coredns-66bc5c9577-92pgk" WorkloadEndpoint="ci--4459.1.0--n--3bced53249-k8s-coredns--66bc5c9577--92pgk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--3bced53249-k8s-coredns--66bc5c9577--92pgk-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"e48dba36-29c2-4eef-9ba0-2dd98198c6d2", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 26, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-3bced53249", ContainerID:"", Pod:"coredns-66bc5c9577-92pgk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.35.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali60c6df76462", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:27:04.768658 containerd[1686]: 2025-11-06 00:27:04.751 [INFO][4742] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.35.4/32] ContainerID="9dcef4393b40b49cb27d815936f21bf21e97f6cb467e5ee2a911d335379a96eb" Namespace="kube-system" Pod="coredns-66bc5c9577-92pgk" WorkloadEndpoint="ci--4459.1.0--n--3bced53249-k8s-coredns--66bc5c9577--92pgk-eth0" Nov 6 00:27:04.768658 containerd[1686]: 2025-11-06 00:27:04.751 [INFO][4742] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60c6df76462 ContainerID="9dcef4393b40b49cb27d815936f21bf21e97f6cb467e5ee2a911d335379a96eb" Namespace="kube-system" Pod="coredns-66bc5c9577-92pgk" 
WorkloadEndpoint="ci--4459.1.0--n--3bced53249-k8s-coredns--66bc5c9577--92pgk-eth0" Nov 6 00:27:04.768658 containerd[1686]: 2025-11-06 00:27:04.753 [INFO][4742] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9dcef4393b40b49cb27d815936f21bf21e97f6cb467e5ee2a911d335379a96eb" Namespace="kube-system" Pod="coredns-66bc5c9577-92pgk" WorkloadEndpoint="ci--4459.1.0--n--3bced53249-k8s-coredns--66bc5c9577--92pgk-eth0" Nov 6 00:27:04.768658 containerd[1686]: 2025-11-06 00:27:04.753 [INFO][4742] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9dcef4393b40b49cb27d815936f21bf21e97f6cb467e5ee2a911d335379a96eb" Namespace="kube-system" Pod="coredns-66bc5c9577-92pgk" WorkloadEndpoint="ci--4459.1.0--n--3bced53249-k8s-coredns--66bc5c9577--92pgk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--3bced53249-k8s-coredns--66bc5c9577--92pgk-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"e48dba36-29c2-4eef-9ba0-2dd98198c6d2", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 26, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-3bced53249", ContainerID:"9dcef4393b40b49cb27d815936f21bf21e97f6cb467e5ee2a911d335379a96eb", Pod:"coredns-66bc5c9577-92pgk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.35.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali60c6df76462", MAC:"02:4f:50:f4:5b:68", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:27:04.768812 containerd[1686]: 2025-11-06 00:27:04.765 [INFO][4742] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9dcef4393b40b49cb27d815936f21bf21e97f6cb467e5ee2a911d335379a96eb" Namespace="kube-system" Pod="coredns-66bc5c9577-92pgk" WorkloadEndpoint="ci--4459.1.0--n--3bced53249-k8s-coredns--66bc5c9577--92pgk-eth0" Nov 6 00:27:04.786944 kubelet[3152]: E1106 00:27:04.786690 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-cdd9bb7bc-2b49s" podUID="62834f18-0344-4626-bcdf-b650cdc6187d" Nov 6 00:27:04.792862 kubelet[3152]: E1106 00:27:04.787387 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8559b785f9-n2pht" podUID="c9883439-5e85-428a-8c5e-1baa916caf76" Nov 6 00:27:04.832566 containerd[1686]: time="2025-11-06T00:27:04.832504512Z" level=info msg="connecting to shim 9dcef4393b40b49cb27d815936f21bf21e97f6cb467e5ee2a911d335379a96eb" address="unix:///run/containerd/s/92c7a0d3a00dd453634977bf55a75c8377d92b61f638c0b07c6c4adc8511f0d1" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:27:04.860054 systemd[1]: Started cri-containerd-9dcef4393b40b49cb27d815936f21bf21e97f6cb467e5ee2a911d335379a96eb.scope - libcontainer container 9dcef4393b40b49cb27d815936f21bf21e97f6cb467e5ee2a911d335379a96eb. Nov 6 00:27:04.872104 systemd-networkd[1488]: cali45087bb3ec5: Link UP Nov 6 00:27:04.872726 systemd-networkd[1488]: cali45087bb3ec5: Gained carrier Nov 6 00:27:04.889274 containerd[1686]: 2025-11-06 00:27:04.686 [INFO][4746] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--n--3bced53249-k8s-csi--node--driver--4j9vt-eth0 csi-node-driver- calico-system 9c757a1d-95f3-4cbd-9adf-b65065b2eb8c 713 0 2025-11-06 00:26:40 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4459.1.0-n-3bced53249 csi-node-driver-4j9vt eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali45087bb3ec5 [] [] }} ContainerID="147d5a8f76fdb1bba02aff1533d043cc1e392d8aa7c701ba9f92ade7640107b2" Namespace="calico-system" Pod="csi-node-driver-4j9vt" WorkloadEndpoint="ci--4459.1.0--n--3bced53249-k8s-csi--node--driver--4j9vt-" Nov 6 00:27:04.889274 containerd[1686]: 2025-11-06 00:27:04.686 [INFO][4746] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="147d5a8f76fdb1bba02aff1533d043cc1e392d8aa7c701ba9f92ade7640107b2" Namespace="calico-system" Pod="csi-node-driver-4j9vt" WorkloadEndpoint="ci--4459.1.0--n--3bced53249-k8s-csi--node--driver--4j9vt-eth0" Nov 6 00:27:04.889274 containerd[1686]: 2025-11-06 00:27:04.721 [INFO][4774] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="147d5a8f76fdb1bba02aff1533d043cc1e392d8aa7c701ba9f92ade7640107b2" HandleID="k8s-pod-network.147d5a8f76fdb1bba02aff1533d043cc1e392d8aa7c701ba9f92ade7640107b2" Workload="ci--4459.1.0--n--3bced53249-k8s-csi--node--driver--4j9vt-eth0" Nov 6 
00:27:04.889274 containerd[1686]: 2025-11-06 00:27:04.721 [INFO][4774] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="147d5a8f76fdb1bba02aff1533d043cc1e392d8aa7c701ba9f92ade7640107b2" HandleID="k8s-pod-network.147d5a8f76fdb1bba02aff1533d043cc1e392d8aa7c701ba9f92ade7640107b2" Workload="ci--4459.1.0--n--3bced53249-k8s-csi--node--driver--4j9vt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cb7d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.1.0-n-3bced53249", "pod":"csi-node-driver-4j9vt", "timestamp":"2025-11-06 00:27:04.721296316 +0000 UTC"}, Hostname:"ci-4459.1.0-n-3bced53249", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:27:04.889274 containerd[1686]: 2025-11-06 00:27:04.721 [INFO][4774] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:27:04.889274 containerd[1686]: 2025-11-06 00:27:04.749 [INFO][4774] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 6 00:27:04.889274 containerd[1686]: 2025-11-06 00:27:04.749 [INFO][4774] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-n-3bced53249' Nov 6 00:27:04.889274 containerd[1686]: 2025-11-06 00:27:04.826 [INFO][4774] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.147d5a8f76fdb1bba02aff1533d043cc1e392d8aa7c701ba9f92ade7640107b2" host="ci-4459.1.0-n-3bced53249" Nov 6 00:27:04.889274 containerd[1686]: 2025-11-06 00:27:04.830 [INFO][4774] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-n-3bced53249" Nov 6 00:27:04.889274 containerd[1686]: 2025-11-06 00:27:04.834 [INFO][4774] ipam/ipam.go 511: Trying affinity for 192.168.35.0/26 host="ci-4459.1.0-n-3bced53249" Nov 6 00:27:04.889274 containerd[1686]: 2025-11-06 00:27:04.835 [INFO][4774] ipam/ipam.go 158: Attempting to load block cidr=192.168.35.0/26 host="ci-4459.1.0-n-3bced53249" Nov 6 00:27:04.889274 containerd[1686]: 2025-11-06 00:27:04.837 [INFO][4774] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.35.0/26 host="ci-4459.1.0-n-3bced53249" Nov 6 00:27:04.889274 containerd[1686]: 2025-11-06 00:27:04.837 [INFO][4774] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.35.0/26 handle="k8s-pod-network.147d5a8f76fdb1bba02aff1533d043cc1e392d8aa7c701ba9f92ade7640107b2" host="ci-4459.1.0-n-3bced53249" Nov 6 00:27:04.889274 containerd[1686]: 2025-11-06 00:27:04.839 [INFO][4774] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.147d5a8f76fdb1bba02aff1533d043cc1e392d8aa7c701ba9f92ade7640107b2 Nov 6 00:27:04.889274 containerd[1686]: 2025-11-06 00:27:04.857 [INFO][4774] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.35.0/26 handle="k8s-pod-network.147d5a8f76fdb1bba02aff1533d043cc1e392d8aa7c701ba9f92ade7640107b2" host="ci-4459.1.0-n-3bced53249" Nov 6 00:27:04.889274 containerd[1686]: 2025-11-06 00:27:04.866 [INFO][4774] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.35.5/26] block=192.168.35.0/26 handle="k8s-pod-network.147d5a8f76fdb1bba02aff1533d043cc1e392d8aa7c701ba9f92ade7640107b2" host="ci-4459.1.0-n-3bced53249" Nov 6 00:27:04.889274 containerd[1686]: 2025-11-06 00:27:04.866 [INFO][4774] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.35.5/26] handle="k8s-pod-network.147d5a8f76fdb1bba02aff1533d043cc1e392d8aa7c701ba9f92ade7640107b2" host="ci-4459.1.0-n-3bced53249" 
Nov 6 00:27:04.889274 containerd[1686]: 2025-11-06 00:27:04.866 [INFO][4774] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 6 00:27:04.889274 containerd[1686]: 2025-11-06 00:27:04.866 [INFO][4774] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.35.5/26] IPv6=[] ContainerID="147d5a8f76fdb1bba02aff1533d043cc1e392d8aa7c701ba9f92ade7640107b2" HandleID="k8s-pod-network.147d5a8f76fdb1bba02aff1533d043cc1e392d8aa7c701ba9f92ade7640107b2" Workload="ci--4459.1.0--n--3bced53249-k8s-csi--node--driver--4j9vt-eth0" Nov 6 00:27:04.889736 containerd[1686]: 2025-11-06 00:27:04.869 [INFO][4746] cni-plugin/k8s.go 418: Populated endpoint ContainerID="147d5a8f76fdb1bba02aff1533d043cc1e392d8aa7c701ba9f92ade7640107b2" Namespace="calico-system" Pod="csi-node-driver-4j9vt" WorkloadEndpoint="ci--4459.1.0--n--3bced53249-k8s-csi--node--driver--4j9vt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--3bced53249-k8s-csi--node--driver--4j9vt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9c757a1d-95f3-4cbd-9adf-b65065b2eb8c", ResourceVersion:"713", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 26, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-3bced53249", ContainerID:"", Pod:"csi-node-driver-4j9vt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.35.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali45087bb3ec5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:27:04.889736 containerd[1686]: 2025-11-06 00:27:04.869 [INFO][4746] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.35.5/32] ContainerID="147d5a8f76fdb1bba02aff1533d043cc1e392d8aa7c701ba9f92ade7640107b2" Namespace="calico-system" Pod="csi-node-driver-4j9vt" WorkloadEndpoint="ci--4459.1.0--n--3bced53249-k8s-csi--node--driver--4j9vt-eth0" Nov 6 00:27:04.889736 containerd[1686]: 2025-11-06 00:27:04.869 [INFO][4746] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali45087bb3ec5 ContainerID="147d5a8f76fdb1bba02aff1533d043cc1e392d8aa7c701ba9f92ade7640107b2" Namespace="calico-system" Pod="csi-node-driver-4j9vt" WorkloadEndpoint="ci--4459.1.0--n--3bced53249-k8s-csi--node--driver--4j9vt-eth0" Nov 6 00:27:04.889736 containerd[1686]: 2025-11-06 00:27:04.873 [INFO][4746] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="147d5a8f76fdb1bba02aff1533d043cc1e392d8aa7c701ba9f92ade7640107b2" Namespace="calico-system" Pod="csi-node-driver-4j9vt" WorkloadEndpoint="ci--4459.1.0--n--3bced53249-k8s-csi--node--driver--4j9vt-eth0" Nov 6 00:27:04.889736 containerd[1686]: 2025-11-06 00:27:04.873 [INFO][4746] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="147d5a8f76fdb1bba02aff1533d043cc1e392d8aa7c701ba9f92ade7640107b2" Namespace="calico-system" Pod="csi-node-driver-4j9vt" WorkloadEndpoint="ci--4459.1.0--n--3bced53249-k8s-csi--node--driver--4j9vt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--3bced53249-k8s-csi--node--driver--4j9vt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9c757a1d-95f3-4cbd-9adf-b65065b2eb8c", ResourceVersion:"713", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 26, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-3bced53249", ContainerID:"147d5a8f76fdb1bba02aff1533d043cc1e392d8aa7c701ba9f92ade7640107b2", Pod:"csi-node-driver-4j9vt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.35.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali45087bb3ec5", MAC:"02:eb:48:d9:6e:b2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:27:04.889736 containerd[1686]: 2025-11-06 00:27:04.885 [INFO][4746] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="147d5a8f76fdb1bba02aff1533d043cc1e392d8aa7c701ba9f92ade7640107b2" Namespace="calico-system" Pod="csi-node-driver-4j9vt" WorkloadEndpoint="ci--4459.1.0--n--3bced53249-k8s-csi--node--driver--4j9vt-eth0" Nov 6 00:27:04.927967 containerd[1686]: time="2025-11-06T00:27:04.927936550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-92pgk,Uid:e48dba36-29c2-4eef-9ba0-2dd98198c6d2,Namespace:kube-system,Attempt:0,} returns sandbox id \"9dcef4393b40b49cb27d815936f21bf21e97f6cb467e5ee2a911d335379a96eb\"" Nov 6 00:27:04.934671 containerd[1686]: time="2025-11-06T00:27:04.934568855Z" level=info msg="connecting to shim 147d5a8f76fdb1bba02aff1533d043cc1e392d8aa7c701ba9f92ade7640107b2" address="unix:///run/containerd/s/fe45797d2ede9a4146e5290e3006450f507d3cfb390594f1733df57861a53f3c" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:27:04.942365 containerd[1686]: time="2025-11-06T00:27:04.942293351Z" level=info msg="CreateContainer within sandbox \"9dcef4393b40b49cb27d815936f21bf21e97f6cb467e5ee2a911d335379a96eb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 6 00:27:04.955035 systemd[1]: Started cri-containerd-147d5a8f76fdb1bba02aff1533d043cc1e392d8aa7c701ba9f92ade7640107b2.scope - libcontainer container 147d5a8f76fdb1bba02aff1533d043cc1e392d8aa7c701ba9f92ade7640107b2. 
Nov 6 00:27:04.963981 containerd[1686]: time="2025-11-06T00:27:04.963952664Z" level=info msg="Container 308b2b845c9736df4249abc27ffff972c3fddc4b964f39c3161e761fceaf070f: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:27:04.976929 containerd[1686]: time="2025-11-06T00:27:04.976859924Z" level=info msg="CreateContainer within sandbox \"9dcef4393b40b49cb27d815936f21bf21e97f6cb467e5ee2a911d335379a96eb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"308b2b845c9736df4249abc27ffff972c3fddc4b964f39c3161e761fceaf070f\"" Nov 6 00:27:04.978075 containerd[1686]: time="2025-11-06T00:27:04.978053189Z" level=info msg="StartContainer for \"308b2b845c9736df4249abc27ffff972c3fddc4b964f39c3161e761fceaf070f\"" Nov 6 00:27:04.979161 containerd[1686]: time="2025-11-06T00:27:04.978778428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4j9vt,Uid:9c757a1d-95f3-4cbd-9adf-b65065b2eb8c,Namespace:calico-system,Attempt:0,} returns sandbox id \"147d5a8f76fdb1bba02aff1533d043cc1e392d8aa7c701ba9f92ade7640107b2\"" Nov 6 00:27:04.979366 containerd[1686]: time="2025-11-06T00:27:04.979283351Z" level=info msg="connecting to shim 308b2b845c9736df4249abc27ffff972c3fddc4b964f39c3161e761fceaf070f" address="unix:///run/containerd/s/92c7a0d3a00dd453634977bf55a75c8377d92b61f638c0b07c6c4adc8511f0d1" protocol=ttrpc version=3 Nov 6 00:27:04.980147 containerd[1686]: time="2025-11-06T00:27:04.980121544Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 6 00:27:04.997009 systemd[1]: Started cri-containerd-308b2b845c9736df4249abc27ffff972c3fddc4b964f39c3161e761fceaf070f.scope - libcontainer container 308b2b845c9736df4249abc27ffff972c3fddc4b964f39c3161e761fceaf070f. Nov 6 00:27:05.022780 containerd[1686]: time="2025-11-06T00:27:05.022748701Z" level=info msg="StartContainer for \"308b2b845c9736df4249abc27ffff972c3fddc4b964f39c3161e761fceaf070f\" returns successfully" Nov 6 00:27:05.224770 containerd[1686]: time="2025-11-06T00:27:05.224733199Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:27:05.227419 containerd[1686]: time="2025-11-06T00:27:05.227392695Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 6 00:27:05.227608 containerd[1686]: time="2025-11-06T00:27:05.227408532Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 6 00:27:05.227677 kubelet[3152]: E1106 00:27:05.227636 3152 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 00:27:05.228028 kubelet[3152]: E1106 00:27:05.227689 3152 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 00:27:05.228028 kubelet[3152]: E1106 00:27:05.227763 3152 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in 
pod csi-node-driver-4j9vt_calico-system(9c757a1d-95f3-4cbd-9adf-b65065b2eb8c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 6 00:27:05.229980 containerd[1686]: time="2025-11-06T00:27:05.229953950Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 6 00:27:05.475440 containerd[1686]: time="2025-11-06T00:27:05.475320558Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:27:05.477902 containerd[1686]: time="2025-11-06T00:27:05.477845452Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 6 00:27:05.478001 containerd[1686]: time="2025-11-06T00:27:05.477863560Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 6 00:27:05.478181 kubelet[3152]: E1106 00:27:05.478143 3152 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 00:27:05.478228 kubelet[3152]: E1106 00:27:05.478192 3152 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 00:27:05.478297 kubelet[3152]: E1106 00:27:05.478270 3152 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-4j9vt_calico-system(9c757a1d-95f3-4cbd-9adf-b65065b2eb8c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 6 00:27:05.478425 kubelet[3152]: E1106 00:27:05.478323 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not 
found\"]" pod="calico-system/csi-node-driver-4j9vt" podUID="9c757a1d-95f3-4cbd-9adf-b65065b2eb8c" Nov 6 00:27:05.631759 containerd[1686]: time="2025-11-06T00:27:05.631721755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8559b785f9-fhvmw,Uid:c86a3ffd-cf2f-4e08-9736-f5e39ae366f1,Namespace:calico-apiserver,Attempt:0,}" Nov 6 00:27:05.715551 systemd-networkd[1488]: calie4421d19dd9: Link UP Nov 6 00:27:05.715738 systemd-networkd[1488]: calie4421d19dd9: Gained carrier Nov 6 00:27:05.728972 systemd-networkd[1488]: calic3fbb47751d: Gained IPv6LL Nov 6 00:27:05.734163 containerd[1686]: 2025-11-06 00:27:05.662 [INFO][4930] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--n--3bced53249-k8s-calico--apiserver--8559b785f9--fhvmw-eth0 calico-apiserver-8559b785f9- calico-apiserver c86a3ffd-cf2f-4e08-9736-f5e39ae366f1 823 0 2025-11-06 00:26:36 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8559b785f9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.1.0-n-3bced53249 calico-apiserver-8559b785f9-fhvmw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie4421d19dd9 [] [] }} ContainerID="a688f8cf4b43c8e5bb2b66b1b73ca048cfa119c8c6d119387fb17fbab6a8112d" Namespace="calico-apiserver" Pod="calico-apiserver-8559b785f9-fhvmw" WorkloadEndpoint="ci--4459.1.0--n--3bced53249-k8s-calico--apiserver--8559b785f9--fhvmw-" Nov 6 00:27:05.734163 containerd[1686]: 2025-11-06 00:27:05.662 [INFO][4930] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a688f8cf4b43c8e5bb2b66b1b73ca048cfa119c8c6d119387fb17fbab6a8112d" Namespace="calico-apiserver" Pod="calico-apiserver-8559b785f9-fhvmw" WorkloadEndpoint="ci--4459.1.0--n--3bced53249-k8s-calico--apiserver--8559b785f9--fhvmw-eth0" Nov 6 00:27:05.734163 containerd[1686]: 2025-11-06 00:27:05.682 [INFO][4942] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a688f8cf4b43c8e5bb2b66b1b73ca048cfa119c8c6d119387fb17fbab6a8112d" HandleID="k8s-pod-network.a688f8cf4b43c8e5bb2b66b1b73ca048cfa119c8c6d119387fb17fbab6a8112d" Workload="ci--4459.1.0--n--3bced53249-k8s-calico--apiserver--8559b785f9--fhvmw-eth0" Nov 6 00:27:05.734163 containerd[1686]: 2025-11-06 00:27:05.682 [INFO][4942] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a688f8cf4b43c8e5bb2b66b1b73ca048cfa119c8c6d119387fb17fbab6a8112d" HandleID="k8s-pod-network.a688f8cf4b43c8e5bb2b66b1b73ca048cfa119c8c6d119387fb17fbab6a8112d" Workload="ci--4459.1.0--n--3bced53249-k8s-calico--apiserver--8559b785f9--fhvmw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ccfe0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.1.0-n-3bced53249", "pod":"calico-apiserver-8559b785f9-fhvmw", "timestamp":"2025-11-06 00:27:05.682519432 +0000 UTC"}, Hostname:"ci-4459.1.0-n-3bced53249", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:27:05.734163 containerd[1686]: 2025-11-06 00:27:05.682 [INFO][4942] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 6 00:27:05.734163 containerd[1686]: 2025-11-06 00:27:05.682 [INFO][4942] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 6 00:27:05.734163 containerd[1686]: 2025-11-06 00:27:05.682 [INFO][4942] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-n-3bced53249' Nov 6 00:27:05.734163 containerd[1686]: 2025-11-06 00:27:05.687 [INFO][4942] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a688f8cf4b43c8e5bb2b66b1b73ca048cfa119c8c6d119387fb17fbab6a8112d" host="ci-4459.1.0-n-3bced53249" Nov 6 00:27:05.734163 containerd[1686]: 2025-11-06 00:27:05.691 [INFO][4942] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-n-3bced53249" Nov 6 00:27:05.734163 containerd[1686]: 2025-11-06 00:27:05.694 [INFO][4942] ipam/ipam.go 511: Trying affinity for 192.168.35.0/26 host="ci-4459.1.0-n-3bced53249" Nov 6 00:27:05.734163 containerd[1686]: 2025-11-06 00:27:05.695 [INFO][4942] ipam/ipam.go 158: Attempting to load block cidr=192.168.35.0/26 host="ci-4459.1.0-n-3bced53249" Nov 6 00:27:05.734163 containerd[1686]: 2025-11-06 00:27:05.697 [INFO][4942] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.35.0/26 host="ci-4459.1.0-n-3bced53249" Nov 6 00:27:05.734163 containerd[1686]: 2025-11-06 00:27:05.697 [INFO][4942] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.35.0/26 handle="k8s-pod-network.a688f8cf4b43c8e5bb2b66b1b73ca048cfa119c8c6d119387fb17fbab6a8112d" host="ci-4459.1.0-n-3bced53249" Nov 6 00:27:05.734163 containerd[1686]: 2025-11-06 00:27:05.698 [INFO][4942] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a688f8cf4b43c8e5bb2b66b1b73ca048cfa119c8c6d119387fb17fbab6a8112d Nov 6 00:27:05.734163 containerd[1686]: 2025-11-06 00:27:05.701 [INFO][4942] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.35.0/26 handle="k8s-pod-network.a688f8cf4b43c8e5bb2b66b1b73ca048cfa119c8c6d119387fb17fbab6a8112d" host="ci-4459.1.0-n-3bced53249" Nov 6 00:27:05.734163 containerd[1686]: 2025-11-06 00:27:05.710 [INFO][4942] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.35.6/26] block=192.168.35.0/26 handle="k8s-pod-network.a688f8cf4b43c8e5bb2b66b1b73ca048cfa119c8c6d119387fb17fbab6a8112d" host="ci-4459.1.0-n-3bced53249" Nov 6 00:27:05.734163 containerd[1686]: 2025-11-06 00:27:05.710 [INFO][4942] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.35.6/26] handle="k8s-pod-network.a688f8cf4b43c8e5bb2b66b1b73ca048cfa119c8c6d119387fb17fbab6a8112d" host="ci-4459.1.0-n-3bced53249" Nov 6 00:27:05.734163 containerd[1686]: 2025-11-06 00:27:05.710 [INFO][4942] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 6 00:27:05.734163 containerd[1686]: 2025-11-06 00:27:05.710 [INFO][4942] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.35.6/26] IPv6=[] ContainerID="a688f8cf4b43c8e5bb2b66b1b73ca048cfa119c8c6d119387fb17fbab6a8112d" HandleID="k8s-pod-network.a688f8cf4b43c8e5bb2b66b1b73ca048cfa119c8c6d119387fb17fbab6a8112d" Workload="ci--4459.1.0--n--3bced53249-k8s-calico--apiserver--8559b785f9--fhvmw-eth0" Nov 6 00:27:05.735399 containerd[1686]: 2025-11-06 00:27:05.711 [INFO][4930] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a688f8cf4b43c8e5bb2b66b1b73ca048cfa119c8c6d119387fb17fbab6a8112d" Namespace="calico-apiserver" Pod="calico-apiserver-8559b785f9-fhvmw" WorkloadEndpoint="ci--4459.1.0--n--3bced53249-k8s-calico--apiserver--8559b785f9--fhvmw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--3bced53249-k8s-calico--apiserver--8559b785f9--fhvmw-eth0", GenerateName:"calico-apiserver-8559b785f9-", Namespace:"calico-apiserver", SelfLink:"", UID:"c86a3ffd-cf2f-4e08-9736-f5e39ae366f1", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 26, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8559b785f9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-3bced53249", ContainerID:"", Pod:"calico-apiserver-8559b785f9-fhvmw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.35.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie4421d19dd9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:27:05.735399 containerd[1686]: 2025-11-06 00:27:05.711 [INFO][4930] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.35.6/32] ContainerID="a688f8cf4b43c8e5bb2b66b1b73ca048cfa119c8c6d119387fb17fbab6a8112d" Namespace="calico-apiserver" Pod="calico-apiserver-8559b785f9-fhvmw" WorkloadEndpoint="ci--4459.1.0--n--3bced53249-k8s-calico--apiserver--8559b785f9--fhvmw-eth0" Nov 6 00:27:05.735399 containerd[1686]: 2025-11-06 00:27:05.711 [INFO][4930] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie4421d19dd9 ContainerID="a688f8cf4b43c8e5bb2b66b1b73ca048cfa119c8c6d119387fb17fbab6a8112d" Namespace="calico-apiserver" Pod="calico-apiserver-8559b785f9-fhvmw" WorkloadEndpoint="ci--4459.1.0--n--3bced53249-k8s-calico--apiserver--8559b785f9--fhvmw-eth0" Nov 6 00:27:05.735399 containerd[1686]: 2025-11-06 00:27:05.718 [INFO][4930] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a688f8cf4b43c8e5bb2b66b1b73ca048cfa119c8c6d119387fb17fbab6a8112d" Namespace="calico-apiserver" Pod="calico-apiserver-8559b785f9-fhvmw" WorkloadEndpoint="ci--4459.1.0--n--3bced53249-k8s-calico--apiserver--8559b785f9--fhvmw-eth0" Nov 6 00:27:05.735399 containerd[1686]: 2025-11-06 00:27:05.721 [INFO][4930] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a688f8cf4b43c8e5bb2b66b1b73ca048cfa119c8c6d119387fb17fbab6a8112d" Namespace="calico-apiserver" Pod="calico-apiserver-8559b785f9-fhvmw" WorkloadEndpoint="ci--4459.1.0--n--3bced53249-k8s-calico--apiserver--8559b785f9--fhvmw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--3bced53249-k8s-calico--apiserver--8559b785f9--fhvmw-eth0", GenerateName:"calico-apiserver-8559b785f9-", Namespace:"calico-apiserver", SelfLink:"", UID:"c86a3ffd-cf2f-4e08-9736-f5e39ae366f1", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 26, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8559b785f9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-3bced53249", ContainerID:"a688f8cf4b43c8e5bb2b66b1b73ca048cfa119c8c6d119387fb17fbab6a8112d", Pod:"calico-apiserver-8559b785f9-fhvmw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.35.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie4421d19dd9", MAC:"fe:fe:2f:9f:7f:9b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:27:05.735399 containerd[1686]: 2025-11-06 00:27:05.731 [INFO][4930] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a688f8cf4b43c8e5bb2b66b1b73ca048cfa119c8c6d119387fb17fbab6a8112d" Namespace="calico-apiserver" Pod="calico-apiserver-8559b785f9-fhvmw" WorkloadEndpoint="ci--4459.1.0--n--3bced53249-k8s-calico--apiserver--8559b785f9--fhvmw-eth0" Nov 6 00:27:05.769311 containerd[1686]: time="2025-11-06T00:27:05.769279046Z" level=info msg="connecting to shim a688f8cf4b43c8e5bb2b66b1b73ca048cfa119c8c6d119387fb17fbab6a8112d" address="unix:///run/containerd/s/6a126572c6ae8ffeccc89956252af844d18a32a008fec57bd64b6cd153c7afb2" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:27:05.787142 systemd[1]: Started cri-containerd-a688f8cf4b43c8e5bb2b66b1b73ca048cfa119c8c6d119387fb17fbab6a8112d.scope - libcontainer container a688f8cf4b43c8e5bb2b66b1b73ca048cfa119c8c6d119387fb17fbab6a8112d. 
Nov 6 00:27:05.794691 kubelet[3152]: E1106 00:27:05.794653 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4j9vt" podUID="9c757a1d-95f3-4cbd-9adf-b65065b2eb8c" Nov 6 00:27:05.797352 kubelet[3152]: E1106 00:27:05.797329 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-cdd9bb7bc-2b49s" podUID="62834f18-0344-4626-bcdf-b650cdc6187d" Nov 6 00:27:05.844219 kubelet[3152]: I1106 00:27:05.844145 3152 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-92pgk" podStartSLOduration=39.844130104 podStartE2EDuration="39.844130104s" podCreationTimestamp="2025-11-06 00:26:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:27:05.823985237 +0000 UTC m=+45.285212616" watchObservedRunningTime="2025-11-06 00:27:05.844130104 +0000 UTC m=+45.305357485" Nov 6 00:27:05.872895 containerd[1686]: time="2025-11-06T00:27:05.872621123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8559b785f9-fhvmw,Uid:c86a3ffd-cf2f-4e08-9736-f5e39ae366f1,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a688f8cf4b43c8e5bb2b66b1b73ca048cfa119c8c6d119387fb17fbab6a8112d\"" Nov 6 00:27:05.873927 containerd[1686]: time="2025-11-06T00:27:05.873832003Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:27:06.116646 containerd[1686]: time="2025-11-06T00:27:06.116510155Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:27:06.119036 containerd[1686]: time="2025-11-06T00:27:06.118997080Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:27:06.119152 containerd[1686]: time="2025-11-06T00:27:06.119004333Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 
00:27:06.119273 kubelet[3152]: E1106 00:27:06.119228 3152 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:27:06.119314 kubelet[3152]: E1106 00:27:06.119275 3152 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:27:06.119384 kubelet[3152]: E1106 00:27:06.119356 3152 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-8559b785f9-fhvmw_calico-apiserver(c86a3ffd-cf2f-4e08-9736-f5e39ae366f1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:27:06.119418 kubelet[3152]: E1106 00:27:06.119402 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8559b785f9-fhvmw" podUID="c86a3ffd-cf2f-4e08-9736-f5e39ae366f1" Nov 6 00:27:06.305142 systemd-networkd[1488]: cali45087bb3ec5: Gained IPv6LL Nov 6 00:27:06.497021 systemd-networkd[1488]: cali60c6df76462: Gained IPv6LL Nov 6 00:27:06.634348 containerd[1686]: time="2025-11-06T00:27:06.634125142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-fhgxb,Uid:fbb02403-83ba-4851-9c6d-3c3f92019d78,Namespace:kube-system,Attempt:0,}" Nov 6 00:27:06.638613 containerd[1686]: time="2025-11-06T00:27:06.638572282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-2k7xr,Uid:4902c4e4-3977-4e0d-b87b-89acc6926de6,Namespace:calico-system,Attempt:0,}" Nov 6 00:27:06.654555 containerd[1686]: time="2025-11-06T00:27:06.654528610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6546975659-nxnh9,Uid:797f66e1-c3e4-4d4b-8032-18c5d22ec25c,Namespace:calico-apiserver,Attempt:0,}" Nov 6 00:27:06.807940 kubelet[3152]: E1106 00:27:06.807497 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8559b785f9-fhvmw" podUID="c86a3ffd-cf2f-4e08-9736-f5e39ae366f1" Nov 6 00:27:06.811158 kubelet[3152]: E1106 00:27:06.810774 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" 
for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4j9vt" podUID="9c757a1d-95f3-4cbd-9adf-b65065b2eb8c" Nov 6 00:27:06.887159 systemd-networkd[1488]: calidaa17cd4382: Link UP Nov 6 00:27:06.888242 systemd-networkd[1488]: calidaa17cd4382: Gained carrier Nov 6 00:27:06.901277 containerd[1686]: 2025-11-06 00:27:06.752 [INFO][5014] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--n--3bced53249-k8s-coredns--66bc5c9577--fhgxb-eth0 coredns-66bc5c9577- kube-system fbb02403-83ba-4851-9c6d-3c3f92019d78 829 0 2025-11-06 00:26:26 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.1.0-n-3bced53249 coredns-66bc5c9577-fhgxb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calidaa17cd4382 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="a88044eb804014f3cef6d84282627c31c5c94ce599c91c869159264aeba4e4c2" Namespace="kube-system" Pod="coredns-66bc5c9577-fhgxb" WorkloadEndpoint="ci--4459.1.0--n--3bced53249-k8s-coredns--66bc5c9577--fhgxb-" Nov 6 00:27:06.901277 containerd[1686]: 2025-11-06 00:27:06.753 [INFO][5014] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a88044eb804014f3cef6d84282627c31c5c94ce599c91c869159264aeba4e4c2" Namespace="kube-system" Pod="coredns-66bc5c9577-fhgxb" WorkloadEndpoint="ci--4459.1.0--n--3bced53249-k8s-coredns--66bc5c9577--fhgxb-eth0" Nov 6 00:27:06.901277 containerd[1686]: 2025-11-06 00:27:06.827 [INFO][5051] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a88044eb804014f3cef6d84282627c31c5c94ce599c91c869159264aeba4e4c2" HandleID="k8s-pod-network.a88044eb804014f3cef6d84282627c31c5c94ce599c91c869159264aeba4e4c2" Workload="ci--4459.1.0--n--3bced53249-k8s-coredns--66bc5c9577--fhgxb-eth0" Nov 6 00:27:06.901277 containerd[1686]: 2025-11-06 00:27:06.827 [INFO][5051] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a88044eb804014f3cef6d84282627c31c5c94ce599c91c869159264aeba4e4c2" HandleID="k8s-pod-network.a88044eb804014f3cef6d84282627c31c5c94ce599c91c869159264aeba4e4c2" Workload="ci--4459.1.0--n--3bced53249-k8s-coredns--66bc5c9577--fhgxb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f0e0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.1.0-n-3bced53249", "pod":"coredns-66bc5c9577-fhgxb", "timestamp":"2025-11-06 00:27:06.827140886 +0000 UTC"}, Hostname:"ci-4459.1.0-n-3bced53249", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:27:06.901277 containerd[1686]: 2025-11-06 00:27:06.827 [INFO][5051] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:27:06.901277 containerd[1686]: 2025-11-06 00:27:06.827 [INFO][5051] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 6 00:27:06.901277 containerd[1686]: 2025-11-06 00:27:06.827 [INFO][5051] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-n-3bced53249' Nov 6 00:27:06.901277 containerd[1686]: 2025-11-06 00:27:06.847 [INFO][5051] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a88044eb804014f3cef6d84282627c31c5c94ce599c91c869159264aeba4e4c2" host="ci-4459.1.0-n-3bced53249" Nov 6 00:27:06.901277 containerd[1686]: 2025-11-06 00:27:06.855 [INFO][5051] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-n-3bced53249" Nov 6 00:27:06.901277 containerd[1686]: 2025-11-06 00:27:06.860 [INFO][5051] ipam/ipam.go 511: Trying affinity for 192.168.35.0/26 host="ci-4459.1.0-n-3bced53249" Nov 6 00:27:06.901277 containerd[1686]: 2025-11-06 00:27:06.862 [INFO][5051] ipam/ipam.go 158: Attempting to load block cidr=192.168.35.0/26 host="ci-4459.1.0-n-3bced53249" Nov 6 00:27:06.901277 containerd[1686]: 2025-11-06 00:27:06.863 [INFO][5051] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.35.0/26 host="ci-4459.1.0-n-3bced53249" Nov 6 00:27:06.901277 containerd[1686]: 2025-11-06 00:27:06.863 [INFO][5051] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.35.0/26 handle="k8s-pod-network.a88044eb804014f3cef6d84282627c31c5c94ce599c91c869159264aeba4e4c2" host="ci-4459.1.0-n-3bced53249" Nov 6 00:27:06.901277 containerd[1686]: 2025-11-06 00:27:06.864 [INFO][5051] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a88044eb804014f3cef6d84282627c31c5c94ce599c91c869159264aeba4e4c2 Nov 6 00:27:06.901277 containerd[1686]: 2025-11-06 00:27:06.869 [INFO][5051] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.35.0/26 handle="k8s-pod-network.a88044eb804014f3cef6d84282627c31c5c94ce599c91c869159264aeba4e4c2" host="ci-4459.1.0-n-3bced53249" Nov 6 00:27:06.901277 containerd[1686]: 2025-11-06 00:27:06.877 [INFO][5051] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.35.7/26] block=192.168.35.0/26 handle="k8s-pod-network.a88044eb804014f3cef6d84282627c31c5c94ce599c91c869159264aeba4e4c2" host="ci-4459.1.0-n-3bced53249" Nov 6 00:27:06.901277 containerd[1686]: 2025-11-06 00:27:06.877 [INFO][5051] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.35.7/26] handle="k8s-pod-network.a88044eb804014f3cef6d84282627c31c5c94ce599c91c869159264aeba4e4c2" host="ci-4459.1.0-n-3bced53249" Nov 6 00:27:06.901277 containerd[1686]: 2025-11-06 00:27:06.879 [INFO][5051] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 6 00:27:06.901277 containerd[1686]: 2025-11-06 00:27:06.879 [INFO][5051] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.35.7/26] IPv6=[] ContainerID="a88044eb804014f3cef6d84282627c31c5c94ce599c91c869159264aeba4e4c2" HandleID="k8s-pod-network.a88044eb804014f3cef6d84282627c31c5c94ce599c91c869159264aeba4e4c2" Workload="ci--4459.1.0--n--3bced53249-k8s-coredns--66bc5c9577--fhgxb-eth0" Nov 6 00:27:06.901992 containerd[1686]: 2025-11-06 00:27:06.882 [INFO][5014] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a88044eb804014f3cef6d84282627c31c5c94ce599c91c869159264aeba4e4c2" Namespace="kube-system" Pod="coredns-66bc5c9577-fhgxb" WorkloadEndpoint="ci--4459.1.0--n--3bced53249-k8s-coredns--66bc5c9577--fhgxb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--3bced53249-k8s-coredns--66bc5c9577--fhgxb-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"fbb02403-83ba-4851-9c6d-3c3f92019d78", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 26, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-3bced53249", ContainerID:"", Pod:"coredns-66bc5c9577-fhgxb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.35.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidaa17cd4382", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:27:06.901992 containerd[1686]: 2025-11-06 00:27:06.882 [INFO][5014] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.35.7/32] ContainerID="a88044eb804014f3cef6d84282627c31c5c94ce599c91c869159264aeba4e4c2" Namespace="kube-system" Pod="coredns-66bc5c9577-fhgxb" WorkloadEndpoint="ci--4459.1.0--n--3bced53249-k8s-coredns--66bc5c9577--fhgxb-eth0" Nov 6 00:27:06.901992 containerd[1686]: 2025-11-06 00:27:06.882 [INFO][5014] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidaa17cd4382 ContainerID="a88044eb804014f3cef6d84282627c31c5c94ce599c91c869159264aeba4e4c2" Namespace="kube-system" Pod="coredns-66bc5c9577-fhgxb" 
WorkloadEndpoint="ci--4459.1.0--n--3bced53249-k8s-coredns--66bc5c9577--fhgxb-eth0" Nov 6 00:27:06.901992 containerd[1686]: 2025-11-06 00:27:06.888 [INFO][5014] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a88044eb804014f3cef6d84282627c31c5c94ce599c91c869159264aeba4e4c2" Namespace="kube-system" Pod="coredns-66bc5c9577-fhgxb" WorkloadEndpoint="ci--4459.1.0--n--3bced53249-k8s-coredns--66bc5c9577--fhgxb-eth0" Nov 6 00:27:06.901992 containerd[1686]: 2025-11-06 00:27:06.889 [INFO][5014] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a88044eb804014f3cef6d84282627c31c5c94ce599c91c869159264aeba4e4c2" Namespace="kube-system" Pod="coredns-66bc5c9577-fhgxb" WorkloadEndpoint="ci--4459.1.0--n--3bced53249-k8s-coredns--66bc5c9577--fhgxb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--3bced53249-k8s-coredns--66bc5c9577--fhgxb-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"fbb02403-83ba-4851-9c6d-3c3f92019d78", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 26, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-3bced53249", ContainerID:"a88044eb804014f3cef6d84282627c31c5c94ce599c91c869159264aeba4e4c2", Pod:"coredns-66bc5c9577-fhgxb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.35.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidaa17cd4382", MAC:"72:90:6b:34:81:11", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:27:06.903173 containerd[1686]: 2025-11-06 00:27:06.898 [INFO][5014] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a88044eb804014f3cef6d84282627c31c5c94ce599c91c869159264aeba4e4c2" Namespace="kube-system" Pod="coredns-66bc5c9577-fhgxb" WorkloadEndpoint="ci--4459.1.0--n--3bced53249-k8s-coredns--66bc5c9577--fhgxb-eth0" Nov 6 00:27:06.956903 containerd[1686]: time="2025-11-06T00:27:06.956445192Z" level=info msg="connecting to shim a88044eb804014f3cef6d84282627c31c5c94ce599c91c869159264aeba4e4c2" 
address="unix:///run/containerd/s/a809ac57ba757d75c8fb1deee7e6aa8ba6c045b05beb7eeb7f1698682d8d4002" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:27:06.988191 systemd[1]: Started cri-containerd-a88044eb804014f3cef6d84282627c31c5c94ce599c91c869159264aeba4e4c2.scope - libcontainer container a88044eb804014f3cef6d84282627c31c5c94ce599c91c869159264aeba4e4c2. Nov 6 00:27:06.992730 systemd-networkd[1488]: cali4a0a773f779: Link UP Nov 6 00:27:06.995392 systemd-networkd[1488]: cali4a0a773f779: Gained carrier Nov 6 00:27:07.017503 containerd[1686]: 2025-11-06 00:27:06.767 [INFO][5026] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--n--3bced53249-k8s-goldmane--7c778bb748--2k7xr-eth0 goldmane-7c778bb748- calico-system 4902c4e4-3977-4e0d-b87b-89acc6926de6 826 0 2025-11-06 00:26:38 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4459.1.0-n-3bced53249 goldmane-7c778bb748-2k7xr eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali4a0a773f779 [] [] }} ContainerID="eb37cd3ca6f6902e68d0d0e96f2f1d9a368522e4b4fdea3afb9868183bc008d4" Namespace="calico-system" Pod="goldmane-7c778bb748-2k7xr" WorkloadEndpoint="ci--4459.1.0--n--3bced53249-k8s-goldmane--7c778bb748--2k7xr-" Nov 6 00:27:07.017503 containerd[1686]: 2025-11-06 00:27:06.767 [INFO][5026] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="eb37cd3ca6f6902e68d0d0e96f2f1d9a368522e4b4fdea3afb9868183bc008d4" Namespace="calico-system" Pod="goldmane-7c778bb748-2k7xr" WorkloadEndpoint="ci--4459.1.0--n--3bced53249-k8s-goldmane--7c778bb748--2k7xr-eth0" Nov 6 00:27:07.017503 containerd[1686]: 2025-11-06 00:27:06.839 [INFO][5057] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eb37cd3ca6f6902e68d0d0e96f2f1d9a368522e4b4fdea3afb9868183bc008d4" HandleID="k8s-pod-network.eb37cd3ca6f6902e68d0d0e96f2f1d9a368522e4b4fdea3afb9868183bc008d4" Workload="ci--4459.1.0--n--3bced53249-k8s-goldmane--7c778bb748--2k7xr-eth0" Nov 6 00:27:07.017503 containerd[1686]: 2025-11-06 00:27:06.841 [INFO][5057] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="eb37cd3ca6f6902e68d0d0e96f2f1d9a368522e4b4fdea3afb9868183bc008d4" HandleID="k8s-pod-network.eb37cd3ca6f6902e68d0d0e96f2f1d9a368522e4b4fdea3afb9868183bc008d4" Workload="ci--4459.1.0--n--3bced53249-k8s-goldmane--7c778bb748--2k7xr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cb980), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.1.0-n-3bced53249", "pod":"goldmane-7c778bb748-2k7xr", "timestamp":"2025-11-06 00:27:06.839497203 +0000 UTC"}, Hostname:"ci-4459.1.0-n-3bced53249", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:27:07.017503 containerd[1686]: 2025-11-06 00:27:06.841 [INFO][5057] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:27:07.017503 containerd[1686]: 2025-11-06 00:27:06.878 [INFO][5057] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 6 00:27:07.017503 containerd[1686]: 2025-11-06 00:27:06.878 [INFO][5057] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-n-3bced53249' Nov 6 00:27:07.017503 containerd[1686]: 2025-11-06 00:27:06.948 [INFO][5057] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.eb37cd3ca6f6902e68d0d0e96f2f1d9a368522e4b4fdea3afb9868183bc008d4" host="ci-4459.1.0-n-3bced53249" Nov 6 00:27:07.017503 containerd[1686]: 2025-11-06 00:27:06.955 [INFO][5057] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-n-3bced53249" Nov 6 00:27:07.017503 containerd[1686]: 2025-11-06 00:27:06.962 [INFO][5057] ipam/ipam.go 511: Trying affinity for 192.168.35.0/26 host="ci-4459.1.0-n-3bced53249" Nov 6 00:27:07.017503 containerd[1686]: 2025-11-06 00:27:06.966 [INFO][5057] ipam/ipam.go 158: Attempting to load block cidr=192.168.35.0/26 host="ci-4459.1.0-n-3bced53249" Nov 6 00:27:07.017503 containerd[1686]: 2025-11-06 00:27:06.970 [INFO][5057] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.35.0/26 host="ci-4459.1.0-n-3bced53249" Nov 6 00:27:07.017503 containerd[1686]: 2025-11-06 00:27:06.971 [INFO][5057] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.35.0/26 handle="k8s-pod-network.eb37cd3ca6f6902e68d0d0e96f2f1d9a368522e4b4fdea3afb9868183bc008d4" host="ci-4459.1.0-n-3bced53249" Nov 6 00:27:07.017503 containerd[1686]: 2025-11-06 00:27:06.972 [INFO][5057] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.eb37cd3ca6f6902e68d0d0e96f2f1d9a368522e4b4fdea3afb9868183bc008d4 Nov 6 00:27:07.017503 containerd[1686]: 2025-11-06 00:27:06.976 [INFO][5057] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.35.0/26 handle="k8s-pod-network.eb37cd3ca6f6902e68d0d0e96f2f1d9a368522e4b4fdea3afb9868183bc008d4" host="ci-4459.1.0-n-3bced53249" Nov 6 00:27:07.017503 containerd[1686]: 2025-11-06 00:27:06.986 [INFO][5057] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.35.8/26] block=192.168.35.0/26 handle="k8s-pod-network.eb37cd3ca6f6902e68d0d0e96f2f1d9a368522e4b4fdea3afb9868183bc008d4" host="ci-4459.1.0-n-3bced53249" Nov 6 00:27:07.017503 containerd[1686]: 2025-11-06 00:27:06.986 [INFO][5057] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.35.8/26] handle="k8s-pod-network.eb37cd3ca6f6902e68d0d0e96f2f1d9a368522e4b4fdea3afb9868183bc008d4" host="ci-4459.1.0-n-3bced53249" Nov 6 00:27:07.017503 containerd[1686]: 2025-11-06 00:27:06.986 [INFO][5057] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 6 00:27:07.017503 containerd[1686]: 2025-11-06 00:27:06.986 [INFO][5057] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.35.8/26] IPv6=[] ContainerID="eb37cd3ca6f6902e68d0d0e96f2f1d9a368522e4b4fdea3afb9868183bc008d4" HandleID="k8s-pod-network.eb37cd3ca6f6902e68d0d0e96f2f1d9a368522e4b4fdea3afb9868183bc008d4" Workload="ci--4459.1.0--n--3bced53249-k8s-goldmane--7c778bb748--2k7xr-eth0" Nov 6 00:27:07.018033 containerd[1686]: 2025-11-06 00:27:06.988 [INFO][5026] cni-plugin/k8s.go 418: Populated endpoint ContainerID="eb37cd3ca6f6902e68d0d0e96f2f1d9a368522e4b4fdea3afb9868183bc008d4" Namespace="calico-system" Pod="goldmane-7c778bb748-2k7xr" WorkloadEndpoint="ci--4459.1.0--n--3bced53249-k8s-goldmane--7c778bb748--2k7xr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--3bced53249-k8s-goldmane--7c778bb748--2k7xr-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"4902c4e4-3977-4e0d-b87b-89acc6926de6", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 26, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-3bced53249", ContainerID:"", Pod:"goldmane-7c778bb748-2k7xr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.35.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4a0a773f779", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:27:07.018033 containerd[1686]: 2025-11-06 00:27:06.988 [INFO][5026] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.35.8/32] ContainerID="eb37cd3ca6f6902e68d0d0e96f2f1d9a368522e4b4fdea3afb9868183bc008d4" Namespace="calico-system" Pod="goldmane-7c778bb748-2k7xr" WorkloadEndpoint="ci--4459.1.0--n--3bced53249-k8s-goldmane--7c778bb748--2k7xr-eth0" Nov 6 00:27:07.018033 containerd[1686]: 2025-11-06 00:27:06.988 [INFO][5026] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4a0a773f779 ContainerID="eb37cd3ca6f6902e68d0d0e96f2f1d9a368522e4b4fdea3afb9868183bc008d4" Namespace="calico-system" Pod="goldmane-7c778bb748-2k7xr" WorkloadEndpoint="ci--4459.1.0--n--3bced53249-k8s-goldmane--7c778bb748--2k7xr-eth0" Nov 6 00:27:07.018033 containerd[1686]: 2025-11-06 00:27:06.993 [INFO][5026] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eb37cd3ca6f6902e68d0d0e96f2f1d9a368522e4b4fdea3afb9868183bc008d4" Namespace="calico-system" Pod="goldmane-7c778bb748-2k7xr" WorkloadEndpoint="ci--4459.1.0--n--3bced53249-k8s-goldmane--7c778bb748--2k7xr-eth0" Nov 6 00:27:07.018033 containerd[1686]: 2025-11-06 00:27:06.993 [INFO][5026] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="eb37cd3ca6f6902e68d0d0e96f2f1d9a368522e4b4fdea3afb9868183bc008d4" Namespace="calico-system" 
Pod="goldmane-7c778bb748-2k7xr" WorkloadEndpoint="ci--4459.1.0--n--3bced53249-k8s-goldmane--7c778bb748--2k7xr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--3bced53249-k8s-goldmane--7c778bb748--2k7xr-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"4902c4e4-3977-4e0d-b87b-89acc6926de6", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 26, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-3bced53249", ContainerID:"eb37cd3ca6f6902e68d0d0e96f2f1d9a368522e4b4fdea3afb9868183bc008d4", Pod:"goldmane-7c778bb748-2k7xr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.35.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4a0a773f779", MAC:"46:e6:3d:37:c6:41", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:27:07.018033 containerd[1686]: 2025-11-06 00:27:07.013 [INFO][5026] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="eb37cd3ca6f6902e68d0d0e96f2f1d9a368522e4b4fdea3afb9868183bc008d4" Namespace="calico-system" Pod="goldmane-7c778bb748-2k7xr" WorkloadEndpoint="ci--4459.1.0--n--3bced53249-k8s-goldmane--7c778bb748--2k7xr-eth0" Nov 6 00:27:07.058269 containerd[1686]: time="2025-11-06T00:27:07.058015162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-fhgxb,Uid:fbb02403-83ba-4851-9c6d-3c3f92019d78,Namespace:kube-system,Attempt:0,} returns sandbox id \"a88044eb804014f3cef6d84282627c31c5c94ce599c91c869159264aeba4e4c2\"" Nov 6 00:27:07.066325 containerd[1686]: time="2025-11-06T00:27:07.066263674Z" level=info msg="connecting to shim eb37cd3ca6f6902e68d0d0e96f2f1d9a368522e4b4fdea3afb9868183bc008d4" address="unix:///run/containerd/s/92619615573e7df74e459584e2fe066072f86c750056eae18cefd56806acef73" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:27:07.069192 containerd[1686]: time="2025-11-06T00:27:07.069088300Z" level=info msg="CreateContainer within sandbox \"a88044eb804014f3cef6d84282627c31c5c94ce599c91c869159264aeba4e4c2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 6 00:27:07.090622 containerd[1686]: time="2025-11-06T00:27:07.090439076Z" level=info msg="Container d8c8591bb513a864bd28c95282bbed9970a6d9582b86bb47d88e988a42d14d37: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:27:07.092205 systemd[1]: Started cri-containerd-eb37cd3ca6f6902e68d0d0e96f2f1d9a368522e4b4fdea3afb9868183bc008d4.scope - libcontainer container eb37cd3ca6f6902e68d0d0e96f2f1d9a368522e4b4fdea3afb9868183bc008d4. 
Nov 6 00:27:07.104348 containerd[1686]: time="2025-11-06T00:27:07.104319703Z" level=info msg="CreateContainer within sandbox \"a88044eb804014f3cef6d84282627c31c5c94ce599c91c869159264aeba4e4c2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d8c8591bb513a864bd28c95282bbed9970a6d9582b86bb47d88e988a42d14d37\"" Nov 6 00:27:07.105723 systemd-networkd[1488]: calieb7a681a864: Link UP Nov 6 00:27:07.106832 containerd[1686]: time="2025-11-06T00:27:07.106813744Z" level=info msg="StartContainer for \"d8c8591bb513a864bd28c95282bbed9970a6d9582b86bb47d88e988a42d14d37\"" Nov 6 00:27:07.107174 systemd-networkd[1488]: calieb7a681a864: Gained carrier Nov 6 00:27:07.118666 containerd[1686]: time="2025-11-06T00:27:07.117393094Z" level=info msg="connecting to shim d8c8591bb513a864bd28c95282bbed9970a6d9582b86bb47d88e988a42d14d37" address="unix:///run/containerd/s/a809ac57ba757d75c8fb1deee7e6aa8ba6c045b05beb7eeb7f1698682d8d4002" protocol=ttrpc version=3 Nov 6 00:27:07.128651 containerd[1686]: 2025-11-06 00:27:06.783 [INFO][5036] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--n--3bced53249-k8s-calico--apiserver--6546975659--nxnh9-eth0 calico-apiserver-6546975659- calico-apiserver 797f66e1-c3e4-4d4b-8032-18c5d22ec25c 827 0 2025-11-06 00:26:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6546975659 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.1.0-n-3bced53249 calico-apiserver-6546975659-nxnh9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calieb7a681a864 [] [] }} ContainerID="09f74ee5defa7ef5d1c0090f52946e6d60a923168d7d7f6c955f2d230bee576d" Namespace="calico-apiserver" Pod="calico-apiserver-6546975659-nxnh9" WorkloadEndpoint="ci--4459.1.0--n--3bced53249-k8s-calico--apiserver--6546975659--nxnh9-" Nov 6 00:27:07.128651 containerd[1686]: 2025-11-06 00:27:06.784 [INFO][5036] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="09f74ee5defa7ef5d1c0090f52946e6d60a923168d7d7f6c955f2d230bee576d" Namespace="calico-apiserver" Pod="calico-apiserver-6546975659-nxnh9" WorkloadEndpoint="ci--4459.1.0--n--3bced53249-k8s-calico--apiserver--6546975659--nxnh9-eth0" Nov 6 00:27:07.128651 containerd[1686]: 2025-11-06 00:27:06.840 [INFO][5063] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="09f74ee5defa7ef5d1c0090f52946e6d60a923168d7d7f6c955f2d230bee576d" HandleID="k8s-pod-network.09f74ee5defa7ef5d1c0090f52946e6d60a923168d7d7f6c955f2d230bee576d" Workload="ci--4459.1.0--n--3bced53249-k8s-calico--apiserver--6546975659--nxnh9-eth0" Nov 6 00:27:07.128651 containerd[1686]: 2025-11-06 00:27:06.841 [INFO][5063] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="09f74ee5defa7ef5d1c0090f52946e6d60a923168d7d7f6c955f2d230bee576d" HandleID="k8s-pod-network.09f74ee5defa7ef5d1c0090f52946e6d60a923168d7d7f6c955f2d230bee576d" Workload="ci--4459.1.0--n--3bced53249-k8s-calico--apiserver--6546975659--nxnh9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00036f850), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.1.0-n-3bced53249", "pod":"calico-apiserver-6546975659-nxnh9", "timestamp":"2025-11-06 00:27:06.840806634 +0000 UTC"}, Hostname:"ci-4459.1.0-n-3bced53249", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:27:07.128651 containerd[1686]: 2025-11-06 00:27:06.841 [INFO][5063] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:27:07.128651 containerd[1686]: 2025-11-06 00:27:06.986 [INFO][5063] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 6 00:27:07.128651 containerd[1686]: 2025-11-06 00:27:06.986 [INFO][5063] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-n-3bced53249' Nov 6 00:27:07.128651 containerd[1686]: 2025-11-06 00:27:07.049 [INFO][5063] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.09f74ee5defa7ef5d1c0090f52946e6d60a923168d7d7f6c955f2d230bee576d" host="ci-4459.1.0-n-3bced53249" Nov 6 00:27:07.128651 containerd[1686]: 2025-11-06 00:27:07.056 [INFO][5063] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-n-3bced53249" Nov 6 00:27:07.128651 containerd[1686]: 2025-11-06 00:27:07.065 [INFO][5063] ipam/ipam.go 511: Trying affinity for 192.168.35.0/26 host="ci-4459.1.0-n-3bced53249" Nov 6 00:27:07.128651 containerd[1686]: 2025-11-06 00:27:07.069 [INFO][5063] ipam/ipam.go 158: Attempting to load block cidr=192.168.35.0/26 host="ci-4459.1.0-n-3bced53249" Nov 6 00:27:07.128651 containerd[1686]: 2025-11-06 00:27:07.072 [INFO][5063] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.35.0/26 host="ci-4459.1.0-n-3bced53249" Nov 6 00:27:07.128651 containerd[1686]: 2025-11-06 00:27:07.073 [INFO][5063] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.35.0/26 handle="k8s-pod-network.09f74ee5defa7ef5d1c0090f52946e6d60a923168d7d7f6c955f2d230bee576d" host="ci-4459.1.0-n-3bced53249" Nov 6 00:27:07.128651 containerd[1686]: 2025-11-06 00:27:07.075 [INFO][5063] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.09f74ee5defa7ef5d1c0090f52946e6d60a923168d7d7f6c955f2d230bee576d Nov 6 00:27:07.128651 containerd[1686]: 2025-11-06 00:27:07.089 [INFO][5063] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.35.0/26 handle="k8s-pod-network.09f74ee5defa7ef5d1c0090f52946e6d60a923168d7d7f6c955f2d230bee576d" host="ci-4459.1.0-n-3bced53249" Nov 6 00:27:07.128651 containerd[1686]: 2025-11-06 00:27:07.098 [INFO][5063] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.35.9/26] block=192.168.35.0/26 handle="k8s-pod-network.09f74ee5defa7ef5d1c0090f52946e6d60a923168d7d7f6c955f2d230bee576d" host="ci-4459.1.0-n-3bced53249" Nov 6 00:27:07.128651 containerd[1686]: 2025-11-06 00:27:07.098 [INFO][5063] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.35.9/26] handle="k8s-pod-network.09f74ee5defa7ef5d1c0090f52946e6d60a923168d7d7f6c955f2d230bee576d" host="ci-4459.1.0-n-3bced53249" Nov 6 00:27:07.128651 containerd[1686]: 2025-11-06 00:27:07.098 [INFO][5063] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 6 00:27:07.128651 containerd[1686]: 2025-11-06 00:27:07.098 [INFO][5063] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.35.9/26] IPv6=[] ContainerID="09f74ee5defa7ef5d1c0090f52946e6d60a923168d7d7f6c955f2d230bee576d" HandleID="k8s-pod-network.09f74ee5defa7ef5d1c0090f52946e6d60a923168d7d7f6c955f2d230bee576d" Workload="ci--4459.1.0--n--3bced53249-k8s-calico--apiserver--6546975659--nxnh9-eth0" Nov 6 00:27:07.129695 containerd[1686]: 2025-11-06 00:27:07.101 [INFO][5036] cni-plugin/k8s.go 418: Populated endpoint ContainerID="09f74ee5defa7ef5d1c0090f52946e6d60a923168d7d7f6c955f2d230bee576d" Namespace="calico-apiserver" Pod="calico-apiserver-6546975659-nxnh9" WorkloadEndpoint="ci--4459.1.0--n--3bced53249-k8s-calico--apiserver--6546975659--nxnh9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--3bced53249-k8s-calico--apiserver--6546975659--nxnh9-eth0", GenerateName:"calico-apiserver-6546975659-", Namespace:"calico-apiserver", SelfLink:"", UID:"797f66e1-c3e4-4d4b-8032-18c5d22ec25c", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 26, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6546975659", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-3bced53249", ContainerID:"", Pod:"calico-apiserver-6546975659-nxnh9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.35.9/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calieb7a681a864", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:27:07.129695 containerd[1686]: 2025-11-06 00:27:07.101 [INFO][5036] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.35.9/32] ContainerID="09f74ee5defa7ef5d1c0090f52946e6d60a923168d7d7f6c955f2d230bee576d" Namespace="calico-apiserver" Pod="calico-apiserver-6546975659-nxnh9" WorkloadEndpoint="ci--4459.1.0--n--3bced53249-k8s-calico--apiserver--6546975659--nxnh9-eth0" Nov 6 00:27:07.129695 containerd[1686]: 2025-11-06 00:27:07.101 [INFO][5036] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calieb7a681a864 ContainerID="09f74ee5defa7ef5d1c0090f52946e6d60a923168d7d7f6c955f2d230bee576d" Namespace="calico-apiserver" Pod="calico-apiserver-6546975659-nxnh9" WorkloadEndpoint="ci--4459.1.0--n--3bced53249-k8s-calico--apiserver--6546975659--nxnh9-eth0" Nov 6 00:27:07.129695 containerd[1686]: 2025-11-06 00:27:07.108 [INFO][5036] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="09f74ee5defa7ef5d1c0090f52946e6d60a923168d7d7f6c955f2d230bee576d" Namespace="calico-apiserver" Pod="calico-apiserver-6546975659-nxnh9" WorkloadEndpoint="ci--4459.1.0--n--3bced53249-k8s-calico--apiserver--6546975659--nxnh9-eth0" Nov 6 00:27:07.129695 containerd[1686]: 2025-11-06 00:27:07.108 [INFO][5036] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="09f74ee5defa7ef5d1c0090f52946e6d60a923168d7d7f6c955f2d230bee576d" Namespace="calico-apiserver" Pod="calico-apiserver-6546975659-nxnh9" WorkloadEndpoint="ci--4459.1.0--n--3bced53249-k8s-calico--apiserver--6546975659--nxnh9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--3bced53249-k8s-calico--apiserver--6546975659--nxnh9-eth0", GenerateName:"calico-apiserver-6546975659-", Namespace:"calico-apiserver", SelfLink:"", UID:"797f66e1-c3e4-4d4b-8032-18c5d22ec25c", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 26, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6546975659", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-3bced53249", ContainerID:"09f74ee5defa7ef5d1c0090f52946e6d60a923168d7d7f6c955f2d230bee576d", Pod:"calico-apiserver-6546975659-nxnh9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.35.9/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calieb7a681a864", MAC:"ee:c1:36:c9:df:b9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:27:07.129695 containerd[1686]: 2025-11-06 00:27:07.124 [INFO][5036] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="09f74ee5defa7ef5d1c0090f52946e6d60a923168d7d7f6c955f2d230bee576d" Namespace="calico-apiserver" Pod="calico-apiserver-6546975659-nxnh9" WorkloadEndpoint="ci--4459.1.0--n--3bced53249-k8s-calico--apiserver--6546975659--nxnh9-eth0" Nov 6 00:27:07.137017 systemd-networkd[1488]: calie4421d19dd9: Gained IPv6LL Nov 6 00:27:07.144014 systemd[1]: Started cri-containerd-d8c8591bb513a864bd28c95282bbed9970a6d9582b86bb47d88e988a42d14d37.scope - libcontainer container d8c8591bb513a864bd28c95282bbed9970a6d9582b86bb47d88e988a42d14d37. Nov 6 00:27:07.170685 containerd[1686]: time="2025-11-06T00:27:07.170553650Z" level=info msg="connecting to shim 09f74ee5defa7ef5d1c0090f52946e6d60a923168d7d7f6c955f2d230bee576d" address="unix:///run/containerd/s/d67bc240e4baa200c5a03b808eb93f239612435898d505165bcbbc5a213e6495" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:27:07.184621 containerd[1686]: time="2025-11-06T00:27:07.184540072Z" level=info msg="StartContainer for \"d8c8591bb513a864bd28c95282bbed9970a6d9582b86bb47d88e988a42d14d37\" returns successfully" Nov 6 00:27:07.207180 systemd[1]: Started cri-containerd-09f74ee5defa7ef5d1c0090f52946e6d60a923168d7d7f6c955f2d230bee576d.scope - libcontainer container 09f74ee5defa7ef5d1c0090f52946e6d60a923168d7d7f6c955f2d230bee576d. 
Nov 6 00:27:07.210807 containerd[1686]: time="2025-11-06T00:27:07.210735863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-2k7xr,Uid:4902c4e4-3977-4e0d-b87b-89acc6926de6,Namespace:calico-system,Attempt:0,} returns sandbox id \"eb37cd3ca6f6902e68d0d0e96f2f1d9a368522e4b4fdea3afb9868183bc008d4\"" Nov 6 00:27:07.214156 containerd[1686]: time="2025-11-06T00:27:07.214132069Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 6 00:27:07.282403 containerd[1686]: time="2025-11-06T00:27:07.282330055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6546975659-nxnh9,Uid:797f66e1-c3e4-4d4b-8032-18c5d22ec25c,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"09f74ee5defa7ef5d1c0090f52946e6d60a923168d7d7f6c955f2d230bee576d\"" Nov 6 00:27:07.525890 containerd[1686]: time="2025-11-06T00:27:07.525725689Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:27:07.529659 containerd[1686]: time="2025-11-06T00:27:07.529623167Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 6 00:27:07.530099 containerd[1686]: time="2025-11-06T00:27:07.529584392Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 6 00:27:07.530384 kubelet[3152]: E1106 00:27:07.530348 3152 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 00:27:07.530454 kubelet[3152]: E1106 00:27:07.530401 3152 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 00:27:07.531956 containerd[1686]: time="2025-11-06T00:27:07.531930242Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:27:07.532149 kubelet[3152]: E1106 00:27:07.532118 3152 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-2k7xr_calico-system(4902c4e4-3977-4e0d-b87b-89acc6926de6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 6 00:27:07.532195 kubelet[3152]: E1106 00:27:07.532174 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-2k7xr" podUID="4902c4e4-3977-4e0d-b87b-89acc6926de6" Nov 6 
00:27:07.807414 kubelet[3152]: E1106 00:27:07.807284 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-2k7xr" podUID="4902c4e4-3977-4e0d-b87b-89acc6926de6" Nov 6 00:27:07.812947 kubelet[3152]: E1106 00:27:07.812903 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8559b785f9-fhvmw" podUID="c86a3ffd-cf2f-4e08-9736-f5e39ae366f1" Nov 6 00:27:07.852137 containerd[1686]: time="2025-11-06T00:27:07.852009045Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:27:07.854699 containerd[1686]: time="2025-11-06T00:27:07.854541894Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:27:07.855371 containerd[1686]: time="2025-11-06T00:27:07.854788929Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:27:07.855545 kubelet[3152]: E1106 00:27:07.855496 3152 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:27:07.855617 kubelet[3152]: E1106 00:27:07.855604 3152 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:27:07.856018 kubelet[3152]: E1106 00:27:07.855723 3152 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6546975659-nxnh9_calico-apiserver(797f66e1-c3e4-4d4b-8032-18c5d22ec25c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:27:07.856131 kubelet[3152]: E1106 00:27:07.856114 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound 
desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6546975659-nxnh9" podUID="797f66e1-c3e4-4d4b-8032-18c5d22ec25c" Nov 6 00:27:07.856816 kubelet[3152]: I1106 00:27:07.856686 3152 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-fhgxb" podStartSLOduration=41.856672812 podStartE2EDuration="41.856672812s" podCreationTimestamp="2025-11-06 00:26:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:27:07.83827005 +0000 UTC m=+47.299497503" watchObservedRunningTime="2025-11-06 00:27:07.856672812 +0000 UTC m=+47.317900191" Nov 6 00:27:08.097177 systemd-networkd[1488]: cali4a0a773f779: Gained IPv6LL Nov 6 00:27:08.290304 systemd-networkd[1488]: calidaa17cd4382: Gained IPv6LL Nov 6 00:27:08.737047 systemd-networkd[1488]: calieb7a681a864: Gained IPv6LL Nov 6 00:27:08.814622 kubelet[3152]: E1106 00:27:08.814587 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6546975659-nxnh9" podUID="797f66e1-c3e4-4d4b-8032-18c5d22ec25c" Nov 6 00:27:08.817503 kubelet[3152]: E1106 00:27:08.815078 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-2k7xr" podUID="4902c4e4-3977-4e0d-b87b-89acc6926de6" Nov 6 00:27:13.627904 containerd[1686]: time="2025-11-06T00:27:13.627818660Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 6 00:27:13.877016 containerd[1686]: time="2025-11-06T00:27:13.876962578Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:27:13.880018 containerd[1686]: time="2025-11-06T00:27:13.879618008Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 6 00:27:13.880018 containerd[1686]: time="2025-11-06T00:27:13.879698210Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 6 00:27:13.880120 kubelet[3152]: E1106 00:27:13.879979 3152 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 00:27:13.880347 kubelet[3152]: E1106 00:27:13.880132 3152 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 00:27:13.880347 kubelet[3152]: E1106 00:27:13.880203 3152 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-6455d868-gqpdl_calico-system(5f9595dd-def0-470b-b230-616c1ccc6ebf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 6 00:27:13.881818 containerd[1686]: time="2025-11-06T00:27:13.881796353Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 6 00:27:14.128320 containerd[1686]: time="2025-11-06T00:27:14.128277272Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:27:14.135059 containerd[1686]: time="2025-11-06T00:27:14.134986977Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 6 00:27:14.135059 containerd[1686]: time="2025-11-06T00:27:14.135051427Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 6 00:27:14.135415 kubelet[3152]: E1106 00:27:14.135379 3152 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 00:27:14.135501 kubelet[3152]: E1106 00:27:14.135423 3152 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 00:27:14.135619 kubelet[3152]: E1106 00:27:14.135508 3152 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-6455d868-gqpdl_calico-system(5f9595dd-def0-470b-b230-616c1ccc6ebf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 6 00:27:14.135619 kubelet[3152]: E1106 00:27:14.135549 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull 
and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6455d868-gqpdl" podUID="5f9595dd-def0-470b-b230-616c1ccc6ebf" Nov 6 00:27:18.629303 containerd[1686]: time="2025-11-06T00:27:18.629055278Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:27:18.870510 containerd[1686]: time="2025-11-06T00:27:18.870471837Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:27:18.873078 containerd[1686]: time="2025-11-06T00:27:18.872990423Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:27:18.873078 containerd[1686]: time="2025-11-06T00:27:18.873057998Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:27:18.873353 kubelet[3152]: E1106 00:27:18.873283 3152 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:27:18.873628 kubelet[3152]: E1106 00:27:18.873355 3152 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:27:18.873628 kubelet[3152]: E1106 00:27:18.873430 3152 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-8559b785f9-n2pht_calico-apiserver(c9883439-5e85-428a-8c5e-1baa916caf76): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:27:18.873628 kubelet[3152]: E1106 00:27:18.873486 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8559b785f9-n2pht" podUID="c9883439-5e85-428a-8c5e-1baa916caf76" Nov 6 00:27:19.628164 containerd[1686]: time="2025-11-06T00:27:19.627484270Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 6 00:27:19.928060 containerd[1686]: time="2025-11-06T00:27:19.928015763Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:27:19.931870 containerd[1686]: time="2025-11-06T00:27:19.931836473Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 6 00:27:19.931961 containerd[1686]: time="2025-11-06T00:27:19.931899975Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 6 00:27:19.932125 kubelet[3152]: E1106 00:27:19.932099 3152 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 00:27:19.932508 kubelet[3152]: E1106 00:27:19.932134 3152 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 00:27:19.932508 kubelet[3152]: E1106 00:27:19.932358 3152 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-cdd9bb7bc-2b49s_calico-system(62834f18-0344-4626-bcdf-b650cdc6187d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 6 00:27:19.932611 containerd[1686]: time="2025-11-06T00:27:19.932410797Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 6 00:27:19.932768 kubelet[3152]: E1106 00:27:19.932701 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-cdd9bb7bc-2b49s" podUID="62834f18-0344-4626-bcdf-b650cdc6187d" Nov 6 00:27:20.226361 containerd[1686]: time="2025-11-06T00:27:20.226254079Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:27:20.243098 containerd[1686]: time="2025-11-06T00:27:20.243056122Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 6 00:27:20.243197 containerd[1686]: 
time="2025-11-06T00:27:20.243135084Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 6 00:27:20.243345 kubelet[3152]: E1106 00:27:20.243303 3152 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 00:27:20.243402 kubelet[3152]: E1106 00:27:20.243356 3152 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 00:27:20.243674 kubelet[3152]: E1106 00:27:20.243653 3152 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-4j9vt_calico-system(9c757a1d-95f3-4cbd-9adf-b65065b2eb8c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 6 00:27:20.243762 containerd[1686]: time="2025-11-06T00:27:20.243735576Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:27:20.494174 containerd[1686]: time="2025-11-06T00:27:20.494077520Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:27:20.497846 containerd[1686]: time="2025-11-06T00:27:20.497805694Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:27:20.497923 containerd[1686]: time="2025-11-06T00:27:20.497874315Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:27:20.498042 kubelet[3152]: E1106 00:27:20.498012 3152 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:27:20.498116 kubelet[3152]: E1106 00:27:20.498051 3152 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:27:20.498277 kubelet[3152]: E1106 00:27:20.498249 3152 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6546975659-nxnh9_calico-apiserver(797f66e1-c3e4-4d4b-8032-18c5d22ec25c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:27:20.498357 containerd[1686]: time="2025-11-06T00:27:20.498315659Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 6 00:27:20.498559 kubelet[3152]: E1106 00:27:20.498532 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6546975659-nxnh9" podUID="797f66e1-c3e4-4d4b-8032-18c5d22ec25c" Nov 6 00:27:20.737274 containerd[1686]: time="2025-11-06T00:27:20.737238581Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:27:20.739783 containerd[1686]: time="2025-11-06T00:27:20.739755894Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 6 00:27:20.739848 containerd[1686]: time="2025-11-06T00:27:20.739809495Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 6 00:27:20.739952 kubelet[3152]: E1106 00:27:20.739929 3152 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 00:27:20.739995 kubelet[3152]: E1106 00:27:20.739960 3152 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 00:27:20.740056 kubelet[3152]: E1106 00:27:20.740029 3152 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-4j9vt_calico-system(9c757a1d-95f3-4cbd-9adf-b65065b2eb8c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 6 00:27:20.740139 kubelet[3152]: E1106 00:27:20.740080 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with 
ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4j9vt" podUID="9c757a1d-95f3-4cbd-9adf-b65065b2eb8c" Nov 6 00:27:21.627408 containerd[1686]: time="2025-11-06T00:27:21.627372449Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:27:21.876745 containerd[1686]: time="2025-11-06T00:27:21.876701269Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:27:21.880105 containerd[1686]: time="2025-11-06T00:27:21.879973987Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:27:21.880383 containerd[1686]: time="2025-11-06T00:27:21.880146764Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:27:21.880662 kubelet[3152]: E1106 00:27:21.880460 3152 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:27:21.880662 kubelet[3152]: E1106 00:27:21.880498 3152 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:27:21.880662 kubelet[3152]: E1106 00:27:21.880586 3152 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-8559b785f9-fhvmw_calico-apiserver(c86a3ffd-cf2f-4e08-9736-f5e39ae366f1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:27:21.880662 kubelet[3152]: E1106 00:27:21.880614 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8559b785f9-fhvmw" podUID="c86a3ffd-cf2f-4e08-9736-f5e39ae366f1" Nov 6 00:27:22.628211 containerd[1686]: time="2025-11-06T00:27:22.628087842Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 6 00:27:22.877327 containerd[1686]: time="2025-11-06T00:27:22.877282067Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:27:22.880376 containerd[1686]: time="2025-11-06T00:27:22.880144251Z" 
level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 6 00:27:22.880617 kubelet[3152]: E1106 00:27:22.880312 3152 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 00:27:22.880617 kubelet[3152]: E1106 00:27:22.880348 3152 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 00:27:22.880617 kubelet[3152]: E1106 00:27:22.880422 3152 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-2k7xr_calico-system(4902c4e4-3977-4e0d-b87b-89acc6926de6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 6 00:27:22.880617 kubelet[3152]: E1106 00:27:22.880458 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-2k7xr" podUID="4902c4e4-3977-4e0d-b87b-89acc6926de6" Nov 6 00:27:22.881103 containerd[1686]: time="2025-11-06T00:27:22.880899074Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 6 00:27:27.627768 kubelet[3152]: E1106 00:27:27.627725 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6455d868-gqpdl" podUID="5f9595dd-def0-470b-b230-616c1ccc6ebf" Nov 6 00:27:28.830101 containerd[1686]: time="2025-11-06T00:27:28.830051636Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"80581d922573d0643232f6b1aec8e4023217d6ef06e716dd6684238cc8d83981\" id:\"65c1bf3a4554802a45659d7860224b58880cb84c4c83f5b24d8d2a44c208651d\" pid:5316 exited_at:{seconds:1762388848 nanos:829797429}" Nov 6 00:27:32.629571 kubelet[3152]: E1106 00:27:32.629143 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8559b785f9-n2pht" podUID="c9883439-5e85-428a-8c5e-1baa916caf76" Nov 6 00:27:34.628511 kubelet[3152]: E1106 00:27:34.627385 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-cdd9bb7bc-2b49s" podUID="62834f18-0344-4626-bcdf-b650cdc6187d" Nov 6 00:27:35.627803 kubelet[3152]: E1106 00:27:35.627548 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8559b785f9-fhvmw" podUID="c86a3ffd-cf2f-4e08-9736-f5e39ae366f1" Nov 6 00:27:35.627803 kubelet[3152]: E1106 00:27:35.627543 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-2k7xr" podUID="4902c4e4-3977-4e0d-b87b-89acc6926de6" Nov 6 00:27:35.627803 kubelet[3152]: E1106 00:27:35.627617 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6546975659-nxnh9" podUID="797f66e1-c3e4-4d4b-8032-18c5d22ec25c" Nov 6 00:27:35.629154 kubelet[3152]: E1106 00:27:35.629082 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to 
\"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4j9vt" podUID="9c757a1d-95f3-4cbd-9adf-b65065b2eb8c" Nov 6 00:27:40.629804 containerd[1686]: time="2025-11-06T00:27:40.629767565Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 6 00:27:40.881079 containerd[1686]: time="2025-11-06T00:27:40.880451490Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:27:40.883837 containerd[1686]: time="2025-11-06T00:27:40.883752708Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 6 00:27:40.883837 containerd[1686]: time="2025-11-06T00:27:40.883775924Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 6 00:27:40.884060 kubelet[3152]: E1106 00:27:40.883970 3152 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 00:27:40.884294 kubelet[3152]: E1106 00:27:40.884063 3152 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 00:27:40.884294 kubelet[3152]: E1106 00:27:40.884144 3152 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-6455d868-gqpdl_calico-system(5f9595dd-def0-470b-b230-616c1ccc6ebf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 6 00:27:40.885161 containerd[1686]: time="2025-11-06T00:27:40.885139258Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 6 00:27:41.140154 containerd[1686]: time="2025-11-06T00:27:41.140032075Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:27:41.142490 containerd[1686]: time="2025-11-06T00:27:41.142460851Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 6 00:27:41.142567 containerd[1686]: time="2025-11-06T00:27:41.142533148Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 6 00:27:41.142714 kubelet[3152]: E1106 00:27:41.142683 3152 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 00:27:41.142790 kubelet[3152]: E1106 00:27:41.142725 3152 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 00:27:41.142834 kubelet[3152]: E1106 00:27:41.142809 3152 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-6455d868-gqpdl_calico-system(5f9595dd-def0-470b-b230-616c1ccc6ebf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 6 00:27:41.142940 kubelet[3152]: E1106 00:27:41.142853 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6455d868-gqpdl" podUID="5f9595dd-def0-470b-b230-616c1ccc6ebf" Nov 6 00:27:43.627619 containerd[1686]: time="2025-11-06T00:27:43.627462708Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:27:43.887804 containerd[1686]: time="2025-11-06T00:27:43.887677418Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:27:43.890780 containerd[1686]: time="2025-11-06T00:27:43.890708785Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:27:43.890780 containerd[1686]: time="2025-11-06T00:27:43.890730349Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:27:43.890961 kubelet[3152]: E1106 00:27:43.890923 3152 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:27:43.891239 kubelet[3152]: E1106 00:27:43.890968 3152 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:27:43.891239 kubelet[3152]: E1106 00:27:43.891066 3152 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-8559b785f9-n2pht_calico-apiserver(c9883439-5e85-428a-8c5e-1baa916caf76): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:27:43.891805 kubelet[3152]: E1106 00:27:43.891099 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8559b785f9-n2pht" podUID="c9883439-5e85-428a-8c5e-1baa916caf76" Nov 6 00:27:47.629398 containerd[1686]: time="2025-11-06T00:27:47.629102484Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 6 00:27:47.882014 containerd[1686]: time="2025-11-06T00:27:47.881813103Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:27:47.884270 containerd[1686]: time="2025-11-06T00:27:47.884160696Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 6 00:27:47.884270 containerd[1686]: time="2025-11-06T00:27:47.884246671Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 6 00:27:47.884531 kubelet[3152]: E1106 00:27:47.884493 3152 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 00:27:47.884903 kubelet[3152]: E1106 00:27:47.884629 3152 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 00:27:47.885330 kubelet[3152]: E1106 00:27:47.884977 3152 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-4j9vt_calico-system(9c757a1d-95f3-4cbd-9adf-b65065b2eb8c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 6 00:27:47.887115 containerd[1686]: time="2025-11-06T00:27:47.886746763Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 6 00:27:48.132409 containerd[1686]: time="2025-11-06T00:27:48.132291138Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:27:48.134619 containerd[1686]: time="2025-11-06T00:27:48.134578328Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 6 00:27:48.134723 containerd[1686]: time="2025-11-06T00:27:48.134662200Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 6 00:27:48.134852 kubelet[3152]: E1106 00:27:48.134814 3152 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 00:27:48.134920 kubelet[3152]: E1106 00:27:48.134859 3152 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 00:27:48.134970 kubelet[3152]: E1106 00:27:48.134954 3152 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-4j9vt_calico-system(9c757a1d-95f3-4cbd-9adf-b65065b2eb8c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 6 00:27:48.135291 kubelet[3152]: E1106 00:27:48.135261 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4j9vt" podUID="9c757a1d-95f3-4cbd-9adf-b65065b2eb8c" Nov 6 00:27:48.629643 containerd[1686]: time="2025-11-06T00:27:48.629581002Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 6 00:27:48.915005 containerd[1686]: time="2025-11-06T00:27:48.914841721Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:27:48.917594 containerd[1686]: time="2025-11-06T00:27:48.917532346Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 6 00:27:48.917594 containerd[1686]: time="2025-11-06T00:27:48.917576156Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 6 00:27:48.917772 kubelet[3152]: E1106 00:27:48.917701 3152 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 00:27:48.918080 kubelet[3152]: E1106 00:27:48.917780 3152 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 00:27:48.918080 kubelet[3152]: E1106 00:27:48.918036 3152 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-2k7xr_calico-system(4902c4e4-3977-4e0d-b87b-89acc6926de6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 6 00:27:48.918129 kubelet[3152]: E1106 00:27:48.918075 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-2k7xr" podUID="4902c4e4-3977-4e0d-b87b-89acc6926de6" Nov 6 00:27:48.918426 containerd[1686]: time="2025-11-06T00:27:48.918291240Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 6 00:27:49.161454 containerd[1686]: time="2025-11-06T00:27:49.161422829Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:27:49.165820 containerd[1686]: time="2025-11-06T00:27:49.165547998Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 6 00:27:49.165820 containerd[1686]: time="2025-11-06T00:27:49.165608536Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 6 00:27:49.166135 kubelet[3152]: E1106 00:27:49.165715 3152 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 00:27:49.166135 kubelet[3152]: E1106 00:27:49.165767 3152 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 00:27:49.166135 kubelet[3152]: E1106 00:27:49.165864 3152 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-cdd9bb7bc-2b49s_calico-system(62834f18-0344-4626-bcdf-b650cdc6187d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 6 00:27:49.166135 kubelet[3152]: E1106 00:27:49.165920 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-cdd9bb7bc-2b49s" podUID="62834f18-0344-4626-bcdf-b650cdc6187d" Nov 6 00:27:49.629330 containerd[1686]: time="2025-11-06T00:27:49.629093293Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:27:49.873710 containerd[1686]: time="2025-11-06T00:27:49.873549202Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:27:49.876224 containerd[1686]: time="2025-11-06T00:27:49.876171050Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:27:49.876441 containerd[1686]: time="2025-11-06T00:27:49.876204169Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:27:49.876738 kubelet[3152]: E1106 00:27:49.876707 3152 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:27:49.876802 kubelet[3152]: E1106 00:27:49.876751 3152 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:27:49.878408 kubelet[3152]: E1106 00:27:49.878380 3152 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6546975659-nxnh9_calico-apiserver(797f66e1-c3e4-4d4b-8032-18c5d22ec25c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:27:49.878518 kubelet[3152]: E1106 00:27:49.878421 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6546975659-nxnh9" podUID="797f66e1-c3e4-4d4b-8032-18c5d22ec25c" Nov 6 00:27:49.878962 containerd[1686]: time="2025-11-06T00:27:49.878901535Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:27:50.119636 containerd[1686]: time="2025-11-06T00:27:50.119596246Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:27:50.122266 containerd[1686]: time="2025-11-06T00:27:50.122244013Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:27:50.122322 containerd[1686]: time="2025-11-06T00:27:50.122247858Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:27:50.122465 kubelet[3152]: E1106 00:27:50.122430 3152 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:27:50.124839 kubelet[3152]: E1106 00:27:50.122475 3152 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:27:50.124839 kubelet[3152]: E1106 00:27:50.122547 3152 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver 
start failed in pod calico-apiserver-8559b785f9-fhvmw_calico-apiserver(c86a3ffd-cf2f-4e08-9736-f5e39ae366f1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:27:50.124839 kubelet[3152]: E1106 00:27:50.122578 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8559b785f9-fhvmw" podUID="c86a3ffd-cf2f-4e08-9736-f5e39ae366f1" Nov 6 00:27:56.632332 kubelet[3152]: E1106 00:27:56.632276 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6455d868-gqpdl" podUID="5f9595dd-def0-470b-b230-616c1ccc6ebf" Nov 6 00:27:58.628318 kubelet[3152]: E1106 00:27:58.628281 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8559b785f9-n2pht" podUID="c9883439-5e85-428a-8c5e-1baa916caf76" Nov 6 00:27:58.830912 containerd[1686]: time="2025-11-06T00:27:58.830826915Z" level=info msg="TaskExit event in podsandbox handler container_id:\"80581d922573d0643232f6b1aec8e4023217d6ef06e716dd6684238cc8d83981\" id:\"ff2f74769b0394dc533c3bbfa3edae6c446dd4483285344270de9e281d8540b1\" pid:5352 exited_at:{seconds:1762388878 nanos:830595278}" Nov 6 00:28:00.631910 kubelet[3152]: E1106 00:28:00.631348 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4j9vt" podUID="9c757a1d-95f3-4cbd-9adf-b65065b2eb8c" Nov 6 00:28:02.627966 kubelet[3152]: E1106 00:28:02.627359 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6546975659-nxnh9" podUID="797f66e1-c3e4-4d4b-8032-18c5d22ec25c" Nov 6 00:28:02.631338 kubelet[3152]: E1106 00:28:02.631300 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8559b785f9-fhvmw" podUID="c86a3ffd-cf2f-4e08-9736-f5e39ae366f1" Nov 6 00:28:03.627922 kubelet[3152]: E1106 00:28:03.627693 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-2k7xr" podUID="4902c4e4-3977-4e0d-b87b-89acc6926de6" Nov 6 00:28:03.628711 kubelet[3152]: E1106 00:28:03.628092 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-cdd9bb7bc-2b49s" podUID="62834f18-0344-4626-bcdf-b650cdc6187d" Nov 6 00:28:07.629697 kubelet[3152]: E1106 00:28:07.628977 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not 
found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6455d868-gqpdl" podUID="5f9595dd-def0-470b-b230-616c1ccc6ebf" Nov 6 00:28:07.856475 systemd[1]: Started sshd@7-10.200.8.20:22-10.200.16.10:38864.service - OpenSSH per-connection server daemon (10.200.16.10:38864). Nov 6 00:28:08.486752 sshd[5369]: Accepted publickey for core from 10.200.16.10 port 38864 ssh2: RSA SHA256:GgVgzeKdNz/6ufBGEfuPWDEbcS5ymkddUfBaUbl3B1Q Nov 6 00:28:08.487966 sshd-session[5369]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:28:08.493355 systemd-logind[1674]: New session 10 of user core. Nov 6 00:28:08.499061 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 6 00:28:09.034865 sshd[5372]: Connection closed by 10.200.16.10 port 38864 Nov 6 00:28:09.036346 sshd-session[5369]: pam_unix(sshd:session): session closed for user core Nov 6 00:28:09.040939 systemd-logind[1674]: Session 10 logged out. Waiting for processes to exit. Nov 6 00:28:09.041049 systemd[1]: sshd@7-10.200.8.20:22-10.200.16.10:38864.service: Deactivated successfully. Nov 6 00:28:09.043208 systemd[1]: session-10.scope: Deactivated successfully. Nov 6 00:28:09.045791 systemd-logind[1674]: Removed session 10. Nov 6 00:28:11.628240 kubelet[3152]: E1106 00:28:11.628142 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8559b785f9-n2pht" podUID="c9883439-5e85-428a-8c5e-1baa916caf76" Nov 6 00:28:12.630447 kubelet[3152]: E1106 00:28:12.629988 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4j9vt" podUID="9c757a1d-95f3-4cbd-9adf-b65065b2eb8c" Nov 6 00:28:14.151865 systemd[1]: Started sshd@8-10.200.8.20:22-10.200.16.10:53608.service - OpenSSH per-connection server daemon (10.200.16.10:53608). 
Nov 6 00:28:14.789546 sshd[5384]: Accepted publickey for core from 10.200.16.10 port 53608 ssh2: RSA SHA256:GgVgzeKdNz/6ufBGEfuPWDEbcS5ymkddUfBaUbl3B1Q Nov 6 00:28:14.791190 sshd-session[5384]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:28:14.799798 systemd-logind[1674]: New session 11 of user core. Nov 6 00:28:14.804284 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 6 00:28:15.326593 sshd[5387]: Connection closed by 10.200.16.10 port 53608 Nov 6 00:28:15.327047 sshd-session[5384]: pam_unix(sshd:session): session closed for user core Nov 6 00:28:15.329789 systemd-logind[1674]: Session 11 logged out. Waiting for processes to exit. Nov 6 00:28:15.332279 systemd[1]: sshd@8-10.200.8.20:22-10.200.16.10:53608.service: Deactivated successfully. Nov 6 00:28:15.335081 systemd[1]: session-11.scope: Deactivated successfully. Nov 6 00:28:15.340825 systemd-logind[1674]: Removed session 11. Nov 6 00:28:15.628424 kubelet[3152]: E1106 00:28:15.627500 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-2k7xr" podUID="4902c4e4-3977-4e0d-b87b-89acc6926de6" Nov 6 00:28:15.628424 kubelet[3152]: E1106 00:28:15.628374 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6546975659-nxnh9" podUID="797f66e1-c3e4-4d4b-8032-18c5d22ec25c" Nov 6 00:28:16.631354 kubelet[3152]: E1106 00:28:16.631316 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8559b785f9-fhvmw" podUID="c86a3ffd-cf2f-4e08-9736-f5e39ae366f1" Nov 6 00:28:17.627161 kubelet[3152]: E1106 00:28:17.627105 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-cdd9bb7bc-2b49s" podUID="62834f18-0344-4626-bcdf-b650cdc6187d" Nov 6 00:28:20.437993 systemd[1]: 
Started sshd@9-10.200.8.20:22-10.200.16.10:51850.service - OpenSSH per-connection server daemon (10.200.16.10:51850). Nov 6 00:28:21.064961 sshd[5400]: Accepted publickey for core from 10.200.16.10 port 51850 ssh2: RSA SHA256:GgVgzeKdNz/6ufBGEfuPWDEbcS5ymkddUfBaUbl3B1Q Nov 6 00:28:21.066684 sshd-session[5400]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:28:21.071348 systemd-logind[1674]: New session 12 of user core. Nov 6 00:28:21.078154 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 6 00:28:21.567899 sshd[5405]: Connection closed by 10.200.16.10 port 51850 Nov 6 00:28:21.568057 sshd-session[5400]: pam_unix(sshd:session): session closed for user core Nov 6 00:28:21.572152 systemd-logind[1674]: Session 12 logged out. Waiting for processes to exit. Nov 6 00:28:21.573057 systemd[1]: sshd@9-10.200.8.20:22-10.200.16.10:51850.service: Deactivated successfully. Nov 6 00:28:21.575491 systemd[1]: session-12.scope: Deactivated successfully. Nov 6 00:28:21.578492 systemd-logind[1674]: Removed session 12. Nov 6 00:28:21.629066 containerd[1686]: time="2025-11-06T00:28:21.628944106Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 6 00:28:21.678174 systemd[1]: Started sshd@10-10.200.8.20:22-10.200.16.10:51852.service - OpenSSH per-connection server daemon (10.200.16.10:51852). Nov 6 00:28:21.864240 containerd[1686]: time="2025-11-06T00:28:21.864166186Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:28:21.866900 containerd[1686]: time="2025-11-06T00:28:21.866833297Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 6 00:28:21.867102 containerd[1686]: time="2025-11-06T00:28:21.867030093Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 6 00:28:21.867371 kubelet[3152]: E1106 00:28:21.867310 3152 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 00:28:21.868013 kubelet[3152]: E1106 00:28:21.867352 3152 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 00:28:21.868013 kubelet[3152]: E1106 00:28:21.867621 3152 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-6455d868-gqpdl_calico-system(5f9595dd-def0-470b-b230-616c1ccc6ebf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 6 00:28:21.869794 containerd[1686]: time="2025-11-06T00:28:21.869436587Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 6 00:28:22.110719 containerd[1686]: time="2025-11-06T00:28:22.110688349Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:28:22.113171 containerd[1686]: time="2025-11-06T00:28:22.113147360Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 6 00:28:22.113228 containerd[1686]: time="2025-11-06T00:28:22.113203044Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 6 00:28:22.113367 kubelet[3152]: E1106 00:28:22.113315 3152 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 00:28:22.113403 kubelet[3152]: E1106 00:28:22.113374 3152 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 00:28:22.113467 kubelet[3152]: E1106 00:28:22.113450 3152 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-6455d868-gqpdl_calico-system(5f9595dd-def0-470b-b230-616c1ccc6ebf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 6 00:28:22.113800 kubelet[3152]: E1106 00:28:22.113500 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6455d868-gqpdl" podUID="5f9595dd-def0-470b-b230-616c1ccc6ebf" Nov 6 00:28:22.315443 sshd[5424]: Accepted publickey for core from 10.200.16.10 port 51852 ssh2: RSA SHA256:GgVgzeKdNz/6ufBGEfuPWDEbcS5ymkddUfBaUbl3B1Q Nov 6 00:28:22.316432 sshd-session[5424]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:28:22.319963 systemd-logind[1674]: New session 13 of user core. Nov 6 00:28:22.325581 systemd[1]: Started session-13.scope - Session 13 of User core. 
Nov 6 00:28:22.825792 sshd[5427]: Connection closed by 10.200.16.10 port 51852 Nov 6 00:28:22.827033 sshd-session[5424]: pam_unix(sshd:session): session closed for user core Nov 6 00:28:22.830022 systemd-logind[1674]: Session 13 logged out. Waiting for processes to exit. Nov 6 00:28:22.831225 systemd[1]: sshd@10-10.200.8.20:22-10.200.16.10:51852.service: Deactivated successfully. Nov 6 00:28:22.833666 systemd[1]: session-13.scope: Deactivated successfully. Nov 6 00:28:22.838427 systemd-logind[1674]: Removed session 13. Nov 6 00:28:22.938643 systemd[1]: Started sshd@11-10.200.8.20:22-10.200.16.10:51864.service - OpenSSH per-connection server daemon (10.200.16.10:51864). Nov 6 00:28:23.573565 sshd[5436]: Accepted publickey for core from 10.200.16.10 port 51864 ssh2: RSA SHA256:GgVgzeKdNz/6ufBGEfuPWDEbcS5ymkddUfBaUbl3B1Q Nov 6 00:28:23.574859 sshd-session[5436]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:28:23.578979 systemd-logind[1674]: New session 14 of user core. Nov 6 00:28:23.584002 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 6 00:28:24.074832 sshd[5442]: Connection closed by 10.200.16.10 port 51864 Nov 6 00:28:24.075315 sshd-session[5436]: pam_unix(sshd:session): session closed for user core Nov 6 00:28:24.078628 systemd[1]: sshd@11-10.200.8.20:22-10.200.16.10:51864.service: Deactivated successfully. Nov 6 00:28:24.080892 systemd[1]: session-14.scope: Deactivated successfully. Nov 6 00:28:24.082535 systemd-logind[1674]: Session 14 logged out. Waiting for processes to exit. Nov 6 00:28:24.084227 systemd-logind[1674]: Removed session 14. Nov 6 00:28:24.628659 containerd[1686]: time="2025-11-06T00:28:24.627909667Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:28:24.882240 containerd[1686]: time="2025-11-06T00:28:24.882004093Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:28:24.884895 containerd[1686]: time="2025-11-06T00:28:24.884599424Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:28:24.884895 containerd[1686]: time="2025-11-06T00:28:24.884690774Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:28:24.885002 kubelet[3152]: E1106 00:28:24.884970 3152 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:28:24.885239 kubelet[3152]: E1106 00:28:24.885009 3152 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:28:24.885239 kubelet[3152]: E1106 00:28:24.885074 3152 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod 
calico-apiserver-8559b785f9-n2pht_calico-apiserver(c9883439-5e85-428a-8c5e-1baa916caf76): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:28:24.885239 kubelet[3152]: E1106 00:28:24.885104 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8559b785f9-n2pht" podUID="c9883439-5e85-428a-8c5e-1baa916caf76" Nov 6 00:28:25.628314 kubelet[3152]: E1106 00:28:25.628269 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4j9vt" podUID="9c757a1d-95f3-4cbd-9adf-b65065b2eb8c" Nov 6 00:28:28.629713 kubelet[3152]: E1106 00:28:28.628867 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-cdd9bb7bc-2b49s" podUID="62834f18-0344-4626-bcdf-b650cdc6187d" Nov 6 00:28:28.629713 kubelet[3152]: E1106 00:28:28.629509 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-2k7xr" podUID="4902c4e4-3977-4e0d-b87b-89acc6926de6" Nov 6 00:28:28.833102 containerd[1686]: time="2025-11-06T00:28:28.832845765Z" level=info msg="TaskExit event in podsandbox handler container_id:\"80581d922573d0643232f6b1aec8e4023217d6ef06e716dd6684238cc8d83981\" 
id:\"d310bc1e63217c24951605e50876c61a92743030dde5e647e2c763c0c995d215\" pid:5470 exited_at:{seconds:1762388908 nanos:832267228}" Nov 6 00:28:29.190923 systemd[1]: Started sshd@12-10.200.8.20:22-10.200.16.10:51868.service - OpenSSH per-connection server daemon (10.200.16.10:51868). Nov 6 00:28:29.627925 kubelet[3152]: E1106 00:28:29.627215 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6546975659-nxnh9" podUID="797f66e1-c3e4-4d4b-8032-18c5d22ec25c" Nov 6 00:28:29.817898 sshd[5483]: Accepted publickey for core from 10.200.16.10 port 51868 ssh2: RSA SHA256:GgVgzeKdNz/6ufBGEfuPWDEbcS5ymkddUfBaUbl3B1Q Nov 6 00:28:29.820088 sshd-session[5483]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:28:29.827341 systemd-logind[1674]: New session 15 of user core. Nov 6 00:28:29.829075 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 6 00:28:30.310690 sshd[5486]: Connection closed by 10.200.16.10 port 51868 Nov 6 00:28:30.311209 sshd-session[5483]: pam_unix(sshd:session): session closed for user core Nov 6 00:28:30.314099 systemd[1]: sshd@12-10.200.8.20:22-10.200.16.10:51868.service: Deactivated successfully. Nov 6 00:28:30.315802 systemd[1]: session-15.scope: Deactivated successfully. Nov 6 00:28:30.316577 systemd-logind[1674]: Session 15 logged out. Waiting for processes to exit. Nov 6 00:28:30.318012 systemd-logind[1674]: Removed session 15. 
Nov 6 00:28:31.628571 containerd[1686]: time="2025-11-06T00:28:31.628314409Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:28:31.886083 containerd[1686]: time="2025-11-06T00:28:31.885987045Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:28:31.888478 containerd[1686]: time="2025-11-06T00:28:31.888440091Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:28:31.888645 containerd[1686]: time="2025-11-06T00:28:31.888522159Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:28:31.888674 kubelet[3152]: E1106 00:28:31.888647 3152 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:28:31.889608 kubelet[3152]: E1106 00:28:31.888682 3152 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:28:31.889608 kubelet[3152]: E1106 00:28:31.888765 3152 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-8559b785f9-fhvmw_calico-apiserver(c86a3ffd-cf2f-4e08-9736-f5e39ae366f1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:28:31.889608 kubelet[3152]: E1106 00:28:31.888805 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8559b785f9-fhvmw" podUID="c86a3ffd-cf2f-4e08-9736-f5e39ae366f1" Nov 6 00:28:35.428116 systemd[1]: Started sshd@13-10.200.8.20:22-10.200.16.10:39248.service - OpenSSH per-connection server daemon (10.200.16.10:39248). Nov 6 00:28:36.073242 sshd[5513]: Accepted publickey for core from 10.200.16.10 port 39248 ssh2: RSA SHA256:GgVgzeKdNz/6ufBGEfuPWDEbcS5ymkddUfBaUbl3B1Q Nov 6 00:28:36.074694 sshd-session[5513]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:28:36.082468 systemd-logind[1674]: New session 16 of user core. Nov 6 00:28:36.086475 systemd[1]: Started session-16.scope - Session 16 of User core. 
Nov 6 00:28:36.565946 sshd[5523]: Connection closed by 10.200.16.10 port 39248 Nov 6 00:28:36.567217 sshd-session[5513]: pam_unix(sshd:session): session closed for user core Nov 6 00:28:36.570118 systemd-logind[1674]: Session 16 logged out. Waiting for processes to exit. Nov 6 00:28:36.570413 systemd[1]: sshd@13-10.200.8.20:22-10.200.16.10:39248.service: Deactivated successfully. Nov 6 00:28:36.572500 systemd[1]: session-16.scope: Deactivated successfully. Nov 6 00:28:36.573925 systemd-logind[1674]: Removed session 16. Nov 6 00:28:36.631909 containerd[1686]: time="2025-11-06T00:28:36.631559823Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 6 00:28:36.632465 kubelet[3152]: E1106 00:28:36.632428 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6455d868-gqpdl" podUID="5f9595dd-def0-470b-b230-616c1ccc6ebf" Nov 6 00:28:36.867497 containerd[1686]: time="2025-11-06T00:28:36.867412535Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:28:36.870222 containerd[1686]: time="2025-11-06T00:28:36.870189687Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 6 00:28:36.870272 containerd[1686]: time="2025-11-06T00:28:36.870245317Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 6 00:28:36.870580 kubelet[3152]: E1106 00:28:36.870381 3152 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 00:28:36.870580 kubelet[3152]: E1106 00:28:36.870413 3152 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 00:28:36.870580 kubelet[3152]: E1106 00:28:36.870479 3152 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-4j9vt_calico-system(9c757a1d-95f3-4cbd-9adf-b65065b2eb8c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 6 00:28:36.871899 containerd[1686]: time="2025-11-06T00:28:36.871218434Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 6 00:28:37.112754 containerd[1686]: time="2025-11-06T00:28:37.112724542Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:28:37.116166 containerd[1686]: time="2025-11-06T00:28:37.116130730Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 6 00:28:37.116256 containerd[1686]: time="2025-11-06T00:28:37.116156489Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 6 00:28:37.116335 kubelet[3152]: E1106 00:28:37.116300 3152 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 00:28:37.116388 kubelet[3152]: E1106 00:28:37.116346 3152 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 00:28:37.116420 kubelet[3152]: E1106 00:28:37.116408 3152 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-4j9vt_calico-system(9c757a1d-95f3-4cbd-9adf-b65065b2eb8c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 6 00:28:37.116588 kubelet[3152]: E1106 00:28:37.116449 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4j9vt" podUID="9c757a1d-95f3-4cbd-9adf-b65065b2eb8c" Nov 6 00:28:38.628818 kubelet[3152]: E1106 00:28:38.627963 3152 pod_workers.go:1324] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8559b785f9-n2pht" podUID="c9883439-5e85-428a-8c5e-1baa916caf76" Nov 6 00:28:40.628900 containerd[1686]: time="2025-11-06T00:28:40.628842449Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 6 00:28:40.866197 containerd[1686]: time="2025-11-06T00:28:40.866144317Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:28:40.868855 containerd[1686]: time="2025-11-06T00:28:40.868817188Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 6 00:28:40.869229 containerd[1686]: time="2025-11-06T00:28:40.869212378Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 6 00:28:40.869365 kubelet[3152]: E1106 00:28:40.869337 3152 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 00:28:40.869615 kubelet[3152]: E1106 00:28:40.869375 3152 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 00:28:40.869615 kubelet[3152]: E1106 00:28:40.869439 3152 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-2k7xr_calico-system(4902c4e4-3977-4e0d-b87b-89acc6926de6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 6 00:28:40.869615 kubelet[3152]: E1106 00:28:40.869468 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-2k7xr" podUID="4902c4e4-3977-4e0d-b87b-89acc6926de6" Nov 6 00:28:41.681124 systemd[1]: Started sshd@14-10.200.8.20:22-10.200.16.10:43438.service - OpenSSH per-connection server daemon (10.200.16.10:43438). 
Nov 6 00:28:42.309917 sshd[5536]: Accepted publickey for core from 10.200.16.10 port 43438 ssh2: RSA SHA256:GgVgzeKdNz/6ufBGEfuPWDEbcS5ymkddUfBaUbl3B1Q Nov 6 00:28:42.310924 sshd-session[5536]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:28:42.314931 systemd-logind[1674]: New session 17 of user core. Nov 6 00:28:42.321007 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 6 00:28:42.627961 containerd[1686]: time="2025-11-06T00:28:42.627733979Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:28:42.825281 sshd[5539]: Connection closed by 10.200.16.10 port 43438 Nov 6 00:28:42.826374 sshd-session[5536]: pam_unix(sshd:session): session closed for user core Nov 6 00:28:42.830270 systemd[1]: sshd@14-10.200.8.20:22-10.200.16.10:43438.service: Deactivated successfully. Nov 6 00:28:42.832642 systemd[1]: session-17.scope: Deactivated successfully. Nov 6 00:28:42.834062 systemd-logind[1674]: Session 17 logged out. Waiting for processes to exit. Nov 6 00:28:42.837224 systemd-logind[1674]: Removed session 17. Nov 6 00:28:42.870644 containerd[1686]: time="2025-11-06T00:28:42.870613885Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:28:42.873646 containerd[1686]: time="2025-11-06T00:28:42.873606088Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:28:42.873724 containerd[1686]: time="2025-11-06T00:28:42.873681567Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:28:42.874025 kubelet[3152]: E1106 00:28:42.873929 3152 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:28:42.874025 kubelet[3152]: E1106 00:28:42.873982 3152 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:28:42.874508 kubelet[3152]: E1106 00:28:42.874146 3152 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6546975659-nxnh9_calico-apiserver(797f66e1-c3e4-4d4b-8032-18c5d22ec25c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:28:42.874508 kubelet[3152]: E1106 00:28:42.874457 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6546975659-nxnh9" podUID="797f66e1-c3e4-4d4b-8032-18c5d22ec25c" Nov 6 00:28:42.941082 systemd[1]: Started sshd@15-10.200.8.20:22-10.200.16.10:43452.service - OpenSSH per-connection server daemon (10.200.16.10:43452). Nov 6 00:28:43.581777 sshd[5551]: Accepted publickey for core from 10.200.16.10 port 43452 ssh2: RSA SHA256:GgVgzeKdNz/6ufBGEfuPWDEbcS5ymkddUfBaUbl3B1Q Nov 6 00:28:43.582163 sshd-session[5551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:28:43.585549 systemd-logind[1674]: New session 18 of user core. Nov 6 00:28:43.592038 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 6 00:28:43.628844 containerd[1686]: time="2025-11-06T00:28:43.628818175Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 6 00:28:43.864672 containerd[1686]: time="2025-11-06T00:28:43.864576349Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:28:43.867123 containerd[1686]: time="2025-11-06T00:28:43.867080559Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 6 00:28:43.867290 containerd[1686]: time="2025-11-06T00:28:43.867094642Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 6 00:28:43.867387 kubelet[3152]: E1106 00:28:43.867362 3152 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 00:28:43.867437 kubelet[3152]: E1106 00:28:43.867394 3152 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 00:28:43.867741 kubelet[3152]: E1106 00:28:43.867478 3152 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-cdd9bb7bc-2b49s_calico-system(62834f18-0344-4626-bcdf-b650cdc6187d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 6 00:28:43.867844 kubelet[3152]: E1106 00:28:43.867760 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-cdd9bb7bc-2b49s" podUID="62834f18-0344-4626-bcdf-b650cdc6187d" Nov 6 00:28:44.182822 sshd[5554]: Connection closed by 10.200.16.10 port 43452 Nov 6 00:28:44.181724 sshd-session[5551]: pam_unix(sshd:session): session closed for user core Nov 6 00:28:44.184676 systemd-logind[1674]: Session 18 logged out. Waiting for processes to exit. Nov 6 00:28:44.185418 systemd[1]: sshd@15-10.200.8.20:22-10.200.16.10:43452.service: Deactivated successfully. Nov 6 00:28:44.188848 systemd[1]: session-18.scope: Deactivated successfully. Nov 6 00:28:44.192101 systemd-logind[1674]: Removed session 18. Nov 6 00:28:44.300103 systemd[1]: Started sshd@16-10.200.8.20:22-10.200.16.10:43466.service - OpenSSH per-connection server daemon (10.200.16.10:43466). Nov 6 00:28:44.629327 kubelet[3152]: E1106 00:28:44.628516 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8559b785f9-fhvmw" podUID="c86a3ffd-cf2f-4e08-9736-f5e39ae366f1" Nov 6 00:28:44.935945 sshd[5564]: Accepted publickey for core from 10.200.16.10 port 43466 ssh2: RSA SHA256:GgVgzeKdNz/6ufBGEfuPWDEbcS5ymkddUfBaUbl3B1Q Nov 6 00:28:44.937672 sshd-session[5564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:28:44.941995 systemd-logind[1674]: New session 19 of user core. Nov 6 00:28:44.951033 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 6 00:28:46.055506 sshd[5567]: Connection closed by 10.200.16.10 port 43466 Nov 6 00:28:46.056474 sshd-session[5564]: pam_unix(sshd:session): session closed for user core Nov 6 00:28:46.059515 systemd-logind[1674]: Session 19 logged out. Waiting for processes to exit. Nov 6 00:28:46.061421 systemd[1]: sshd@16-10.200.8.20:22-10.200.16.10:43466.service: Deactivated successfully. Nov 6 00:28:46.064322 systemd[1]: session-19.scope: Deactivated successfully. Nov 6 00:28:46.066337 systemd-logind[1674]: Removed session 19. Nov 6 00:28:46.171150 systemd[1]: Started sshd@17-10.200.8.20:22-10.200.16.10:43478.service - OpenSSH per-connection server daemon (10.200.16.10:43478). Nov 6 00:28:46.805607 sshd[5584]: Accepted publickey for core from 10.200.16.10 port 43478 ssh2: RSA SHA256:GgVgzeKdNz/6ufBGEfuPWDEbcS5ymkddUfBaUbl3B1Q Nov 6 00:28:46.806013 sshd-session[5584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:28:46.810226 systemd-logind[1674]: New session 20 of user core. Nov 6 00:28:46.820017 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 6 00:28:47.491043 sshd[5587]: Connection closed by 10.200.16.10 port 43478 Nov 6 00:28:47.491595 sshd-session[5584]: pam_unix(sshd:session): session closed for user core Nov 6 00:28:47.495374 systemd[1]: sshd@17-10.200.8.20:22-10.200.16.10:43478.service: Deactivated successfully. Nov 6 00:28:47.499307 systemd[1]: session-20.scope: Deactivated successfully. Nov 6 00:28:47.502368 systemd-logind[1674]: Session 20 logged out. Waiting for processes to exit. 
Nov 6 00:28:47.504768 systemd-logind[1674]: Removed session 20. Nov 6 00:28:47.610252 systemd[1]: Started sshd@18-10.200.8.20:22-10.200.16.10:43482.service - OpenSSH per-connection server daemon (10.200.16.10:43482). Nov 6 00:28:48.245573 sshd[5599]: Accepted publickey for core from 10.200.16.10 port 43482 ssh2: RSA SHA256:GgVgzeKdNz/6ufBGEfuPWDEbcS5ymkddUfBaUbl3B1Q Nov 6 00:28:48.246438 sshd-session[5599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:28:48.250213 systemd-logind[1674]: New session 21 of user core. Nov 6 00:28:48.254027 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 6 00:28:48.750343 sshd[5602]: Connection closed by 10.200.16.10 port 43482 Nov 6 00:28:48.750834 sshd-session[5599]: pam_unix(sshd:session): session closed for user core Nov 6 00:28:48.754635 systemd-logind[1674]: Session 21 logged out. Waiting for processes to exit. Nov 6 00:28:48.755488 systemd[1]: sshd@18-10.200.8.20:22-10.200.16.10:43482.service: Deactivated successfully. Nov 6 00:28:48.758621 systemd[1]: session-21.scope: Deactivated successfully. Nov 6 00:28:48.764266 systemd-logind[1674]: Removed session 21. Nov 6 00:28:51.627849 kubelet[3152]: E1106 00:28:51.627245 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8559b785f9-n2pht" podUID="c9883439-5e85-428a-8c5e-1baa916caf76" Nov 6 00:28:51.629694 kubelet[3152]: E1106 00:28:51.629652 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4j9vt" podUID="9c757a1d-95f3-4cbd-9adf-b65065b2eb8c" Nov 6 00:28:51.629851 kubelet[3152]: E1106 00:28:51.629828 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6455d868-gqpdl" podUID="5f9595dd-def0-470b-b230-616c1ccc6ebf" Nov 6 00:28:52.629749 kubelet[3152]: E1106 00:28:52.629266 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-2k7xr" podUID="4902c4e4-3977-4e0d-b87b-89acc6926de6" Nov 6 00:28:53.867297 systemd[1]: Started sshd@19-10.200.8.20:22-10.200.16.10:53880.service - OpenSSH per-connection server daemon (10.200.16.10:53880). Nov 6 00:28:54.499206 sshd[5616]: Accepted publickey for core from 10.200.16.10 port 53880 ssh2: RSA SHA256:GgVgzeKdNz/6ufBGEfuPWDEbcS5ymkddUfBaUbl3B1Q Nov 6 00:28:54.500184 sshd-session[5616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:28:54.504270 systemd-logind[1674]: New session 22 of user core. Nov 6 00:28:54.509032 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 6 00:28:55.012441 sshd[5620]: Connection closed by 10.200.16.10 port 53880 Nov 6 00:28:55.014054 sshd-session[5616]: pam_unix(sshd:session): session closed for user core Nov 6 00:28:55.018220 systemd-logind[1674]: Session 22 logged out. Waiting for processes to exit. Nov 6 00:28:55.018798 systemd[1]: sshd@19-10.200.8.20:22-10.200.16.10:53880.service: Deactivated successfully. Nov 6 00:28:55.020378 systemd[1]: session-22.scope: Deactivated successfully. Nov 6 00:28:55.021437 systemd-logind[1674]: Removed session 22. 
Nov 6 00:28:57.627910 kubelet[3152]: E1106 00:28:57.627728 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6546975659-nxnh9" podUID="797f66e1-c3e4-4d4b-8032-18c5d22ec25c" Nov 6 00:28:58.630082 kubelet[3152]: E1106 00:28:58.629422 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-cdd9bb7bc-2b49s" podUID="62834f18-0344-4626-bcdf-b650cdc6187d" Nov 6 00:28:58.634619 kubelet[3152]: E1106 00:28:58.634572 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8559b785f9-fhvmw" podUID="c86a3ffd-cf2f-4e08-9736-f5e39ae366f1" Nov 6 00:28:58.832797 containerd[1686]: time="2025-11-06T00:28:58.832761849Z" level=info msg="TaskExit event in podsandbox handler container_id:\"80581d922573d0643232f6b1aec8e4023217d6ef06e716dd6684238cc8d83981\" id:\"b18c9ff9b2076bb49e4455e366cce46d045002ae29c0e04e7fabe9c202ca4f24\" pid:5644 exited_at:{seconds:1762388938 nanos:832546897}" Nov 6 00:29:00.136759 systemd[1]: Started sshd@20-10.200.8.20:22-10.200.16.10:52462.service - OpenSSH per-connection server daemon (10.200.16.10:52462). Nov 6 00:29:00.764819 sshd[5657]: Accepted publickey for core from 10.200.16.10 port 52462 ssh2: RSA SHA256:GgVgzeKdNz/6ufBGEfuPWDEbcS5ymkddUfBaUbl3B1Q Nov 6 00:29:00.765840 sshd-session[5657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:29:00.769774 systemd-logind[1674]: New session 23 of user core. Nov 6 00:29:00.775092 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 6 00:29:01.301401 sshd[5660]: Connection closed by 10.200.16.10 port 52462 Nov 6 00:29:01.301856 sshd-session[5657]: pam_unix(sshd:session): session closed for user core Nov 6 00:29:01.304807 systemd[1]: sshd@20-10.200.8.20:22-10.200.16.10:52462.service: Deactivated successfully. Nov 6 00:29:01.306389 systemd[1]: session-23.scope: Deactivated successfully. Nov 6 00:29:01.307371 systemd-logind[1674]: Session 23 logged out. Waiting for processes to exit. Nov 6 00:29:01.308203 systemd-logind[1674]: Removed session 23. 
Nov 6 00:29:02.627421 kubelet[3152]: E1106 00:29:02.627368 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8559b785f9-n2pht" podUID="c9883439-5e85-428a-8c5e-1baa916caf76" Nov 6 00:29:02.629518 kubelet[3152]: E1106 00:29:02.629436 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6455d868-gqpdl" podUID="5f9595dd-def0-470b-b230-616c1ccc6ebf" Nov 6 00:29:03.630181 kubelet[3152]: E1106 00:29:03.630136 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4j9vt" podUID="9c757a1d-95f3-4cbd-9adf-b65065b2eb8c" Nov 6 00:29:06.417150 systemd[1]: Started sshd@21-10.200.8.20:22-10.200.16.10:52474.service - OpenSSH per-connection server daemon (10.200.16.10:52474). Nov 6 00:29:07.051981 sshd[5673]: Accepted publickey for core from 10.200.16.10 port 52474 ssh2: RSA SHA256:GgVgzeKdNz/6ufBGEfuPWDEbcS5ymkddUfBaUbl3B1Q Nov 6 00:29:07.052917 sshd-session[5673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:29:07.056832 systemd-logind[1674]: New session 24 of user core. Nov 6 00:29:07.064026 systemd[1]: Started session-24.scope - Session 24 of User core. 
Nov 6 00:29:07.542277 sshd[5676]: Connection closed by 10.200.16.10 port 52474 Nov 6 00:29:07.542773 sshd-session[5673]: pam_unix(sshd:session): session closed for user core Nov 6 00:29:07.545601 systemd[1]: sshd@21-10.200.8.20:22-10.200.16.10:52474.service: Deactivated successfully. Nov 6 00:29:07.547356 systemd[1]: session-24.scope: Deactivated successfully. Nov 6 00:29:07.548032 systemd-logind[1674]: Session 24 logged out. Waiting for processes to exit. Nov 6 00:29:07.549139 systemd-logind[1674]: Removed session 24. Nov 6 00:29:07.628428 kubelet[3152]: E1106 00:29:07.628390 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-2k7xr" podUID="4902c4e4-3977-4e0d-b87b-89acc6926de6" Nov 6 00:29:10.629920 kubelet[3152]: E1106 00:29:10.628852 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-cdd9bb7bc-2b49s" podUID="62834f18-0344-4626-bcdf-b650cdc6187d" Nov 6 00:29:11.627282 kubelet[3152]: E1106 00:29:11.627244 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6546975659-nxnh9" podUID="797f66e1-c3e4-4d4b-8032-18c5d22ec25c" Nov 6 00:29:12.628850 kubelet[3152]: E1106 00:29:12.627766 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8559b785f9-fhvmw" podUID="c86a3ffd-cf2f-4e08-9736-f5e39ae366f1" Nov 6 00:29:12.657367 systemd[1]: Started sshd@22-10.200.8.20:22-10.200.16.10:54782.service - OpenSSH per-connection server daemon (10.200.16.10:54782). 
Nov 6 00:29:13.304821 sshd[5688]: Accepted publickey for core from 10.200.16.10 port 54782 ssh2: RSA SHA256:GgVgzeKdNz/6ufBGEfuPWDEbcS5ymkddUfBaUbl3B1Q
Nov 6 00:29:13.305235 sshd-session[5688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 00:29:13.308640 systemd-logind[1674]: New session 25 of user core.
Nov 6 00:29:13.317035 systemd[1]: Started session-25.scope - Session 25 of User core.
Nov 6 00:29:13.799837 sshd[5691]: Connection closed by 10.200.16.10 port 54782
Nov 6 00:29:13.800278 sshd-session[5688]: pam_unix(sshd:session): session closed for user core
Nov 6 00:29:13.803080 systemd[1]: sshd@22-10.200.8.20:22-10.200.16.10:54782.service: Deactivated successfully.
Nov 6 00:29:13.804794 systemd[1]: session-25.scope: Deactivated successfully.
Nov 6 00:29:13.805948 systemd-logind[1674]: Session 25 logged out. Waiting for processes to exit.
Nov 6 00:29:13.807222 systemd-logind[1674]: Removed session 25.
Nov 6 00:29:14.631104 kubelet[3152]: E1106 00:29:14.631057 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6455d868-gqpdl" podUID="5f9595dd-def0-470b-b230-616c1ccc6ebf"
Nov 6 00:29:14.633369 kubelet[3152]: E1106 00:29:14.633335 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4j9vt" podUID="9c757a1d-95f3-4cbd-9adf-b65065b2eb8c"
Nov 6 00:29:15.629904 kubelet[3152]: E1106 00:29:15.629535 3152 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8559b785f9-n2pht" podUID="c9883439-5e85-428a-8c5e-1baa916caf76"