Nov 24 06:45:47.874432 kernel: Linux version 6.12.58-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Sun Nov 23 20:49:05 -00 2025 Nov 24 06:45:47.874455 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a5a093dfb613b73c778207057706f88d5254927e05ae90617f314b938bd34a14 Nov 24 06:45:47.874463 kernel: BIOS-provided physical RAM map: Nov 24 06:45:47.874470 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000002ffff] usable Nov 24 06:45:47.874477 kernel: BIOS-e820: [mem 0x0000000000030000-0x000000000004ffff] reserved Nov 24 06:45:47.874485 kernel: BIOS-e820: [mem 0x0000000000050000-0x000000000009efff] usable Nov 24 06:45:47.874493 kernel: BIOS-e820: [mem 0x000000000009f000-0x000000000009ffff] reserved Nov 24 06:45:47.874500 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009b8ecfff] usable Nov 24 06:45:47.874506 kernel: BIOS-e820: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved Nov 24 06:45:47.874513 kernel: BIOS-e820: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data Nov 24 06:45:47.874520 kernel: BIOS-e820: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS Nov 24 06:45:47.874527 kernel: BIOS-e820: [mem 0x000000009bbff000-0x000000009bfb0fff] usable Nov 24 06:45:47.874533 kernel: BIOS-e820: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved Nov 24 06:45:47.874540 kernel: BIOS-e820: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS Nov 24 06:45:47.874550 kernel: BIOS-e820: [mem 0x000000009bfb7000-0x000000009bffffff] usable Nov 24 06:45:47.874557 kernel: BIOS-e820: [mem 0x000000009c000000-0x000000009cffffff] reserved Nov 24 06:45:47.874564 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Nov 24 06:45:47.874571 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Nov 24 06:45:47.874578 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Nov 24 06:45:47.874587 kernel: NX (Execute Disable) protection: active Nov 24 06:45:47.874594 kernel: APIC: Static calls initialized Nov 24 06:45:47.874601 kernel: e820: update [mem 0x9a13f018-0x9a148c57] usable ==> usable Nov 24 06:45:47.874608 kernel: e820: update [mem 0x9a102018-0x9a13ee57] usable ==> usable Nov 24 06:45:47.874615 kernel: extended physical RAM map: Nov 24 06:45:47.874622 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000002ffff] usable Nov 24 06:45:47.874630 kernel: reserve setup_data: [mem 0x0000000000030000-0x000000000004ffff] reserved Nov 24 06:45:47.874637 kernel: reserve setup_data: [mem 0x0000000000050000-0x000000000009efff] usable Nov 24 06:45:47.874644 kernel: reserve setup_data: [mem 0x000000000009f000-0x000000000009ffff] reserved Nov 24 06:45:47.874651 kernel: reserve setup_data: [mem 0x0000000000100000-0x000000009a102017] usable Nov 24 06:45:47.874658 kernel: reserve setup_data: [mem 0x000000009a102018-0x000000009a13ee57] usable Nov 24 06:45:47.874667 kernel: reserve setup_data: [mem 0x000000009a13ee58-0x000000009a13f017] usable Nov 24 06:45:47.874674 kernel: reserve setup_data: [mem 0x000000009a13f018-0x000000009a148c57] usable Nov 24 06:45:47.874681 kernel: reserve setup_data: [mem 0x000000009a148c58-0x000000009b8ecfff] usable Nov 24 06:45:47.874688 kernel: reserve setup_data: [mem 0x000000009b8ed000-0x000000009bb6cfff] 
reserved Nov 24 06:45:47.874695 kernel: reserve setup_data: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data Nov 24 06:45:47.874702 kernel: reserve setup_data: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS Nov 24 06:45:47.874710 kernel: reserve setup_data: [mem 0x000000009bbff000-0x000000009bfb0fff] usable Nov 24 06:45:47.874717 kernel: reserve setup_data: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved Nov 24 06:45:47.874724 kernel: reserve setup_data: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS Nov 24 06:45:47.874731 kernel: reserve setup_data: [mem 0x000000009bfb7000-0x000000009bffffff] usable Nov 24 06:45:47.874743 kernel: reserve setup_data: [mem 0x000000009c000000-0x000000009cffffff] reserved Nov 24 06:45:47.874750 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Nov 24 06:45:47.874758 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Nov 24 06:45:47.874765 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Nov 24 06:45:47.874772 kernel: efi: EFI v2.7 by EDK II Nov 24 06:45:47.874780 kernel: efi: SMBIOS=0x9b9d5000 ACPI=0x9bb7e000 ACPI 2.0=0x9bb7e014 MEMATTR=0x9a1af018 RNG=0x9bb73018 Nov 24 06:45:47.874789 kernel: random: crng init done Nov 24 06:45:47.874797 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7 Nov 24 06:45:47.874804 kernel: secureboot: Secure boot enabled Nov 24 06:45:47.874811 kernel: SMBIOS 2.8 present. Nov 24 06:45:47.874819 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Nov 24 06:45:47.874826 kernel: DMI: Memory slots populated: 1/1 Nov 24 06:45:47.874833 kernel: Hypervisor detected: KVM Nov 24 06:45:47.874841 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000 Nov 24 06:45:47.874848 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Nov 24 06:45:47.874855 kernel: kvm-clock: using sched offset of 5320756824 cycles Nov 24 06:45:47.874863 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Nov 24 06:45:47.874871 kernel: tsc: Detected 2794.750 MHz processor Nov 24 06:45:47.874881 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Nov 24 06:45:47.874888 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Nov 24 06:45:47.874896 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000 Nov 24 06:45:47.874903 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Nov 24 06:45:47.874911 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Nov 24 06:45:47.874918 kernel: Using GB pages for direct mapping Nov 24 06:45:47.874926 kernel: ACPI: Early table checksum verification disabled Nov 24 06:45:47.874933 kernel: ACPI: RSDP 0x000000009BB7E014 000024 (v02 BOCHS ) Nov 24 06:45:47.874941 kernel: ACPI: XSDT 0x000000009BB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Nov 24 06:45:47.874950 kernel: ACPI: FACP 0x000000009BB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Nov 24 06:45:47.874958 kernel: ACPI: DSDT 0x000000009BB7A000 002237 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 24 06:45:47.874965 kernel: ACPI: FACS 0x000000009BBDD000 000040 Nov 24 06:45:47.874973 kernel: ACPI: APIC 0x000000009BB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 24 06:45:47.874980 kernel: ACPI: HPET 0x000000009BB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 24 06:45:47.874988 kernel: ACPI: MCFG 0x000000009BB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 24 06:45:47.874995 
kernel: ACPI: WAET 0x000000009BB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 24 06:45:47.875003 kernel: ACPI: BGRT 0x000000009BB74000 000038 (v01 INTEL EDK2 00000002 01000013) Nov 24 06:45:47.875012 kernel: ACPI: Reserving FACP table memory at [mem 0x9bb79000-0x9bb790f3] Nov 24 06:45:47.875020 kernel: ACPI: Reserving DSDT table memory at [mem 0x9bb7a000-0x9bb7c236] Nov 24 06:45:47.875027 kernel: ACPI: Reserving FACS table memory at [mem 0x9bbdd000-0x9bbdd03f] Nov 24 06:45:47.875035 kernel: ACPI: Reserving APIC table memory at [mem 0x9bb78000-0x9bb7808f] Nov 24 06:45:47.875042 kernel: ACPI: Reserving HPET table memory at [mem 0x9bb77000-0x9bb77037] Nov 24 06:45:47.875050 kernel: ACPI: Reserving MCFG table memory at [mem 0x9bb76000-0x9bb7603b] Nov 24 06:45:47.875057 kernel: ACPI: Reserving WAET table memory at [mem 0x9bb75000-0x9bb75027] Nov 24 06:45:47.875064 kernel: ACPI: Reserving BGRT table memory at [mem 0x9bb74000-0x9bb74037] Nov 24 06:45:47.875072 kernel: No NUMA configuration found Nov 24 06:45:47.875079 kernel: Faking a node at [mem 0x0000000000000000-0x000000009bffffff] Nov 24 06:45:47.875089 kernel: NODE_DATA(0) allocated [mem 0x9bf57dc0-0x9bf5efff] Nov 24 06:45:47.875097 kernel: Zone ranges: Nov 24 06:45:47.875104 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 24 06:45:47.875112 kernel: DMA32 [mem 0x0000000001000000-0x000000009bffffff] Nov 24 06:45:47.875119 kernel: Normal empty Nov 24 06:45:47.875126 kernel: Device empty Nov 24 06:45:47.875134 kernel: Movable zone start for each node Nov 24 06:45:47.875141 kernel: Early memory node ranges Nov 24 06:45:47.875149 kernel: node 0: [mem 0x0000000000001000-0x000000000002ffff] Nov 24 06:45:47.875158 kernel: node 0: [mem 0x0000000000050000-0x000000000009efff] Nov 24 06:45:47.875165 kernel: node 0: [mem 0x0000000000100000-0x000000009b8ecfff] Nov 24 06:45:47.875181 kernel: node 0: [mem 0x000000009bbff000-0x000000009bfb0fff] Nov 24 06:45:47.875188 kernel: node 0: [mem 0x000000009bfb7000-0x000000009bffffff] Nov 24 06:45:47.875196 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009bffffff] Nov 24 06:45:47.875204 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 24 06:45:47.875212 kernel: On node 0, zone DMA: 32 pages in unavailable ranges Nov 24 06:45:47.875219 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Nov 24 06:45:47.875227 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Nov 24 06:45:47.875236 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Nov 24 06:45:47.875244 kernel: On node 0, zone DMA32: 16384 pages in unavailable ranges Nov 24 06:45:47.875251 kernel: ACPI: PM-Timer IO Port: 0x608 Nov 24 06:45:47.875258 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Nov 24 06:45:47.875266 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Nov 24 06:45:47.875273 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Nov 24 06:45:47.875281 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Nov 24 06:45:47.875300 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 24 06:45:47.875308 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Nov 24 06:45:47.875318 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Nov 24 06:45:47.875326 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 24 06:45:47.875333 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Nov 24 06:45:47.875341 kernel: TSC deadline timer available Nov 24 
06:45:47.875348 kernel: CPU topo: Max. logical packages: 1 Nov 24 06:45:47.875356 kernel: CPU topo: Max. logical dies: 1 Nov 24 06:45:47.875370 kernel: CPU topo: Max. dies per package: 1 Nov 24 06:45:47.875379 kernel: CPU topo: Max. threads per core: 1 Nov 24 06:45:47.875387 kernel: CPU topo: Num. cores per package: 4 Nov 24 06:45:47.875394 kernel: CPU topo: Num. threads per package: 4 Nov 24 06:45:47.875402 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Nov 24 06:45:47.875410 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Nov 24 06:45:47.875417 kernel: kvm-guest: KVM setup pv remote TLB flush Nov 24 06:45:47.875427 kernel: kvm-guest: setup PV sched yield Nov 24 06:45:47.875435 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices Nov 24 06:45:47.875443 kernel: Booting paravirtualized kernel on KVM Nov 24 06:45:47.875451 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 24 06:45:47.875459 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Nov 24 06:45:47.875468 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Nov 24 06:45:47.875476 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Nov 24 06:45:47.875484 kernel: pcpu-alloc: [0] 0 1 2 3 Nov 24 06:45:47.875491 kernel: kvm-guest: PV spinlocks enabled Nov 24 06:45:47.875499 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Nov 24 06:45:47.875508 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a5a093dfb613b73c778207057706f88d5254927e05ae90617f314b938bd34a14 Nov 24 06:45:47.875516 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Nov 24 06:45:47.875524 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 24 06:45:47.875534 kernel: Fallback order for Node 0: 0 Nov 24 06:45:47.875542 kernel: Built 1 zonelists, mobility grouping on. Total pages: 638054 Nov 24 06:45:47.875550 kernel: Policy zone: DMA32 Nov 24 06:45:47.875557 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 24 06:45:47.875565 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Nov 24 06:45:47.875573 kernel: ftrace: allocating 40103 entries in 157 pages Nov 24 06:45:47.875581 kernel: ftrace: allocated 157 pages with 5 groups Nov 24 06:45:47.875588 kernel: Dynamic Preempt: voluntary Nov 24 06:45:47.875596 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 24 06:45:47.875606 kernel: rcu: RCU event tracing is enabled. Nov 24 06:45:47.875614 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Nov 24 06:45:47.875622 kernel: Trampoline variant of Tasks RCU enabled. Nov 24 06:45:47.875630 kernel: Rude variant of Tasks RCU enabled. Nov 24 06:45:47.875638 kernel: Tracing variant of Tasks RCU enabled. Nov 24 06:45:47.875646 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 24 06:45:47.875653 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Nov 24 06:45:47.875661 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Nov 24 06:45:47.875669 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. 
Nov 24 06:45:47.875679 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Nov 24 06:45:47.875687 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Nov 24 06:45:47.875695 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Nov 24 06:45:47.875703 kernel: Console: colour dummy device 80x25 Nov 24 06:45:47.875711 kernel: printk: legacy console [ttyS0] enabled Nov 24 06:45:47.875719 kernel: ACPI: Core revision 20240827 Nov 24 06:45:47.875727 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Nov 24 06:45:47.875734 kernel: APIC: Switch to symmetric I/O mode setup Nov 24 06:45:47.875742 kernel: x2apic enabled Nov 24 06:45:47.875752 kernel: APIC: Switched APIC routing to: physical x2apic Nov 24 06:45:47.875760 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Nov 24 06:45:47.875768 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Nov 24 06:45:47.875786 kernel: kvm-guest: setup PV IPIs Nov 24 06:45:47.875802 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Nov 24 06:45:47.875818 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns Nov 24 06:45:47.875841 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750) Nov 24 06:45:47.875849 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Nov 24 06:45:47.875857 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Nov 24 06:45:47.875868 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Nov 24 06:45:47.875877 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 24 06:45:47.875884 kernel: Spectre V2 : Mitigation: Retpolines Nov 24 06:45:47.875901 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Nov 24 06:45:47.875921 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Nov 24 06:45:47.875937 kernel: active return thunk: retbleed_return_thunk Nov 24 06:45:47.875945 kernel: RETBleed: Mitigation: untrained return thunk Nov 24 06:45:47.875953 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Nov 24 06:45:47.875961 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Nov 24 06:45:47.875971 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Nov 24 06:45:47.875980 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Nov 24 06:45:47.875987 kernel: active return thunk: srso_return_thunk Nov 24 06:45:47.875995 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Nov 24 06:45:47.876003 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 24 06:45:47.876011 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 24 06:45:47.876019 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 24 06:45:47.876026 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 24 06:45:47.876036 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. 
Nov 24 06:45:47.876044 kernel: Freeing SMP alternatives memory: 32K Nov 24 06:45:47.876052 kernel: pid_max: default: 32768 minimum: 301 Nov 24 06:45:47.876059 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Nov 24 06:45:47.876067 kernel: landlock: Up and running. Nov 24 06:45:47.876075 kernel: SELinux: Initializing. Nov 24 06:45:47.876083 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 24 06:45:47.876091 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 24 06:45:47.876099 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Nov 24 06:45:47.876108 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Nov 24 06:45:47.876116 kernel: ... version: 0 Nov 24 06:45:47.876124 kernel: ... bit width: 48 Nov 24 06:45:47.876132 kernel: ... generic registers: 6 Nov 24 06:45:47.876139 kernel: ... value mask: 0000ffffffffffff Nov 24 06:45:47.876147 kernel: ... max period: 00007fffffffffff Nov 24 06:45:47.876155 kernel: ... fixed-purpose events: 0 Nov 24 06:45:47.876163 kernel: ... event mask: 000000000000003f Nov 24 06:45:47.876179 kernel: signal: max sigframe size: 1776 Nov 24 06:45:47.876189 kernel: rcu: Hierarchical SRCU implementation. Nov 24 06:45:47.876197 kernel: rcu: Max phase no-delay instances is 400. Nov 24 06:45:47.876205 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Nov 24 06:45:47.876213 kernel: smp: Bringing up secondary CPUs ... Nov 24 06:45:47.876221 kernel: smpboot: x86: Booting SMP configuration: Nov 24 06:45:47.876228 kernel: .... node #0, CPUs: #1 #2 #3 Nov 24 06:45:47.876236 kernel: smp: Brought up 1 node, 4 CPUs Nov 24 06:45:47.876244 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS) Nov 24 06:45:47.876252 kernel: Memory: 2401020K/2552216K available (14336K kernel code, 2444K rwdata, 26064K rodata, 46200K init, 2560K bss, 145256K reserved, 0K cma-reserved) Nov 24 06:45:47.876262 kernel: devtmpfs: initialized Nov 24 06:45:47.876269 kernel: x86/mm: Memory block size: 128MB Nov 24 06:45:47.876277 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bb7f000-0x9bbfefff] (524288 bytes) Nov 24 06:45:47.876298 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bfb5000-0x9bfb6fff] (8192 bytes) Nov 24 06:45:47.876306 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 24 06:45:47.876314 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Nov 24 06:45:47.876322 kernel: pinctrl core: initialized pinctrl subsystem Nov 24 06:45:47.876329 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 24 06:45:47.876337 kernel: audit: initializing netlink subsys (disabled) Nov 24 06:45:47.876347 kernel: audit: type=2000 audit(1763966745.718:1): state=initialized audit_enabled=0 res=1 Nov 24 06:45:47.876354 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 24 06:45:47.876362 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 24 06:45:47.876370 kernel: cpuidle: using governor menu Nov 24 06:45:47.876378 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 24 06:45:47.876386 kernel: dca service started, version 1.12.1 Nov 24 06:45:47.876394 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff] Nov 24 06:45:47.876401 kernel: PCI: Using configuration type 1 for base access Nov 24 06:45:47.876409 kernel: kprobes: kprobe jump-optimization 
is enabled. All kprobes are optimized if possible. Nov 24 06:45:47.876419 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 24 06:45:47.876427 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Nov 24 06:45:47.876435 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 24 06:45:47.876443 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 24 06:45:47.876450 kernel: ACPI: Added _OSI(Module Device) Nov 24 06:45:47.876458 kernel: ACPI: Added _OSI(Processor Device) Nov 24 06:45:47.876466 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 24 06:45:47.876474 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 24 06:45:47.876481 kernel: ACPI: Interpreter enabled Nov 24 06:45:47.876491 kernel: ACPI: PM: (supports S0 S5) Nov 24 06:45:47.876499 kernel: ACPI: Using IOAPIC for interrupt routing Nov 24 06:45:47.876507 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 24 06:45:47.876515 kernel: PCI: Using E820 reservations for host bridge windows Nov 24 06:45:47.876522 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Nov 24 06:45:47.876530 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Nov 24 06:45:47.876701 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Nov 24 06:45:47.876822 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Nov 24 06:45:47.876941 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Nov 24 06:45:47.876951 kernel: PCI host bridge to bus 0000:00 Nov 24 06:45:47.877075 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Nov 24 06:45:47.877199 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Nov 24 06:45:47.877320 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Nov 24 06:45:47.877430 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] Nov 24 06:45:47.877535 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Nov 24 06:45:47.877646 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] Nov 24 06:45:47.877751 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Nov 24 06:45:47.877893 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Nov 24 06:45:47.878043 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Nov 24 06:45:47.878244 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref] Nov 24 06:45:47.878389 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff] Nov 24 06:45:47.878540 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref] Nov 24 06:45:47.878693 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Nov 24 06:45:47.878824 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Nov 24 06:45:47.878941 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f] Nov 24 06:45:47.879057 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff] Nov 24 06:45:47.879181 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref] Nov 24 06:45:47.879330 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Nov 24 06:45:47.879494 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f] Nov 24 06:45:47.879614 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff] Nov 24 06:45:47.879738 
kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref] Nov 24 06:45:47.879917 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Nov 24 06:45:47.880043 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff] Nov 24 06:45:47.880161 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff] Nov 24 06:45:47.880313 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref] Nov 24 06:45:47.880437 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref] Nov 24 06:45:47.880560 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Nov 24 06:45:47.880675 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Nov 24 06:45:47.880887 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Nov 24 06:45:47.881007 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df] Nov 24 06:45:47.881233 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff] Nov 24 06:45:47.881413 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Nov 24 06:45:47.881565 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf] Nov 24 06:45:47.881577 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Nov 24 06:45:47.881585 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Nov 24 06:45:47.881593 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Nov 24 06:45:47.881601 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Nov 24 06:45:47.881609 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Nov 24 06:45:47.881617 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Nov 24 06:45:47.881625 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Nov 24 06:45:47.881636 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Nov 24 06:45:47.881644 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Nov 24 06:45:47.881652 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Nov 24 06:45:47.881660 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Nov 24 06:45:47.881670 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Nov 24 06:45:47.881680 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Nov 24 06:45:47.881690 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Nov 24 06:45:47.881701 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Nov 24 06:45:47.881712 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Nov 24 06:45:47.881725 kernel: iommu: Default domain type: Translated Nov 24 06:45:47.881736 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 24 06:45:47.881747 kernel: efivars: Registered efivars operations Nov 24 06:45:47.881758 kernel: PCI: Using ACPI for IRQ routing Nov 24 06:45:47.881768 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 24 06:45:47.881779 kernel: e820: reserve RAM buffer [mem 0x0009f000-0x0009ffff] Nov 24 06:45:47.881789 kernel: e820: reserve RAM buffer [mem 0x9a102018-0x9bffffff] Nov 24 06:45:47.881800 kernel: e820: reserve RAM buffer [mem 0x9a13f018-0x9bffffff] Nov 24 06:45:47.881810 kernel: e820: reserve RAM buffer [mem 0x9b8ed000-0x9bffffff] Nov 24 06:45:47.881822 kernel: e820: reserve RAM buffer [mem 0x9bfb1000-0x9bffffff] Nov 24 06:45:47.881944 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Nov 24 06:45:47.882060 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Nov 24 06:45:47.882189 kernel: pci 0000:00:01.0: vgaarb: VGA 
device added: decodes=io+mem,owns=io+mem,locks=none Nov 24 06:45:47.882200 kernel: vgaarb: loaded Nov 24 06:45:47.882208 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Nov 24 06:45:47.882216 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Nov 24 06:45:47.882224 kernel: clocksource: Switched to clocksource kvm-clock Nov 24 06:45:47.882235 kernel: VFS: Disk quotas dquot_6.6.0 Nov 24 06:45:47.882243 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 24 06:45:47.882251 kernel: pnp: PnP ACPI init Nov 24 06:45:47.882394 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved Nov 24 06:45:47.882406 kernel: pnp: PnP ACPI: found 6 devices Nov 24 06:45:47.882414 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 24 06:45:47.882422 kernel: NET: Registered PF_INET protocol family Nov 24 06:45:47.882430 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 24 06:45:47.882441 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Nov 24 06:45:47.882449 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 24 06:45:47.882457 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 24 06:45:47.882465 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Nov 24 06:45:47.882473 kernel: TCP: Hash tables configured (established 32768 bind 32768) Nov 24 06:45:47.882481 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 24 06:45:47.882489 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 24 06:45:47.882497 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 24 06:45:47.882504 kernel: NET: Registered PF_XDP protocol family Nov 24 06:45:47.882624 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window Nov 24 06:45:47.882741 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned Nov 24 06:45:47.882847 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Nov 24 06:45:47.882953 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Nov 24 06:45:47.883081 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Nov 24 06:45:47.883220 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window] Nov 24 06:45:47.883425 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Nov 24 06:45:47.883532 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window] Nov 24 06:45:47.883546 kernel: PCI: CLS 0 bytes, default 64 Nov 24 06:45:47.883555 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns Nov 24 06:45:47.883563 kernel: Initialise system trusted keyrings Nov 24 06:45:47.883571 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Nov 24 06:45:47.883579 kernel: Key type asymmetric registered Nov 24 06:45:47.883587 kernel: Asymmetric key parser 'x509' registered Nov 24 06:45:47.883611 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Nov 24 06:45:47.883625 kernel: io scheduler mq-deadline registered Nov 24 06:45:47.883637 kernel: io scheduler kyber registered Nov 24 06:45:47.883650 kernel: io scheduler bfq registered Nov 24 06:45:47.883661 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 24 06:45:47.883672 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Nov 24 06:45:47.883684 
kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Nov 24 06:45:47.883695 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Nov 24 06:45:47.883706 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 24 06:45:47.883718 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 24 06:45:47.883729 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Nov 24 06:45:47.883740 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Nov 24 06:45:47.883754 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Nov 24 06:45:47.883765 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Nov 24 06:45:47.883890 kernel: rtc_cmos 00:04: RTC can wake from S4 Nov 24 06:45:47.884008 kernel: rtc_cmos 00:04: registered as rtc0 Nov 24 06:45:47.884144 kernel: rtc_cmos 00:04: setting system clock to 2025-11-24T06:45:47 UTC (1763966747) Nov 24 06:45:47.884301 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Nov 24 06:45:47.884314 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Nov 24 06:45:47.884326 kernel: efifb: probing for efifb Nov 24 06:45:47.884334 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Nov 24 06:45:47.884343 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Nov 24 06:45:47.884351 kernel: efifb: scrolling: redraw Nov 24 06:45:47.884359 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Nov 24 06:45:47.884367 kernel: Console: switching to colour frame buffer device 160x50 Nov 24 06:45:47.884379 kernel: fb0: EFI VGA frame buffer device Nov 24 06:45:47.884387 kernel: pstore: Using crash dump compression: deflate Nov 24 06:45:47.884395 kernel: pstore: Registered efi_pstore as persistent store backend Nov 24 06:45:47.884403 kernel: NET: Registered PF_INET6 protocol family Nov 24 06:45:47.884411 kernel: Segment Routing with IPv6 Nov 24 06:45:47.884420 kernel: In-situ OAM (IOAM) with IPv6 Nov 24 06:45:47.884428 kernel: NET: Registered PF_PACKET protocol family Nov 24 06:45:47.884436 kernel: Key type dns_resolver registered Nov 24 06:45:47.884444 kernel: IPI shorthand broadcast: enabled Nov 24 06:45:47.884454 kernel: sched_clock: Marking stable (2804003602, 250792596)->(3103727939, -48931741) Nov 24 06:45:47.884462 kernel: registered taskstats version 1 Nov 24 06:45:47.884470 kernel: Loading compiled-in X.509 certificates Nov 24 06:45:47.884479 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.58-flatcar: 960cbe7f2b1ea74b5c881d6d42eea4d1ac19a607' Nov 24 06:45:47.884487 kernel: Demotion targets for Node 0: null Nov 24 06:45:47.884495 kernel: Key type .fscrypt registered Nov 24 06:45:47.884503 kernel: Key type fscrypt-provisioning registered Nov 24 06:45:47.884511 kernel: ima: No TPM chip found, activating TPM-bypass! Nov 24 06:45:47.884519 kernel: ima: Allocated hash algorithm: sha1 Nov 24 06:45:47.884529 kernel: ima: No architecture policies found Nov 24 06:45:47.884537 kernel: clk: Disabling unused clocks Nov 24 06:45:47.884545 kernel: Warning: unable to open an initial console. 
Nov 24 06:45:47.884554 kernel: Freeing unused kernel image (initmem) memory: 46200K Nov 24 06:45:47.884562 kernel: Write protecting the kernel read-only data: 40960k Nov 24 06:45:47.884570 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Nov 24 06:45:47.884578 kernel: Run /init as init process Nov 24 06:45:47.884586 kernel: with arguments: Nov 24 06:45:47.884594 kernel: /init Nov 24 06:45:47.884605 kernel: with environment: Nov 24 06:45:47.884615 kernel: HOME=/ Nov 24 06:45:47.884623 kernel: TERM=linux Nov 24 06:45:47.884632 systemd[1]: Successfully made /usr/ read-only. Nov 24 06:45:47.884643 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 24 06:45:47.884653 systemd[1]: Detected virtualization kvm. Nov 24 06:45:47.884661 systemd[1]: Detected architecture x86-64. Nov 24 06:45:47.884671 systemd[1]: Running in initrd. Nov 24 06:45:47.884679 systemd[1]: No hostname configured, using default hostname. Nov 24 06:45:47.884688 systemd[1]: Hostname set to . Nov 24 06:45:47.884696 systemd[1]: Initializing machine ID from VM UUID. Nov 24 06:45:47.884705 systemd[1]: Queued start job for default target initrd.target. Nov 24 06:45:47.884713 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 24 06:45:47.884722 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 24 06:45:47.884731 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 24 06:45:47.884741 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 24 06:45:47.884750 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 24 06:45:47.884760 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 24 06:45:47.884769 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 24 06:45:47.884778 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 24 06:45:47.884786 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 24 06:45:47.884795 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 24 06:45:47.884805 systemd[1]: Reached target paths.target - Path Units. Nov 24 06:45:47.884816 systemd[1]: Reached target slices.target - Slice Units. Nov 24 06:45:47.884827 systemd[1]: Reached target swap.target - Swaps. Nov 24 06:45:47.884838 systemd[1]: Reached target timers.target - Timer Units. Nov 24 06:45:47.884850 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 24 06:45:47.884861 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 24 06:45:47.884872 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 24 06:45:47.884883 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Nov 24 06:45:47.884894 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Nov 24 06:45:47.884909 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 24 06:45:47.884917 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 24 06:45:47.884926 systemd[1]: Reached target sockets.target - Socket Units. Nov 24 06:45:47.884934 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 24 06:45:47.884943 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 24 06:45:47.884951 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 24 06:45:47.884960 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Nov 24 06:45:47.884969 systemd[1]: Starting systemd-fsck-usr.service... Nov 24 06:45:47.884980 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 24 06:45:47.884988 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 24 06:45:47.884997 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 24 06:45:47.885005 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 24 06:45:47.885014 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 24 06:45:47.885025 systemd[1]: Finished systemd-fsck-usr.service. Nov 24 06:45:47.885055 systemd-journald[200]: Collecting audit messages is disabled. Nov 24 06:45:47.885075 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 24 06:45:47.885086 systemd-journald[200]: Journal started Nov 24 06:45:47.885105 systemd-journald[200]: Runtime Journal (/run/log/journal/a8948991c6cc493ba8342b05a326256b) is 5.9M, max 47.9M, 41.9M free. Nov 24 06:45:47.878629 systemd-modules-load[202]: Inserted module 'overlay' Nov 24 06:45:47.889407 systemd[1]: Started systemd-journald.service - Journal Service. Nov 24 06:45:47.887509 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 24 06:45:47.895461 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 24 06:45:47.896869 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 24 06:45:47.912311 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 24 06:45:47.911192 systemd-tmpfiles[218]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Nov 24 06:45:47.915682 kernel: Bridge firewalling registered Nov 24 06:45:47.912780 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 24 06:45:47.913156 systemd-modules-load[202]: Inserted module 'br_netfilter' Nov 24 06:45:47.916557 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 24 06:45:47.917180 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 24 06:45:47.917984 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 24 06:45:47.926753 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 24 06:45:47.934925 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 24 06:45:47.948482 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Nov 24 06:45:47.951220 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 24 06:45:47.963591 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 24 06:45:47.966481 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 24 06:45:47.988926 dracut-cmdline[243]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a5a093dfb613b73c778207057706f88d5254927e05ae90617f314b938bd34a14 Nov 24 06:45:47.991551 systemd-resolved[234]: Positive Trust Anchors: Nov 24 06:45:47.991560 systemd-resolved[234]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 24 06:45:47.991588 systemd-resolved[234]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 24 06:45:47.993892 systemd-resolved[234]: Defaulting to hostname 'linux'. Nov 24 06:45:47.994851 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 24 06:45:47.997935 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 24 06:45:48.104319 kernel: SCSI subsystem initialized Nov 24 06:45:48.113317 kernel: Loading iSCSI transport class v2.0-870. Nov 24 06:45:48.124319 kernel: iscsi: registered transport (tcp) Nov 24 06:45:48.145724 kernel: iscsi: registered transport (qla4xxx) Nov 24 06:45:48.145764 kernel: QLogic iSCSI HBA Driver Nov 24 06:45:48.166080 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 24 06:45:48.198942 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 24 06:45:48.201224 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 24 06:45:48.255009 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 24 06:45:48.257176 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 24 06:45:48.314333 kernel: raid6: avx2x4 gen() 29310 MB/s Nov 24 06:45:48.331312 kernel: raid6: avx2x2 gen() 30405 MB/s Nov 24 06:45:48.349047 kernel: raid6: avx2x1 gen() 25407 MB/s Nov 24 06:45:48.349063 kernel: raid6: using algorithm avx2x2 gen() 30405 MB/s Nov 24 06:45:48.367057 kernel: raid6: .... xor() 19895 MB/s, rmw enabled Nov 24 06:45:48.367073 kernel: raid6: using avx2x2 recovery algorithm Nov 24 06:45:48.387315 kernel: xor: automatically using best checksumming function avx Nov 24 06:45:48.553323 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 24 06:45:48.561757 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 24 06:45:48.564306 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 24 06:45:48.594610 systemd-udevd[452]: Using default interface naming scheme 'v255'. 
Nov 24 06:45:48.600255 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 24 06:45:48.601720 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 24 06:45:48.625522 dracut-pre-trigger[458]: rd.md=0: removing MD RAID activation Nov 24 06:45:48.654237 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 24 06:45:48.656251 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 24 06:45:48.739690 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 24 06:45:48.744806 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 24 06:45:48.772314 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Nov 24 06:45:48.779564 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Nov 24 06:45:48.784592 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 24 06:45:48.784611 kernel: GPT:9289727 != 19775487 Nov 24 06:45:48.784622 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 24 06:45:48.784632 kernel: GPT:9289727 != 19775487 Nov 24 06:45:48.785661 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 24 06:45:48.787150 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 24 06:45:48.797318 kernel: cryptd: max_cpu_qlen set to 1000 Nov 24 06:45:48.799308 kernel: libata version 3.00 loaded. Nov 24 06:45:48.805311 kernel: AES CTR mode by8 optimization enabled Nov 24 06:45:48.810561 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Nov 24 06:45:48.815118 kernel: ahci 0000:00:1f.2: version 3.0 Nov 24 06:45:48.815373 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Nov 24 06:45:48.819306 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Nov 24 06:45:48.819483 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Nov 24 06:45:48.819632 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Nov 24 06:45:48.832700 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 24 06:45:48.834198 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 24 06:45:48.838975 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 24 06:45:48.844469 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 24 06:45:48.848752 kernel: scsi host0: ahci Nov 24 06:45:48.857830 kernel: scsi host1: ahci Nov 24 06:45:48.858004 kernel: scsi host2: ahci Nov 24 06:45:48.858840 kernel: scsi host3: ahci Nov 24 06:45:48.859140 kernel: scsi host4: ahci Nov 24 06:45:48.860394 kernel: scsi host5: ahci Nov 24 06:45:48.860593 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 1 Nov 24 06:45:48.861588 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
Nov 24 06:45:48.872715 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 1 Nov 24 06:45:48.872732 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 1 Nov 24 06:45:48.872743 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 1 Nov 24 06:45:48.872758 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 1 Nov 24 06:45:48.872769 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 1 Nov 24 06:45:48.880887 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 24 06:45:48.896993 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Nov 24 06:45:48.918825 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Nov 24 06:45:48.919471 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Nov 24 06:45:48.928098 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 24 06:45:48.928729 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 24 06:45:48.928780 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 24 06:45:48.935036 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 24 06:45:48.946894 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 24 06:45:48.948944 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Nov 24 06:45:48.956416 disk-uuid[615]: Primary Header is updated. Nov 24 06:45:48.956416 disk-uuid[615]: Secondary Entries is updated. Nov 24 06:45:48.956416 disk-uuid[615]: Secondary Header is updated. Nov 24 06:45:48.962043 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 24 06:45:48.963325 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 24 06:45:48.969894 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 24 06:45:49.180827 kernel: ata6: SATA link down (SStatus 0 SControl 300) Nov 24 06:45:49.180887 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Nov 24 06:45:49.181955 kernel: ata1: SATA link down (SStatus 0 SControl 300) Nov 24 06:45:49.182329 kernel: ata4: SATA link down (SStatus 0 SControl 300) Nov 24 06:45:49.185332 kernel: ata5: SATA link down (SStatus 0 SControl 300) Nov 24 06:45:49.185385 kernel: ata3.00: LPM support broken, forcing max_power Nov 24 06:45:49.186530 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Nov 24 06:45:49.186542 kernel: ata3.00: applying bridge limits Nov 24 06:45:49.189325 kernel: ata2: SATA link down (SStatus 0 SControl 300) Nov 24 06:45:49.189339 kernel: ata3.00: LPM support broken, forcing max_power Nov 24 06:45:49.190432 kernel: ata3.00: configured for UDMA/100 Nov 24 06:45:49.191325 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Nov 24 06:45:49.254756 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Nov 24 06:45:49.254965 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Nov 24 06:45:49.269360 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Nov 24 06:45:49.586820 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 24 06:45:49.591937 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Nov 24 06:45:49.596548 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 24 06:45:49.600918 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 24 06:45:49.606023 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 24 06:45:49.631953 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 24 06:45:49.965988 disk-uuid[617]: The operation has completed successfully. Nov 24 06:45:49.967637 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 24 06:45:49.993254 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 24 06:45:49.993388 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 24 06:45:50.026670 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 24 06:45:50.050153 sh[649]: Success Nov 24 06:45:50.069559 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 24 06:45:50.069606 kernel: device-mapper: uevent: version 1.0.3 Nov 24 06:45:50.071491 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Nov 24 06:45:50.082326 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Nov 24 06:45:50.111343 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 24 06:45:50.115007 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 24 06:45:50.131306 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Nov 24 06:45:50.137545 kernel: BTRFS: device fsid 3af95a3e-5df6-49e0-91e3-ddf2109f68c7 devid 1 transid 35 /dev/mapper/usr (253:0) scanned by mount (661) Nov 24 06:45:50.141082 kernel: BTRFS info (device dm-0): first mount of filesystem 3af95a3e-5df6-49e0-91e3-ddf2109f68c7 Nov 24 06:45:50.141114 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 24 06:45:50.146113 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 24 06:45:50.146132 kernel: BTRFS info (device dm-0): enabling free space tree Nov 24 06:45:50.147390 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 24 06:45:50.150585 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Nov 24 06:45:50.151334 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 24 06:45:50.152134 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 24 06:45:50.155446 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 24 06:45:50.190336 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (695) Nov 24 06:45:50.193774 kernel: BTRFS info (device vda6): first mount of filesystem 1e21b02a-5e52-4507-8281-b06fd4c187c7 Nov 24 06:45:50.193805 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 24 06:45:50.197811 kernel: BTRFS info (device vda6): turning on async discard Nov 24 06:45:50.197865 kernel: BTRFS info (device vda6): enabling free space tree Nov 24 06:45:50.204862 kernel: BTRFS info (device vda6): last unmount of filesystem 1e21b02a-5e52-4507-8281-b06fd4c187c7 Nov 24 06:45:50.204999 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 24 06:45:50.207898 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Nov 24 06:45:50.291929 ignition[743]: Ignition 2.22.0 Nov 24 06:45:50.291941 ignition[743]: Stage: fetch-offline Nov 24 06:45:50.291968 ignition[743]: no configs at "/usr/lib/ignition/base.d" Nov 24 06:45:50.291977 ignition[743]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 24 06:45:50.292043 ignition[743]: parsed url from cmdline: "" Nov 24 06:45:50.292047 ignition[743]: no config URL provided Nov 24 06:45:50.292052 ignition[743]: reading system config file "/usr/lib/ignition/user.ign" Nov 24 06:45:50.292059 ignition[743]: no config at "/usr/lib/ignition/user.ign" Nov 24 06:45:50.300674 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 24 06:45:50.292079 ignition[743]: op(1): [started] loading QEMU firmware config module Nov 24 06:45:50.292084 ignition[743]: op(1): executing: "modprobe" "qemu_fw_cfg" Nov 24 06:45:50.308684 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 24 06:45:50.301496 ignition[743]: op(1): [finished] loading QEMU firmware config module Nov 24 06:45:50.378856 systemd-networkd[840]: lo: Link UP Nov 24 06:45:50.378864 systemd-networkd[840]: lo: Gained carrier Nov 24 06:45:50.380341 systemd-networkd[840]: Enumeration completed Nov 24 06:45:50.380747 systemd-networkd[840]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 24 06:45:50.380752 systemd-networkd[840]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 24 06:45:50.381045 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 24 06:45:50.383156 systemd-networkd[840]: eth0: Link UP Nov 24 06:45:50.383337 systemd-networkd[840]: eth0: Gained carrier Nov 24 06:45:50.383348 systemd-networkd[840]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 24 06:45:50.384416 systemd[1]: Reached target network.target - Network. Nov 24 06:45:50.407173 ignition[743]: parsing config with SHA512: 10502189e0fb9a38a440999970b5f18f0b5c12d45d69108f4f18028a86b5f37380365b562ca45a1caf26b474a4448b9bea50b9590ee752f4e6954907ff3c2d4a Nov 24 06:45:50.411440 unknown[743]: fetched base config from "system" Nov 24 06:45:50.411450 unknown[743]: fetched user config from "qemu" Nov 24 06:45:50.411752 ignition[743]: fetch-offline: fetch-offline passed Nov 24 06:45:50.414794 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 24 06:45:50.411799 ignition[743]: Ignition finished successfully Nov 24 06:45:50.416083 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Nov 24 06:45:50.416364 systemd-networkd[840]: eth0: DHCPv4 address 10.0.0.28/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 24 06:45:50.416962 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 24 06:45:50.455989 ignition[844]: Ignition 2.22.0 Nov 24 06:45:50.456000 ignition[844]: Stage: kargs Nov 24 06:45:50.456134 ignition[844]: no configs at "/usr/lib/ignition/base.d" Nov 24 06:45:50.456143 ignition[844]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 24 06:45:50.456924 ignition[844]: kargs: kargs passed Nov 24 06:45:50.456966 ignition[844]: Ignition finished successfully Nov 24 06:45:50.463698 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Nov 24 06:45:50.468603 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 24 06:45:50.497007 ignition[853]: Ignition 2.22.0 Nov 24 06:45:50.497018 ignition[853]: Stage: disks Nov 24 06:45:50.497156 ignition[853]: no configs at "/usr/lib/ignition/base.d" Nov 24 06:45:50.497166 ignition[853]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 24 06:45:50.497865 ignition[853]: disks: disks passed Nov 24 06:45:50.497908 ignition[853]: Ignition finished successfully Nov 24 06:45:50.503183 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 24 06:45:50.505989 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 24 06:45:50.508952 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 24 06:45:50.512503 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 24 06:45:50.515814 systemd[1]: Reached target sysinit.target - System Initialization. Nov 24 06:45:50.518816 systemd[1]: Reached target basic.target - Basic System. Nov 24 06:45:50.522562 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 24 06:45:50.547776 systemd-fsck[863]: ROOT: clean, 15/553520 files, 52789/553472 blocks Nov 24 06:45:50.547885 systemd-resolved[234]: Detected conflict on linux IN A 10.0.0.28 Nov 24 06:45:50.547893 systemd-resolved[234]: Hostname conflict, changing published hostname from 'linux' to 'linux2'. Nov 24 06:45:50.555485 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 24 06:45:50.556968 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 24 06:45:50.663325 kernel: EXT4-fs (vda9): mounted filesystem f89e2a65-2a4a-426b-9659-02844cc29a2a r/w with ordered data mode. Quota mode: none. Nov 24 06:45:50.663608 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 24 06:45:50.664779 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 24 06:45:50.667951 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 24 06:45:50.672451 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 24 06:45:50.675910 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 24 06:45:50.675959 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 24 06:45:50.675983 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 24 06:45:50.693996 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 24 06:45:50.699315 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (871) Nov 24 06:45:50.700191 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 24 06:45:50.704264 kernel: BTRFS info (device vda6): first mount of filesystem 1e21b02a-5e52-4507-8281-b06fd4c187c7 Nov 24 06:45:50.704300 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 24 06:45:50.709128 kernel: BTRFS info (device vda6): turning on async discard Nov 24 06:45:50.709145 kernel: BTRFS info (device vda6): enabling free space tree Nov 24 06:45:50.711070 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
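systemd-fsck reports the root filesystem as "clean, 15/553520 files, 52789/553472 blocks". When scraping boot journals like this one, that summary line is easy to pick apart with a small parser; the regular expression below covers only the "clean" form shown here, not every e2fsck output variant.

```python
import re

SUMMARY = "ROOT: clean, 15/553520 files, 52789/553472 blocks"

# Matches only the "clean" summary shape seen in this journal.
pattern = re.compile(
    r"^(?P<label>\S+): clean, "
    r"(?P<files_used>\d+)/(?P<files_total>\d+) files, "
    r"(?P<blocks_used>\d+)/(?P<blocks_total>\d+) blocks$"
)

m = pattern.match(SUMMARY)
if m:
    used = int(m["blocks_used"])
    total = int(m["blocks_total"])
    print(f"{m['label']}: {used / total:.1%} of blocks in use")
```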
Nov 24 06:45:50.745015 initrd-setup-root[895]: cut: /sysroot/etc/passwd: No such file or directory Nov 24 06:45:50.749086 initrd-setup-root[902]: cut: /sysroot/etc/group: No such file or directory Nov 24 06:45:50.754151 initrd-setup-root[909]: cut: /sysroot/etc/shadow: No such file or directory Nov 24 06:45:50.759015 initrd-setup-root[916]: cut: /sysroot/etc/gshadow: No such file or directory Nov 24 06:45:50.835724 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 24 06:45:50.837597 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 24 06:45:50.840441 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 24 06:45:50.866320 kernel: BTRFS info (device vda6): last unmount of filesystem 1e21b02a-5e52-4507-8281-b06fd4c187c7 Nov 24 06:45:50.877426 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 24 06:45:50.895908 ignition[985]: INFO : Ignition 2.22.0 Nov 24 06:45:50.895908 ignition[985]: INFO : Stage: mount Nov 24 06:45:50.898816 ignition[985]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 24 06:45:50.898816 ignition[985]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 24 06:45:50.898816 ignition[985]: INFO : mount: mount passed Nov 24 06:45:50.898816 ignition[985]: INFO : Ignition finished successfully Nov 24 06:45:50.899778 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 24 06:45:50.903251 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 24 06:45:51.140055 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 24 06:45:51.142519 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 24 06:45:51.171086 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (998) Nov 24 06:45:51.171121 kernel: BTRFS info (device vda6): first mount of filesystem 1e21b02a-5e52-4507-8281-b06fd4c187c7 Nov 24 06:45:51.172805 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 24 06:45:51.176779 kernel: BTRFS info (device vda6): turning on async discard Nov 24 06:45:51.176806 kernel: BTRFS info (device vda6): enabling free space tree Nov 24 06:45:51.178568 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
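The four "cut: ... No such file or directory" lines come from initrd-setup-root running `cut` against /sysroot/etc/passwd, group, shadow and gshadow, which do not exist yet on a first boot. As a point of reference, extracting the same first (name) field from a passwd-format file is a one-liner per line in Python; the tolerant handling of a missing file below is why those messages are harmless rather than fatal.

```python
from pathlib import Path


def user_names(passwd_path: str) -> list[str]:
    """Return the first colon-separated field of each passwd-format line.

    Roughly what `cut -d: -f1 /etc/passwd` prints; a missing file simply
    yields an empty list, matching the benign errors seen above on a first
    boot where /sysroot/etc is still empty.
    """
    path = Path(passwd_path)
    if not path.exists():
        return []
    return [line.split(":", 1)[0] for line in path.read_text().splitlines() if line]


print(user_names("/etc/passwd"))
```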
Nov 24 06:45:51.218770 ignition[1015]: INFO : Ignition 2.22.0 Nov 24 06:45:51.218770 ignition[1015]: INFO : Stage: files Nov 24 06:45:51.222188 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 24 06:45:51.222188 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 24 06:45:51.222188 ignition[1015]: DEBUG : files: compiled without relabeling support, skipping Nov 24 06:45:51.222188 ignition[1015]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 24 06:45:51.222188 ignition[1015]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 24 06:45:51.234038 ignition[1015]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 24 06:45:51.234038 ignition[1015]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 24 06:45:51.234038 ignition[1015]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 24 06:45:51.234038 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 24 06:45:51.234038 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Nov 24 06:45:51.225171 unknown[1015]: wrote ssh authorized keys file for user: core Nov 24 06:45:51.268247 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 24 06:45:51.337693 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 24 06:45:51.341548 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 24 06:45:51.341548 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 24 06:45:51.341548 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 24 06:45:51.341548 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 24 06:45:51.341548 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 24 06:45:51.341548 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 24 06:45:51.341548 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 24 06:45:51.341548 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 24 06:45:51.365065 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 24 06:45:51.365065 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 24 06:45:51.365065 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 24 06:45:51.365065 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 24 06:45:51.365065 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 24 06:45:51.365065 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Nov 24 06:45:51.764429 systemd-networkd[840]: eth0: Gained IPv6LL Nov 24 06:45:51.770122 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 24 06:45:52.250432 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 24 06:45:52.250432 ignition[1015]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 24 06:45:52.256538 ignition[1015]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 24 06:45:52.262600 ignition[1015]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 24 06:45:52.262600 ignition[1015]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 24 06:45:52.262600 ignition[1015]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Nov 24 06:45:52.271156 ignition[1015]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 24 06:45:52.271156 ignition[1015]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 24 06:45:52.271156 ignition[1015]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Nov 24 06:45:52.271156 ignition[1015]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Nov 24 06:45:52.295651 ignition[1015]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Nov 24 06:45:52.301149 ignition[1015]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Nov 24 06:45:52.303792 ignition[1015]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Nov 24 06:45:52.303792 ignition[1015]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Nov 24 06:45:52.303792 ignition[1015]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Nov 24 06:45:52.303792 ignition[1015]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 24 06:45:52.303792 ignition[1015]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 24 06:45:52.303792 ignition[1015]: INFO : files: files passed Nov 24 06:45:52.303792 ignition[1015]: INFO : Ignition finished successfully Nov 24 06:45:52.314681 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 24 06:45:52.319467 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 24 06:45:52.321644 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
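The files stage above fetches the helm tarball and the kubernetes sysext image over HTTPS, writes them under /sysroot, installs prepare-helm.service, and then applies presets by adding or removing enablement symlinks. The sketch below shows the two underlying mechanics in plain Python: an atomic download-then-rename write, and enabling a unit by creating a wants symlink. The URL, sysroot path, and target are illustrative; Ignition's real implementation (hash verification, retries, SELinux relabeling) is considerably more involved.

```python
import os
import tempfile
import urllib.request
from pathlib import Path


def fetch_atomically(url: str, dest: str) -> None:
    """Download to a temp file in the target directory, then rename.

    The rename is atomic on the same filesystem, so readers never observe a
    half-written file -- the property wanted when writing /sysroot/opt/...
    """
    dest_path = Path(dest)
    dest_path.parent.mkdir(parents=True, exist_ok=True)
    fd, tmp = tempfile.mkstemp(dir=dest_path.parent)
    try:
        with os.fdopen(fd, "wb") as out, urllib.request.urlopen(url) as resp:
            out.write(resp.read())
        os.rename(tmp, dest_path)
    except BaseException:
        os.unlink(tmp)
        raise


def enable_unit(root: str, unit: str, target: str = "multi-user.target") -> None:
    """Create the enablement symlink that 'setting preset to enabled' implies."""
    wants = Path(root) / "etc/systemd/system" / f"{target}.wants"
    wants.mkdir(parents=True, exist_ok=True)
    link = wants / unit
    if not link.is_symlink():
        link.symlink_to(f"/etc/systemd/system/{unit}")


# Illustrative usage only; URL and sysroot are placeholders.
# fetch_atomically("https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz",
#                  "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz")
# enable_unit("/sysroot", "prepare-helm.service")
```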
Nov 24 06:45:52.338373 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 24 06:45:52.338541 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 24 06:45:52.344909 initrd-setup-root-after-ignition[1045]: grep: /sysroot/oem/oem-release: No such file or directory Nov 24 06:45:52.349725 initrd-setup-root-after-ignition[1047]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 24 06:45:52.349725 initrd-setup-root-after-ignition[1047]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 24 06:45:52.357071 initrd-setup-root-after-ignition[1051]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 24 06:45:52.352504 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 24 06:45:52.353637 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 24 06:45:52.358617 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 24 06:45:52.421700 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 24 06:45:52.421843 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 24 06:45:52.423206 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 24 06:45:52.427705 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 24 06:45:52.430803 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 24 06:45:52.433336 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 24 06:45:52.468386 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 24 06:45:52.470181 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 24 06:45:52.493016 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 24 06:45:52.493710 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 24 06:45:52.497189 systemd[1]: Stopped target timers.target - Timer Units. Nov 24 06:45:52.500714 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 24 06:45:52.500820 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 24 06:45:52.506068 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 24 06:45:52.506909 systemd[1]: Stopped target basic.target - Basic System. Nov 24 06:45:52.511699 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 24 06:45:52.514102 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 24 06:45:52.517704 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 24 06:45:52.522010 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Nov 24 06:45:52.525407 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 24 06:45:52.526314 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 24 06:45:52.530906 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 24 06:45:52.534358 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 24 06:45:52.537390 systemd[1]: Stopped target swap.target - Swaps. Nov 24 06:45:52.540333 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 24 06:45:52.540434 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. 
Nov 24 06:45:52.545211 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 24 06:45:52.546070 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 24 06:45:52.550729 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 24 06:45:52.550825 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 24 06:45:52.554009 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 24 06:45:52.554118 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 24 06:45:52.560531 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 24 06:45:52.560644 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 24 06:45:52.563812 systemd[1]: Stopped target paths.target - Path Units. Nov 24 06:45:52.566649 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 24 06:45:52.571623 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 24 06:45:52.572351 systemd[1]: Stopped target slices.target - Slice Units. Nov 24 06:45:52.580052 systemd[1]: Stopped target sockets.target - Socket Units. Nov 24 06:45:52.580845 systemd[1]: iscsid.socket: Deactivated successfully. Nov 24 06:45:52.581017 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 24 06:45:52.583884 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 24 06:45:52.584053 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 24 06:45:52.586877 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 24 06:45:52.587090 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 24 06:45:52.589938 systemd[1]: ignition-files.service: Deactivated successfully. Nov 24 06:45:52.590135 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 24 06:45:52.597603 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 24 06:45:52.598736 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 24 06:45:52.598915 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 24 06:45:52.613843 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 24 06:45:52.615349 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 24 06:45:52.615490 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 24 06:45:52.616413 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 24 06:45:52.616593 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 24 06:45:52.629210 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 24 06:45:52.636462 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 24 06:45:52.640688 ignition[1071]: INFO : Ignition 2.22.0 Nov 24 06:45:52.640688 ignition[1071]: INFO : Stage: umount Nov 24 06:45:52.640688 ignition[1071]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 24 06:45:52.640688 ignition[1071]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 24 06:45:52.640688 ignition[1071]: INFO : umount: umount passed Nov 24 06:45:52.640688 ignition[1071]: INFO : Ignition finished successfully Nov 24 06:45:52.642903 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 24 06:45:52.643032 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
Nov 24 06:45:52.644278 systemd[1]: Stopped target network.target - Network. Nov 24 06:45:52.649768 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 24 06:45:52.649836 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 24 06:45:52.652651 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 24 06:45:52.652706 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 24 06:45:52.655603 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 24 06:45:52.655661 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 24 06:45:52.658647 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 24 06:45:52.658714 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 24 06:45:52.661906 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 24 06:45:52.664665 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 24 06:45:52.666763 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 24 06:45:52.675003 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 24 06:45:52.675151 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 24 06:45:52.679876 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Nov 24 06:45:52.680313 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 24 06:45:52.680390 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 24 06:45:52.688190 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Nov 24 06:45:52.688436 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 24 06:45:52.688573 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 24 06:45:52.691160 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Nov 24 06:45:52.691666 systemd[1]: Stopped target network-pre.target - Preparation for Network. Nov 24 06:45:52.694132 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 24 06:45:52.694182 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 24 06:45:52.703165 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 24 06:45:52.703916 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 24 06:45:52.703979 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 24 06:45:52.708807 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 24 06:45:52.708860 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 24 06:45:52.715003 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 24 06:45:52.715097 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 24 06:45:52.718278 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 24 06:45:52.721918 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Nov 24 06:45:52.738786 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 24 06:45:52.738917 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 24 06:45:52.740457 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 24 06:45:52.740623 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Nov 24 06:45:52.750530 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 24 06:45:52.750612 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 24 06:45:52.752809 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 24 06:45:52.752851 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 24 06:45:52.756838 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 24 06:45:52.756904 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 24 06:45:52.760908 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 24 06:45:52.760966 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 24 06:45:52.765137 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 24 06:45:52.765197 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 24 06:45:52.777977 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 24 06:45:52.780615 systemd[1]: systemd-network-generator.service: Deactivated successfully. Nov 24 06:45:52.780683 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Nov 24 06:45:52.787004 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 24 06:45:52.787075 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 24 06:45:52.792752 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 24 06:45:52.792804 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 24 06:45:52.803357 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 24 06:45:52.803510 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 24 06:45:52.814149 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 24 06:45:52.814328 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 24 06:45:52.815398 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 24 06:45:52.821512 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 24 06:45:52.821612 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 24 06:45:52.825839 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 24 06:45:52.851253 systemd[1]: Switching root. Nov 24 06:45:52.897537 systemd-journald[200]: Journal stopped Nov 24 06:45:54.128152 systemd-journald[200]: Received SIGTERM from PID 1 (systemd). Nov 24 06:45:54.128227 kernel: SELinux: policy capability network_peer_controls=1 Nov 24 06:45:54.128241 kernel: SELinux: policy capability open_perms=1 Nov 24 06:45:54.128255 kernel: SELinux: policy capability extended_socket_class=1 Nov 24 06:45:54.128266 kernel: SELinux: policy capability always_check_network=0 Nov 24 06:45:54.128282 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 24 06:45:54.128308 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 24 06:45:54.128319 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 24 06:45:54.128335 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 24 06:45:54.128347 kernel: SELinux: policy capability userspace_initial_context=0 Nov 24 06:45:54.128358 kernel: audit: type=1403 audit(1763966753.275:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 24 06:45:54.128373 systemd[1]: Successfully loaded SELinux policy in 62.092ms. 
Nov 24 06:45:54.128387 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.429ms. Nov 24 06:45:54.128400 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 24 06:45:54.128413 systemd[1]: Detected virtualization kvm. Nov 24 06:45:54.128424 systemd[1]: Detected architecture x86-64. Nov 24 06:45:54.128436 systemd[1]: Detected first boot. Nov 24 06:45:54.128454 systemd[1]: Initializing machine ID from VM UUID. Nov 24 06:45:54.128465 zram_generator::config[1118]: No configuration found. Nov 24 06:45:54.128478 kernel: Guest personality initialized and is inactive Nov 24 06:45:54.128492 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Nov 24 06:45:54.128504 kernel: Initialized host personality Nov 24 06:45:54.128515 kernel: NET: Registered PF_VSOCK protocol family Nov 24 06:45:54.128527 systemd[1]: Populated /etc with preset unit settings. Nov 24 06:45:54.128539 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Nov 24 06:45:54.128551 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 24 06:45:54.128564 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 24 06:45:54.128576 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 24 06:45:54.128594 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 24 06:45:54.128608 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 24 06:45:54.128620 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 24 06:45:54.128632 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 24 06:45:54.128645 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 24 06:45:54.128657 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 24 06:45:54.128669 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 24 06:45:54.128680 systemd[1]: Created slice user.slice - User and Session Slice. Nov 24 06:45:54.128693 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 24 06:45:54.128707 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 24 06:45:54.128720 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 24 06:45:54.128732 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 24 06:45:54.128744 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 24 06:45:54.128756 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 24 06:45:54.128769 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 24 06:45:54.128781 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 24 06:45:54.128793 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 24 06:45:54.128806 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. 
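The "systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR ...)" line encodes compile-time features as +NAME/-NAME tokens. Parsing it into a dict makes it easy to confirm, for example, that this build enables SELINUX but not APPARMOR; the snippet below operates on the exact string logged here and on any line with the same token shape.

```python
FEATURES = ("(+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT "
            "-GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN "
            "+IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK "
            "+PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ "
            "+ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT "
            "+LIBARCHIVE)")


def parse_features(blob: str) -> dict[str, bool]:
    """Map each +NAME/-NAME token to True/False, dropping the parentheses."""
    result = {}
    for token in blob.strip("()").split():
        if token[0] in "+-":
            result[token[1:]] = token[0] == "+"
    return result


flags = parse_features(FEATURES)
print(flags["SELINUX"], flags["APPARMOR"])  # True False on this build
```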
Nov 24 06:45:54.128818 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 24 06:45:54.128830 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 24 06:45:54.128842 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 24 06:45:54.128854 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 24 06:45:54.128866 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 24 06:45:54.128878 systemd[1]: Reached target slices.target - Slice Units. Nov 24 06:45:54.128890 systemd[1]: Reached target swap.target - Swaps. Nov 24 06:45:54.128902 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 24 06:45:54.128919 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 24 06:45:54.128932 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Nov 24 06:45:54.128945 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 24 06:45:54.128958 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 24 06:45:54.128970 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 24 06:45:54.128981 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 24 06:45:54.129001 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 24 06:45:54.129013 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 24 06:45:54.129024 systemd[1]: Mounting media.mount - External Media Directory... Nov 24 06:45:54.129041 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 24 06:45:54.129052 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 24 06:45:54.129065 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 24 06:45:54.129077 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 24 06:45:54.129090 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 24 06:45:54.129101 systemd[1]: Reached target machines.target - Containers. Nov 24 06:45:54.129113 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 24 06:45:54.129125 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 24 06:45:54.129139 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 24 06:45:54.129151 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 24 06:45:54.129163 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 24 06:45:54.129175 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 24 06:45:54.129187 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 24 06:45:54.129200 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 24 06:45:54.129212 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 24 06:45:54.129224 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
Nov 24 06:45:54.129241 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 24 06:45:54.129255 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 24 06:45:54.129266 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 24 06:45:54.129279 systemd[1]: Stopped systemd-fsck-usr.service. Nov 24 06:45:54.129305 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 24 06:45:54.129319 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 24 06:45:54.129330 kernel: loop: module loaded Nov 24 06:45:54.129342 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 24 06:45:54.129353 kernel: ACPI: bus type drm_connector registered Nov 24 06:45:54.129367 kernel: fuse: init (API version 7.41) Nov 24 06:45:54.129379 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 24 06:45:54.129391 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 24 06:45:54.129403 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Nov 24 06:45:54.129434 systemd-journald[1196]: Collecting audit messages is disabled. Nov 24 06:45:54.129457 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 24 06:45:54.129471 systemd-journald[1196]: Journal started Nov 24 06:45:54.129494 systemd-journald[1196]: Runtime Journal (/run/log/journal/a8948991c6cc493ba8342b05a326256b) is 5.9M, max 47.9M, 41.9M free. Nov 24 06:45:53.833061 systemd[1]: Queued start job for default target multi-user.target. Nov 24 06:45:53.852183 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Nov 24 06:45:53.852675 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 24 06:45:54.132211 systemd[1]: verity-setup.service: Deactivated successfully. Nov 24 06:45:54.132233 systemd[1]: Stopped verity-setup.service. Nov 24 06:45:54.136337 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 24 06:45:54.142120 systemd[1]: Started systemd-journald.service - Journal Service. Nov 24 06:45:54.142856 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 24 06:45:54.144600 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 24 06:45:54.146510 systemd[1]: Mounted media.mount - External Media Directory. Nov 24 06:45:54.148177 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 24 06:45:54.150034 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 24 06:45:54.151900 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 24 06:45:54.153732 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 24 06:45:54.155919 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 24 06:45:54.158204 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 24 06:45:54.158428 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 24 06:45:54.160631 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 24 06:45:54.160836 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Nov 24 06:45:54.162955 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 24 06:45:54.163170 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 24 06:45:54.165215 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 24 06:45:54.165433 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 24 06:45:54.167943 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 24 06:45:54.168256 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 24 06:45:54.170311 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 24 06:45:54.170526 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 24 06:45:54.172644 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 24 06:45:54.174779 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 24 06:45:54.177150 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 24 06:45:54.179549 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Nov 24 06:45:54.192499 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 24 06:45:54.195547 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 24 06:45:54.198342 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 24 06:45:54.200099 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 24 06:45:54.200190 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 24 06:45:54.202746 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Nov 24 06:45:54.210258 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 24 06:45:54.212496 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 24 06:45:54.213813 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 24 06:45:54.217081 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 24 06:45:54.219769 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 24 06:45:54.221612 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 24 06:45:54.223648 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 24 06:45:54.225767 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 24 06:45:54.232131 systemd-journald[1196]: Time spent on flushing to /var/log/journal/a8948991c6cc493ba8342b05a326256b is 16.356ms for 1040 entries. Nov 24 06:45:54.232131 systemd-journald[1196]: System Journal (/var/log/journal/a8948991c6cc493ba8342b05a326256b) is 8M, max 195.6M, 187.6M free. Nov 24 06:45:54.254512 systemd-journald[1196]: Received client request to flush runtime journal. Nov 24 06:45:54.254544 kernel: loop0: detected capacity change from 0 to 110984 Nov 24 06:45:54.232308 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 24 06:45:54.240466 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
Nov 24 06:45:54.244982 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 24 06:45:54.247566 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 24 06:45:54.249955 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 24 06:45:54.252233 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 24 06:45:54.255744 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 24 06:45:54.258911 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 24 06:45:54.265984 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 24 06:45:54.270220 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Nov 24 06:45:54.279313 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 24 06:45:54.288877 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 24 06:45:54.293042 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 24 06:45:54.298349 kernel: loop1: detected capacity change from 0 to 128560 Nov 24 06:45:54.304556 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 24 06:45:54.309524 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Nov 24 06:45:54.328319 kernel: loop2: detected capacity change from 0 to 229808 Nov 24 06:45:54.330041 systemd-tmpfiles[1253]: ACLs are not supported, ignoring. Nov 24 06:45:54.330370 systemd-tmpfiles[1253]: ACLs are not supported, ignoring. Nov 24 06:45:54.338222 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 24 06:45:54.360312 kernel: loop3: detected capacity change from 0 to 110984 Nov 24 06:45:54.371336 kernel: loop4: detected capacity change from 0 to 128560 Nov 24 06:45:54.381328 kernel: loop5: detected capacity change from 0 to 229808 Nov 24 06:45:54.391615 (sd-merge)[1260]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Nov 24 06:45:54.393254 (sd-merge)[1260]: Merged extensions into '/usr'. Nov 24 06:45:54.399184 systemd[1]: Reload requested from client PID 1237 ('systemd-sysext') (unit systemd-sysext.service)... Nov 24 06:45:54.399200 systemd[1]: Reloading... Nov 24 06:45:54.455330 zram_generator::config[1284]: No configuration found. Nov 24 06:45:54.548277 ldconfig[1232]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 24 06:45:54.641987 systemd[1]: Reloading finished in 242 ms. Nov 24 06:45:54.670053 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 24 06:45:54.672347 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 24 06:45:54.688715 systemd[1]: Starting ensure-sysext.service... Nov 24 06:45:54.691129 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 24 06:45:54.705274 systemd[1]: Reload requested from client PID 1323 ('systemctl') (unit ensure-sysext.service)... Nov 24 06:45:54.705304 systemd[1]: Reloading... Nov 24 06:45:54.710771 systemd-tmpfiles[1324]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Nov 24 06:45:54.710807 systemd-tmpfiles[1324]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. 
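systemd-sysext reports merging the 'containerd-flatcar', 'docker-flatcar' and 'kubernetes' extension images into /usr before systemd reloads its units. Conceptually this is an overlay: the base /usr is the lowest layer and each extension is stacked on top, with later layers winning on path conflicts. The toy model below shows only that lookup rule with plain dictionaries; it is not how systemd-sysext is implemented (it uses an overlay filesystem), and the conflicting path is invented for illustration.

```python
# Toy model of sysext precedence: base /usr lowest, extensions stacked on top.
base_usr = {"/usr/bin/bash": "base", "/usr/lib/os-release": "base"}
extensions = {
    "containerd-flatcar": {"/usr/bin/containerd": "containerd-flatcar"},
    "docker-flatcar": {"/usr/bin/dockerd": "docker-flatcar"},
    "kubernetes": {"/usr/bin/kubelet": "kubernetes",
                   "/usr/bin/containerd": "kubernetes"},  # hypothetical conflict
}

merged = dict(base_usr)
for name, tree in extensions.items():   # later extensions override earlier ones
    merged.update(tree)

print(merged["/usr/bin/containerd"])    # -> "kubernetes" wins the invented conflict
```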
Nov 24 06:45:54.711099 systemd-tmpfiles[1324]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 24 06:45:54.711430 systemd-tmpfiles[1324]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 24 06:45:54.712683 systemd-tmpfiles[1324]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 24 06:45:54.713028 systemd-tmpfiles[1324]: ACLs are not supported, ignoring. Nov 24 06:45:54.713115 systemd-tmpfiles[1324]: ACLs are not supported, ignoring. Nov 24 06:45:54.718117 systemd-tmpfiles[1324]: Detected autofs mount point /boot during canonicalization of boot. Nov 24 06:45:54.718131 systemd-tmpfiles[1324]: Skipping /boot Nov 24 06:45:54.730621 systemd-tmpfiles[1324]: Detected autofs mount point /boot during canonicalization of boot. Nov 24 06:45:54.730636 systemd-tmpfiles[1324]: Skipping /boot Nov 24 06:45:54.755316 zram_generator::config[1351]: No configuration found. Nov 24 06:45:54.925391 systemd[1]: Reloading finished in 219 ms. Nov 24 06:45:54.947981 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 24 06:45:54.977842 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 24 06:45:54.987198 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 24 06:45:54.990105 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 24 06:45:55.009759 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 24 06:45:55.017188 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 24 06:45:55.021251 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 24 06:45:55.025470 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 24 06:45:55.030371 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 24 06:45:55.030635 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 24 06:45:55.036583 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 24 06:45:55.041514 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 24 06:45:55.045906 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 24 06:45:55.048493 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 24 06:45:55.048650 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 24 06:45:55.051410 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 24 06:45:55.053307 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 24 06:45:55.055095 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 24 06:45:55.058195 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 24 06:45:55.058526 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
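The systemd-tmpfiles warnings above ("Duplicate line for path ..., ignoring") mean that two tmpfiles.d lines target the same path, and only the first one seen takes effect. A minimal duplicate check over tmpfiles.d-shaped lines looks like this; the sample lines are invented, and real tmpfiles.d parsing (entry types, arguments, specifiers) is far richer.

```python
# Invented sample lines in tmpfiles.d shape: type, path, mode, user, group, age.
lines = [
    "d /var/lib/nfs/sm       0700 statd statd -",
    "d /var/lib/nfs/sm       0700 statd statd -",   # duplicate path
    "d /var/lib/nfs/sm.bak   0700 statd statd -",
]

seen: dict[str, str] = {}
for line in lines:
    fields = line.split()
    if len(fields) < 2:
        continue
    path = fields[1]
    if path in seen:
        print(f'Duplicate line for path "{path}", ignoring.')
    else:
        seen[path] = line   # the first entry for a path wins
```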
Nov 24 06:45:55.061004 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 24 06:45:55.061222 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 24 06:45:55.064889 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 24 06:45:55.065180 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 24 06:45:55.075546 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 24 06:45:55.075789 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 24 06:45:55.077163 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 24 06:45:55.080355 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 24 06:45:55.080998 systemd-udevd[1395]: Using default interface naming scheme 'v255'. Nov 24 06:45:55.083430 augenrules[1425]: No rules Nov 24 06:45:55.090453 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 24 06:45:55.092346 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 24 06:45:55.092449 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 24 06:45:55.093739 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 24 06:45:55.095545 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 24 06:45:55.096875 systemd[1]: audit-rules.service: Deactivated successfully. Nov 24 06:45:55.097205 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 24 06:45:55.100395 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 24 06:45:55.103167 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 24 06:45:55.105848 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 24 06:45:55.106111 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 24 06:45:55.108733 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 24 06:45:55.111268 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 24 06:45:55.112835 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 24 06:45:55.115574 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 24 06:45:55.115779 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 24 06:45:55.118345 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 24 06:45:55.126649 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 24 06:45:55.154647 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 24 06:45:55.157247 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 24 06:45:55.159614 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 24 06:45:55.161019 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Nov 24 06:45:55.170904 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 24 06:45:55.173953 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 24 06:45:55.177103 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 24 06:45:55.179271 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 24 06:45:55.179341 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 24 06:45:55.181703 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 24 06:45:55.183455 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 24 06:45:55.183485 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 24 06:45:55.184227 systemd[1]: Finished ensure-sysext.service. Nov 24 06:45:55.186220 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 24 06:45:55.186621 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 24 06:45:55.197674 augenrules[1472]: /sbin/augenrules: No change Nov 24 06:45:55.203410 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 24 06:45:55.208559 systemd[1]: audit-rules.service: Deactivated successfully. Nov 24 06:45:55.208914 augenrules[1497]: No rules Nov 24 06:45:55.209341 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 24 06:45:55.211771 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 24 06:45:55.211987 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 24 06:45:55.218174 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 24 06:45:55.218922 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 24 06:45:55.221535 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 24 06:45:55.221754 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 24 06:45:55.227677 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 24 06:45:55.228422 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 24 06:45:55.260023 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 24 06:45:55.289867 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 24 06:45:55.293466 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 24 06:45:55.322044 kernel: mousedev: PS/2 mouse device common for all mice Nov 24 06:45:55.323493 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Nov 24 06:45:55.324371 systemd-resolved[1393]: Positive Trust Anchors: Nov 24 06:45:55.324387 systemd-resolved[1393]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 24 06:45:55.324416 systemd-resolved[1393]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 24 06:45:55.325728 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 24 06:45:55.327998 systemd-resolved[1393]: Defaulting to hostname 'linux'. Nov 24 06:45:55.329430 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 24 06:45:55.331468 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 24 06:45:55.346316 kernel: ACPI: button: Power Button [PWRF] Nov 24 06:45:55.357839 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Nov 24 06:45:55.358155 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Nov 24 06:45:55.358349 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Nov 24 06:45:55.358740 systemd-networkd[1478]: lo: Link UP Nov 24 06:45:55.358753 systemd-networkd[1478]: lo: Gained carrier Nov 24 06:45:55.360460 systemd-networkd[1478]: Enumeration completed Nov 24 06:45:55.360565 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 24 06:45:55.361852 systemd-networkd[1478]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 24 06:45:55.362119 systemd-networkd[1478]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 24 06:45:55.362447 systemd[1]: Reached target network.target - Network. Nov 24 06:45:55.362918 systemd-networkd[1478]: eth0: Link UP Nov 24 06:45:55.363074 systemd-networkd[1478]: eth0: Gained carrier Nov 24 06:45:55.363088 systemd-networkd[1478]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 24 06:45:55.365566 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Nov 24 06:45:55.369927 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 24 06:45:55.379376 systemd-networkd[1478]: eth0: DHCPv4 address 10.0.0.28/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 24 06:45:55.402399 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Nov 24 06:45:56.574017 systemd-timesyncd[1494]: Contacted time server 10.0.0.1:123 (10.0.0.1). Nov 24 06:45:56.574291 systemd-timesyncd[1494]: Initial clock synchronization to Mon 2025-11-24 06:45:56.573948 UTC. Nov 24 06:45:56.574678 systemd-resolved[1393]: Clock change detected. Flushing caches. Nov 24 06:45:56.577045 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 24 06:45:56.579148 systemd[1]: Reached target sysinit.target - System Initialization. Nov 24 06:45:56.581055 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
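The positive trust anchor systemd-resolved logs is the DNS root zone's DS record (". IN DS 20326 8 2 <digest>"): key tag 20326, algorithm 8 (RSA/SHA-256), digest type 2 (SHA-256). Splitting that presentation format into named fields is straightforward; the parser below handles only this single-record shape.

```python
RECORD = (". IN DS 20326 8 2 "
          "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")


def parse_ds(record: str) -> dict[str, object]:
    """Split a '<owner> IN DS <keytag> <alg> <digesttype> <digest>' line."""
    owner, _cls, _rtype, key_tag, alg, digest_type, digest = record.split()
    return {
        "owner": owner,
        "key_tag": int(key_tag),          # 20326
        "algorithm": int(alg),            # 8 = RSASHA256
        "digest_type": int(digest_type),  # 2 = SHA-256
        "digest": digest,
    }


print(parse_ds(RECORD)["key_tag"])   # 20326
```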
Nov 24 06:45:56.584015 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 24 06:45:56.585991 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Nov 24 06:45:56.587831 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 24 06:45:56.589951 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 24 06:45:56.589979 systemd[1]: Reached target paths.target - Path Units. Nov 24 06:45:56.591410 systemd[1]: Reached target time-set.target - System Time Set. Nov 24 06:45:56.593138 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 24 06:45:56.595005 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 24 06:45:56.597108 systemd[1]: Reached target timers.target - Timer Units. Nov 24 06:45:56.599664 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 24 06:45:56.603427 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 24 06:45:56.607244 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 24 06:45:56.609653 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 24 06:45:56.611748 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 24 06:45:56.625772 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 24 06:45:56.629364 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 24 06:45:56.632116 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 24 06:45:56.646385 systemd[1]: Reached target sockets.target - Socket Units. Nov 24 06:45:56.648092 systemd[1]: Reached target basic.target - Basic System. Nov 24 06:45:56.649679 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 24 06:45:56.649841 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 24 06:45:56.653041 systemd[1]: Starting containerd.service - containerd container runtime... Nov 24 06:45:56.657158 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 24 06:45:56.661669 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 24 06:45:56.668714 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 24 06:45:56.672217 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 24 06:45:56.673835 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 24 06:45:56.677121 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Nov 24 06:45:56.684292 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 24 06:45:56.691167 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 24 06:45:56.697674 google_oslogin_nss_cache[1547]: oslogin_cache_refresh[1547]: Refreshing passwd entry cache Nov 24 06:45:56.696072 oslogin_cache_refresh[1547]: Refreshing passwd entry cache Nov 24 06:45:56.697911 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Nov 24 06:45:56.699070 jq[1545]: false Nov 24 06:45:56.701062 extend-filesystems[1546]: Found /dev/vda6 Nov 24 06:45:56.713982 extend-filesystems[1546]: Found /dev/vda9 Nov 24 06:45:56.704018 oslogin_cache_refresh[1547]: Failure getting users, quitting Nov 24 06:45:56.716261 google_oslogin_nss_cache[1547]: oslogin_cache_refresh[1547]: Failure getting users, quitting Nov 24 06:45:56.716261 google_oslogin_nss_cache[1547]: oslogin_cache_refresh[1547]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 24 06:45:56.716261 google_oslogin_nss_cache[1547]: oslogin_cache_refresh[1547]: Refreshing group entry cache Nov 24 06:45:56.716261 google_oslogin_nss_cache[1547]: oslogin_cache_refresh[1547]: Failure getting groups, quitting Nov 24 06:45:56.716261 google_oslogin_nss_cache[1547]: oslogin_cache_refresh[1547]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 24 06:45:56.703035 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 24 06:45:56.716423 extend-filesystems[1546]: Checking size of /dev/vda9 Nov 24 06:45:56.704041 oslogin_cache_refresh[1547]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 24 06:45:56.711184 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 24 06:45:56.704100 oslogin_cache_refresh[1547]: Refreshing group entry cache Nov 24 06:45:56.713014 oslogin_cache_refresh[1547]: Failure getting groups, quitting Nov 24 06:45:56.713023 oslogin_cache_refresh[1547]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 24 06:45:56.722062 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 24 06:45:56.724686 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 24 06:45:56.725237 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 24 06:45:56.726360 systemd[1]: Starting update-engine.service - Update Engine... Nov 24 06:45:56.729281 extend-filesystems[1546]: Resized partition /dev/vda9 Nov 24 06:45:56.730845 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 24 06:45:56.734073 extend-filesystems[1572]: resize2fs 1.47.3 (8-Jul-2025) Nov 24 06:45:56.740896 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Nov 24 06:45:56.744217 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 24 06:45:56.747425 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 24 06:45:56.747744 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 24 06:45:56.748300 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Nov 24 06:45:56.748551 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Nov 24 06:45:56.751799 systemd[1]: motdgen.service: Deactivated successfully. Nov 24 06:45:56.752104 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 24 06:45:56.752641 jq[1570]: true Nov 24 06:45:56.756438 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 24 06:45:56.756688 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Nov 24 06:45:56.775562 update_engine[1568]: I20251124 06:45:56.768347 1568 main.cc:92] Flatcar Update Engine starting Nov 24 06:45:56.782934 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Nov 24 06:45:56.808503 extend-filesystems[1572]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 24 06:45:56.808503 extend-filesystems[1572]: old_desc_blocks = 1, new_desc_blocks = 1 Nov 24 06:45:56.808503 extend-filesystems[1572]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Nov 24 06:45:56.810559 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 24 06:45:56.811259 extend-filesystems[1546]: Resized filesystem in /dev/vda9 Nov 24 06:45:56.821039 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 24 06:45:56.821591 systemd-logind[1560]: Watching system buttons on /dev/input/event2 (Power Button) Nov 24 06:45:56.821610 systemd-logind[1560]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 24 06:45:56.822104 (ntainerd)[1580]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 24 06:45:56.825781 systemd-logind[1560]: New seat seat0. Nov 24 06:45:56.831235 kernel: kvm_amd: TSC scaling supported Nov 24 06:45:56.831265 kernel: kvm_amd: Nested Virtualization enabled Nov 24 06:45:56.831288 kernel: kvm_amd: Nested Paging enabled Nov 24 06:45:56.831300 kernel: kvm_amd: LBR virtualization supported Nov 24 06:45:56.831312 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Nov 24 06:45:56.831325 kernel: kvm_amd: Virtual GIF supported Nov 24 06:45:56.831337 jq[1578]: true Nov 24 06:45:56.838650 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 24 06:45:56.841464 dbus-daemon[1543]: [system] SELinux support is enabled Nov 24 06:45:56.841837 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 24 06:45:56.848155 systemd[1]: Started systemd-logind.service - User Login Management. Nov 24 06:45:56.851751 update_engine[1568]: I20251124 06:45:56.851073 1568 update_check_scheduler.cc:74] Next update check in 5m19s Nov 24 06:45:56.862125 tar[1577]: linux-amd64/LICENSE Nov 24 06:45:56.862341 tar[1577]: linux-amd64/helm Nov 24 06:45:56.866687 dbus-daemon[1543]: [system] Successfully activated service 'org.freedesktop.systemd1' Nov 24 06:45:56.871797 systemd[1]: Started update-engine.service - Update Engine. Nov 24 06:45:56.874907 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 24 06:45:56.875045 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 24 06:45:56.877141 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 24 06:45:56.877253 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 24 06:45:56.881349 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Nov 24 06:45:56.902899 sshd_keygen[1575]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 24 06:45:56.909225 kernel: EDAC MC: Ver: 3.0.0 Nov 24 06:45:56.912238 bash[1613]: Updated "/home/core/.ssh/authorized_keys" Nov 24 06:45:56.917312 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 24 06:45:56.920144 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Nov 24 06:45:56.934303 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 24 06:45:56.940387 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 24 06:45:56.949566 locksmithd[1614]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 24 06:45:56.959342 systemd[1]: issuegen.service: Deactivated successfully. Nov 24 06:45:56.959601 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 24 06:45:56.963625 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 24 06:45:56.980828 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 24 06:45:56.985287 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 24 06:45:56.989305 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 24 06:45:56.991398 systemd[1]: Reached target getty.target - Login Prompts. Nov 24 06:45:57.038354 containerd[1580]: time="2025-11-24T06:45:57Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Nov 24 06:45:57.039624 containerd[1580]: time="2025-11-24T06:45:57.039573519Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Nov 24 06:45:57.048251 containerd[1580]: time="2025-11-24T06:45:57.047974120Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="15.218µs" Nov 24 06:45:57.048251 containerd[1580]: time="2025-11-24T06:45:57.048021349Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Nov 24 06:45:57.048251 containerd[1580]: time="2025-11-24T06:45:57.048049131Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Nov 24 06:45:57.048321 containerd[1580]: time="2025-11-24T06:45:57.048287007Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Nov 24 06:45:57.048321 containerd[1580]: time="2025-11-24T06:45:57.048301534Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 24 06:45:57.048369 containerd[1580]: time="2025-11-24T06:45:57.048328985Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 24 06:45:57.048416 containerd[1580]: time="2025-11-24T06:45:57.048396402Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 24 06:45:57.048416 containerd[1580]: time="2025-11-24T06:45:57.048411721Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 24 06:45:57.048697 containerd[1580]: time="2025-11-24T06:45:57.048666358Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs 
filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 24 06:45:57.048697 containerd[1580]: time="2025-11-24T06:45:57.048684372Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 24 06:45:57.048697 containerd[1580]: time="2025-11-24T06:45:57.048695402Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 24 06:45:57.048763 containerd[1580]: time="2025-11-24T06:45:57.048705051Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Nov 24 06:45:57.048815 containerd[1580]: time="2025-11-24T06:45:57.048790711Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Nov 24 06:45:57.049071 containerd[1580]: time="2025-11-24T06:45:57.049042914Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 24 06:45:57.049096 containerd[1580]: time="2025-11-24T06:45:57.049077158Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 24 06:45:57.049096 containerd[1580]: time="2025-11-24T06:45:57.049087718Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 24 06:45:57.049143 containerd[1580]: time="2025-11-24T06:45:57.049113146Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Nov 24 06:45:57.049340 containerd[1580]: time="2025-11-24T06:45:57.049312840Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Nov 24 06:45:57.049404 containerd[1580]: time="2025-11-24T06:45:57.049386929Z" level=info msg="metadata content store policy set" policy=shared Nov 24 06:45:57.054456 containerd[1580]: time="2025-11-24T06:45:57.054420557Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Nov 24 06:45:57.054489 containerd[1580]: time="2025-11-24T06:45:57.054462546Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Nov 24 06:45:57.054489 containerd[1580]: time="2025-11-24T06:45:57.054480129Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Nov 24 06:45:57.054526 containerd[1580]: time="2025-11-24T06:45:57.054495528Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Nov 24 06:45:57.054569 containerd[1580]: time="2025-11-24T06:45:57.054524161Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Nov 24 06:45:57.054590 containerd[1580]: time="2025-11-24T06:45:57.054566080Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Nov 24 06:45:57.054590 containerd[1580]: time="2025-11-24T06:45:57.054585096Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Nov 24 06:45:57.054627 containerd[1580]: time="2025-11-24T06:45:57.054600174Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Nov 24 
06:45:57.054627 containerd[1580]: time="2025-11-24T06:45:57.054613920Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Nov 24 06:45:57.054666 containerd[1580]: time="2025-11-24T06:45:57.054626012Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Nov 24 06:45:57.054666 containerd[1580]: time="2025-11-24T06:45:57.054637784Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Nov 24 06:45:57.054666 containerd[1580]: time="2025-11-24T06:45:57.054653063Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Nov 24 06:45:57.054813 containerd[1580]: time="2025-11-24T06:45:57.054781694Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Nov 24 06:45:57.054813 containerd[1580]: time="2025-11-24T06:45:57.054807743Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Nov 24 06:45:57.054854 containerd[1580]: time="2025-11-24T06:45:57.054821910Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Nov 24 06:45:57.054854 containerd[1580]: time="2025-11-24T06:45:57.054834072Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Nov 24 06:45:57.054854 containerd[1580]: time="2025-11-24T06:45:57.054845163Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Nov 24 06:45:57.054932 containerd[1580]: time="2025-11-24T06:45:57.054862325Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Nov 24 06:45:57.054932 containerd[1580]: time="2025-11-24T06:45:57.054893754Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Nov 24 06:45:57.054932 containerd[1580]: time="2025-11-24T06:45:57.054905416Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Nov 24 06:45:57.054932 containerd[1580]: time="2025-11-24T06:45:57.054921927Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Nov 24 06:45:57.054932 containerd[1580]: time="2025-11-24T06:45:57.054933168Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Nov 24 06:45:57.055028 containerd[1580]: time="2025-11-24T06:45:57.054944860Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Nov 24 06:45:57.055028 containerd[1580]: time="2025-11-24T06:45:57.054988612Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Nov 24 06:45:57.055028 containerd[1580]: time="2025-11-24T06:45:57.055000775Z" level=info msg="Start snapshots syncer" Nov 24 06:45:57.055081 containerd[1580]: time="2025-11-24T06:45:57.055039387Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Nov 24 06:45:57.055314 containerd[1580]: time="2025-11-24T06:45:57.055271102Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 24 06:45:57.055424 containerd[1580]: time="2025-11-24T06:45:57.055316096Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 24 06:45:57.055424 containerd[1580]: time="2025-11-24T06:45:57.055370348Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 24 06:45:57.055483 containerd[1580]: time="2025-11-24T06:45:57.055465807Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 24 06:45:57.055506 containerd[1580]: time="2025-11-24T06:45:57.055486155Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 24 06:45:57.055506 containerd[1580]: time="2025-11-24T06:45:57.055497687Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 24 06:45:57.055560 containerd[1580]: time="2025-11-24T06:45:57.055507415Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 24 06:45:57.055560 containerd[1580]: time="2025-11-24T06:45:57.055519137Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 24 06:45:57.055560 containerd[1580]: time="2025-11-24T06:45:57.055539084Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 24 06:45:57.055560 containerd[1580]: time="2025-11-24T06:45:57.055549835Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 24 06:45:57.055633 containerd[1580]: time="2025-11-24T06:45:57.055571906Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 24 06:45:57.055633 containerd[1580]: 
time="2025-11-24T06:45:57.055582776Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 24 06:45:57.055633 containerd[1580]: time="2025-11-24T06:45:57.055599518Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 24 06:45:57.055633 containerd[1580]: time="2025-11-24T06:45:57.055630155Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 24 06:45:57.055703 containerd[1580]: time="2025-11-24T06:45:57.055644522Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 24 06:45:57.055703 containerd[1580]: time="2025-11-24T06:45:57.055656224Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 24 06:45:57.055703 containerd[1580]: time="2025-11-24T06:45:57.055668056Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 24 06:45:57.055703 containerd[1580]: time="2025-11-24T06:45:57.055678175Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 24 06:45:57.055703 containerd[1580]: time="2025-11-24T06:45:57.055697962Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 24 06:45:57.055795 containerd[1580]: time="2025-11-24T06:45:57.055718681Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 24 06:45:57.055795 containerd[1580]: time="2025-11-24T06:45:57.055740462Z" level=info msg="runtime interface created" Nov 24 06:45:57.055795 containerd[1580]: time="2025-11-24T06:45:57.055747826Z" level=info msg="created NRI interface" Nov 24 06:45:57.055795 containerd[1580]: time="2025-11-24T06:45:57.055757434Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 24 06:45:57.055795 containerd[1580]: time="2025-11-24T06:45:57.055770839Z" level=info msg="Connect containerd service" Nov 24 06:45:57.055795 containerd[1580]: time="2025-11-24T06:45:57.055791257Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 24 06:45:57.056543 containerd[1580]: time="2025-11-24T06:45:57.056502811Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 24 06:45:57.133505 containerd[1580]: time="2025-11-24T06:45:57.133387540Z" level=info msg="Start subscribing containerd event" Nov 24 06:45:57.133505 containerd[1580]: time="2025-11-24T06:45:57.133462571Z" level=info msg="Start recovering state" Nov 24 06:45:57.133625 containerd[1580]: time="2025-11-24T06:45:57.133555665Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 24 06:45:57.133625 containerd[1580]: time="2025-11-24T06:45:57.133607883Z" level=info msg="Start event monitor" Nov 24 06:45:57.133625 containerd[1580]: time="2025-11-24T06:45:57.133621509Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Nov 24 06:45:57.133694 containerd[1580]: time="2025-11-24T06:45:57.133623523Z" level=info msg="Start cni network conf syncer for default" Nov 24 06:45:57.133694 containerd[1580]: time="2025-11-24T06:45:57.133642879Z" level=info msg="Start streaming server" Nov 24 06:45:57.133694 containerd[1580]: time="2025-11-24T06:45:57.133654741Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 24 06:45:57.133694 containerd[1580]: time="2025-11-24T06:45:57.133662315Z" level=info msg="runtime interface starting up..." Nov 24 06:45:57.133694 containerd[1580]: time="2025-11-24T06:45:57.133668447Z" level=info msg="starting plugins..." Nov 24 06:45:57.133694 containerd[1580]: time="2025-11-24T06:45:57.133683465Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 24 06:45:57.134091 containerd[1580]: time="2025-11-24T06:45:57.133854606Z" level=info msg="containerd successfully booted in 0.096025s" Nov 24 06:45:57.134023 systemd[1]: Started containerd.service - containerd container runtime. Nov 24 06:45:57.164652 tar[1577]: linux-amd64/README.md Nov 24 06:45:57.189501 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 24 06:45:57.798102 systemd-networkd[1478]: eth0: Gained IPv6LL Nov 24 06:45:57.801100 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 24 06:45:57.803792 systemd[1]: Reached target network-online.target - Network is Online. Nov 24 06:45:57.807054 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Nov 24 06:45:57.810307 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 06:45:57.812375 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 24 06:45:57.851114 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 24 06:45:57.853459 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 24 06:45:57.853725 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Nov 24 06:45:57.856425 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 24 06:45:58.557370 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 06:45:58.559954 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 24 06:45:58.562427 (kubelet)[1685]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 24 06:45:58.563858 systemd[1]: Startup finished in 2.868s (kernel) + 5.621s (initrd) + 4.177s (userspace) = 12.667s. Nov 24 06:45:58.993063 kubelet[1685]: E1124 06:45:58.992944 1685 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 24 06:45:58.997153 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 24 06:45:58.997360 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 24 06:45:58.997767 systemd[1]: kubelet.service: Consumed 988ms CPU time, 265.4M memory peak. Nov 24 06:46:02.583010 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Nov 24 06:46:02.584141 systemd[1]: Started sshd@0-10.0.0.28:22-10.0.0.1:52152.service - OpenSSH per-connection server daemon (10.0.0.1:52152). Nov 24 06:46:02.657769 sshd[1698]: Accepted publickey for core from 10.0.0.1 port 52152 ssh2: RSA SHA256:Sf0YHjxtsdVO/uubGACjTK34hLK2zLZsCrSD2NZWg/o Nov 24 06:46:02.659649 sshd-session[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 06:46:02.666202 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 24 06:46:02.667351 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 24 06:46:02.675046 systemd-logind[1560]: New session 1 of user core. Nov 24 06:46:02.689871 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 24 06:46:02.692679 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 24 06:46:02.713095 (systemd)[1703]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 24 06:46:02.715483 systemd-logind[1560]: New session c1 of user core. Nov 24 06:46:02.881868 systemd[1703]: Queued start job for default target default.target. Nov 24 06:46:02.900481 systemd[1703]: Created slice app.slice - User Application Slice. Nov 24 06:46:02.900515 systemd[1703]: Reached target paths.target - Paths. Nov 24 06:46:02.900569 systemd[1703]: Reached target timers.target - Timers. Nov 24 06:46:02.902290 systemd[1703]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 24 06:46:02.914086 systemd[1703]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 24 06:46:02.914224 systemd[1703]: Reached target sockets.target - Sockets. Nov 24 06:46:02.914277 systemd[1703]: Reached target basic.target - Basic System. Nov 24 06:46:02.914330 systemd[1703]: Reached target default.target - Main User Target. Nov 24 06:46:02.914383 systemd[1703]: Startup finished in 193ms. Nov 24 06:46:02.914531 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 24 06:46:02.916111 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 24 06:46:02.982415 systemd[1]: Started sshd@1-10.0.0.28:22-10.0.0.1:52164.service - OpenSSH per-connection server daemon (10.0.0.1:52164). Nov 24 06:46:03.044580 sshd[1714]: Accepted publickey for core from 10.0.0.1 port 52164 ssh2: RSA SHA256:Sf0YHjxtsdVO/uubGACjTK34hLK2zLZsCrSD2NZWg/o Nov 24 06:46:03.046663 sshd-session[1714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 06:46:03.051320 systemd-logind[1560]: New session 2 of user core. Nov 24 06:46:03.065080 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 24 06:46:03.118952 sshd[1717]: Connection closed by 10.0.0.1 port 52164 Nov 24 06:46:03.119304 sshd-session[1714]: pam_unix(sshd:session): session closed for user core Nov 24 06:46:03.135508 systemd[1]: sshd@1-10.0.0.28:22-10.0.0.1:52164.service: Deactivated successfully. Nov 24 06:46:03.137318 systemd[1]: session-2.scope: Deactivated successfully. Nov 24 06:46:03.137970 systemd-logind[1560]: Session 2 logged out. Waiting for processes to exit. Nov 24 06:46:03.140332 systemd[1]: Started sshd@2-10.0.0.28:22-10.0.0.1:52178.service - OpenSSH per-connection server daemon (10.0.0.1:52178). Nov 24 06:46:03.140913 systemd-logind[1560]: Removed session 2. 
Nov 24 06:46:03.190866 sshd[1723]: Accepted publickey for core from 10.0.0.1 port 52178 ssh2: RSA SHA256:Sf0YHjxtsdVO/uubGACjTK34hLK2zLZsCrSD2NZWg/o Nov 24 06:46:03.192525 sshd-session[1723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 06:46:03.197255 systemd-logind[1560]: New session 3 of user core. Nov 24 06:46:03.206991 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 24 06:46:03.256126 sshd[1726]: Connection closed by 10.0.0.1 port 52178 Nov 24 06:46:03.256514 sshd-session[1723]: pam_unix(sshd:session): session closed for user core Nov 24 06:46:03.265535 systemd[1]: sshd@2-10.0.0.28:22-10.0.0.1:52178.service: Deactivated successfully. Nov 24 06:46:03.267404 systemd[1]: session-3.scope: Deactivated successfully. Nov 24 06:46:03.268215 systemd-logind[1560]: Session 3 logged out. Waiting for processes to exit. Nov 24 06:46:03.270978 systemd[1]: Started sshd@3-10.0.0.28:22-10.0.0.1:52188.service - OpenSSH per-connection server daemon (10.0.0.1:52188). Nov 24 06:46:03.271545 systemd-logind[1560]: Removed session 3. Nov 24 06:46:03.323933 sshd[1732]: Accepted publickey for core from 10.0.0.1 port 52188 ssh2: RSA SHA256:Sf0YHjxtsdVO/uubGACjTK34hLK2zLZsCrSD2NZWg/o Nov 24 06:46:03.325300 sshd-session[1732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 06:46:03.329828 systemd-logind[1560]: New session 4 of user core. Nov 24 06:46:03.344030 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 24 06:46:03.398135 sshd[1736]: Connection closed by 10.0.0.1 port 52188 Nov 24 06:46:03.398794 sshd-session[1732]: pam_unix(sshd:session): session closed for user core Nov 24 06:46:03.407232 systemd[1]: sshd@3-10.0.0.28:22-10.0.0.1:52188.service: Deactivated successfully. Nov 24 06:46:03.408823 systemd[1]: session-4.scope: Deactivated successfully. Nov 24 06:46:03.409503 systemd-logind[1560]: Session 4 logged out. Waiting for processes to exit. Nov 24 06:46:03.412043 systemd[1]: Started sshd@4-10.0.0.28:22-10.0.0.1:52202.service - OpenSSH per-connection server daemon (10.0.0.1:52202). Nov 24 06:46:03.412547 systemd-logind[1560]: Removed session 4. Nov 24 06:46:03.471980 sshd[1742]: Accepted publickey for core from 10.0.0.1 port 52202 ssh2: RSA SHA256:Sf0YHjxtsdVO/uubGACjTK34hLK2zLZsCrSD2NZWg/o Nov 24 06:46:03.473546 sshd-session[1742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 06:46:03.477913 systemd-logind[1560]: New session 5 of user core. Nov 24 06:46:03.488047 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 24 06:46:03.546750 sudo[1746]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 24 06:46:03.547147 sudo[1746]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 24 06:46:03.561489 sudo[1746]: pam_unix(sudo:session): session closed for user root Nov 24 06:46:03.563163 sshd[1745]: Connection closed by 10.0.0.1 port 52202 Nov 24 06:46:03.563506 sshd-session[1742]: pam_unix(sshd:session): session closed for user core Nov 24 06:46:03.575651 systemd[1]: sshd@4-10.0.0.28:22-10.0.0.1:52202.service: Deactivated successfully. Nov 24 06:46:03.577493 systemd[1]: session-5.scope: Deactivated successfully. Nov 24 06:46:03.578268 systemd-logind[1560]: Session 5 logged out. Waiting for processes to exit. Nov 24 06:46:03.581277 systemd[1]: Started sshd@5-10.0.0.28:22-10.0.0.1:52218.service - OpenSSH per-connection server daemon (10.0.0.1:52218). 
Nov 24 06:46:03.582107 systemd-logind[1560]: Removed session 5. Nov 24 06:46:03.642092 sshd[1752]: Accepted publickey for core from 10.0.0.1 port 52218 ssh2: RSA SHA256:Sf0YHjxtsdVO/uubGACjTK34hLK2zLZsCrSD2NZWg/o Nov 24 06:46:03.643385 sshd-session[1752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 06:46:03.647455 systemd-logind[1560]: New session 6 of user core. Nov 24 06:46:03.656996 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 24 06:46:03.711528 sudo[1757]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 24 06:46:03.711853 sudo[1757]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 24 06:46:03.944474 sudo[1757]: pam_unix(sudo:session): session closed for user root Nov 24 06:46:03.951065 sudo[1756]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 24 06:46:03.951386 sudo[1756]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 24 06:46:03.961235 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 24 06:46:04.009515 augenrules[1779]: No rules Nov 24 06:46:04.011215 systemd[1]: audit-rules.service: Deactivated successfully. Nov 24 06:46:04.011528 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 24 06:46:04.012780 sudo[1756]: pam_unix(sudo:session): session closed for user root Nov 24 06:46:04.014511 sshd[1755]: Connection closed by 10.0.0.1 port 52218 Nov 24 06:46:04.014823 sshd-session[1752]: pam_unix(sshd:session): session closed for user core Nov 24 06:46:04.028019 systemd[1]: sshd@5-10.0.0.28:22-10.0.0.1:52218.service: Deactivated successfully. Nov 24 06:46:04.029979 systemd[1]: session-6.scope: Deactivated successfully. Nov 24 06:46:04.030811 systemd-logind[1560]: Session 6 logged out. Waiting for processes to exit. Nov 24 06:46:04.033953 systemd[1]: Started sshd@6-10.0.0.28:22-10.0.0.1:52230.service - OpenSSH per-connection server daemon (10.0.0.1:52230). Nov 24 06:46:04.034622 systemd-logind[1560]: Removed session 6. Nov 24 06:46:04.094694 sshd[1788]: Accepted publickey for core from 10.0.0.1 port 52230 ssh2: RSA SHA256:Sf0YHjxtsdVO/uubGACjTK34hLK2zLZsCrSD2NZWg/o Nov 24 06:46:04.096297 sshd-session[1788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 06:46:04.100798 systemd-logind[1560]: New session 7 of user core. Nov 24 06:46:04.111029 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 24 06:46:04.163657 sudo[1792]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 24 06:46:04.163970 sudo[1792]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 24 06:46:04.470555 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Nov 24 06:46:04.487213 (dockerd)[1812]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 24 06:46:04.714177 dockerd[1812]: time="2025-11-24T06:46:04.714103601Z" level=info msg="Starting up" Nov 24 06:46:04.715063 dockerd[1812]: time="2025-11-24T06:46:04.715020781Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 24 06:46:04.727185 dockerd[1812]: time="2025-11-24T06:46:04.727088738Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 24 06:46:04.787246 dockerd[1812]: time="2025-11-24T06:46:04.787189577Z" level=info msg="Loading containers: start." Nov 24 06:46:04.797914 kernel: Initializing XFRM netlink socket Nov 24 06:46:05.058958 systemd-networkd[1478]: docker0: Link UP Nov 24 06:46:05.063968 dockerd[1812]: time="2025-11-24T06:46:05.063933261Z" level=info msg="Loading containers: done." Nov 24 06:46:05.077436 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1765616491-merged.mount: Deactivated successfully. Nov 24 06:46:05.078814 dockerd[1812]: time="2025-11-24T06:46:05.078772425Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 24 06:46:05.078909 dockerd[1812]: time="2025-11-24T06:46:05.078853547Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 24 06:46:05.078967 dockerd[1812]: time="2025-11-24T06:46:05.078949767Z" level=info msg="Initializing buildkit" Nov 24 06:46:05.108838 dockerd[1812]: time="2025-11-24T06:46:05.108801790Z" level=info msg="Completed buildkit initialization" Nov 24 06:46:05.112911 dockerd[1812]: time="2025-11-24T06:46:05.112887000Z" level=info msg="Daemon has completed initialization" Nov 24 06:46:05.113027 dockerd[1812]: time="2025-11-24T06:46:05.112959045Z" level=info msg="API listen on /run/docker.sock" Nov 24 06:46:05.113073 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 24 06:46:05.827949 containerd[1580]: time="2025-11-24T06:46:05.827909877Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.6\"" Nov 24 06:46:06.498938 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2947503760.mount: Deactivated successfully. 
Nov 24 06:46:07.467926 containerd[1580]: time="2025-11-24T06:46:07.467857729Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:46:07.468642 containerd[1580]: time="2025-11-24T06:46:07.468580755Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.6: active requests=0, bytes read=30113213" Nov 24 06:46:07.469837 containerd[1580]: time="2025-11-24T06:46:07.469787818Z" level=info msg="ImageCreate event name:\"sha256:74cc54db7bbcced6056c8430786ff02557adfb2ad9e548fa2ae02ff4a3b42c73\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:46:07.472309 containerd[1580]: time="2025-11-24T06:46:07.472258821Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:7c1fe7a61835371b6f42e1acbd87ecc4c456930785ae652e3ce7bcecf8cd4d9c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:46:07.473269 containerd[1580]: time="2025-11-24T06:46:07.473221496Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.6\" with image id \"sha256:74cc54db7bbcced6056c8430786ff02557adfb2ad9e548fa2ae02ff4a3b42c73\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:7c1fe7a61835371b6f42e1acbd87ecc4c456930785ae652e3ce7bcecf8cd4d9c\", size \"30109812\" in 1.645272686s" Nov 24 06:46:07.473269 containerd[1580]: time="2025-11-24T06:46:07.473266330Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.6\" returns image reference \"sha256:74cc54db7bbcced6056c8430786ff02557adfb2ad9e548fa2ae02ff4a3b42c73\"" Nov 24 06:46:07.473760 containerd[1580]: time="2025-11-24T06:46:07.473736051Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.6\"" Nov 24 06:46:08.699457 containerd[1580]: time="2025-11-24T06:46:08.699389391Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:46:08.700273 containerd[1580]: time="2025-11-24T06:46:08.700209338Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.6: active requests=0, bytes read=26018107" Nov 24 06:46:08.701438 containerd[1580]: time="2025-11-24T06:46:08.701406583Z" level=info msg="ImageCreate event name:\"sha256:9290eb63dc141c2f8d019c41484908f600f19daccfbc45c0a856b067ca47b0af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:46:08.703833 containerd[1580]: time="2025-11-24T06:46:08.703785313Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:fb1f45370081166f032a2ed3d41deaccc6bb277b4d9841d4aaebad7aada930c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:46:08.704726 containerd[1580]: time="2025-11-24T06:46:08.704681604Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.6\" with image id \"sha256:9290eb63dc141c2f8d019c41484908f600f19daccfbc45c0a856b067ca47b0af\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:fb1f45370081166f032a2ed3d41deaccc6bb277b4d9841d4aaebad7aada930c5\", size \"27675143\" in 1.230919984s" Nov 24 06:46:08.704726 containerd[1580]: time="2025-11-24T06:46:08.704721518Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.6\" returns image reference \"sha256:9290eb63dc141c2f8d019c41484908f600f19daccfbc45c0a856b067ca47b0af\"" Nov 24 06:46:08.705184 
containerd[1580]: time="2025-11-24T06:46:08.705160542Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.6\"" Nov 24 06:46:09.247740 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 24 06:46:09.249373 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 06:46:09.489972 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 06:46:09.495431 (kubelet)[2098]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 24 06:46:09.544144 kubelet[2098]: E1124 06:46:09.543964 2098 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 24 06:46:09.550743 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 24 06:46:09.550953 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 24 06:46:09.551317 systemd[1]: kubelet.service: Consumed 236ms CPU time, 111.6M memory peak. Nov 24 06:46:10.573118 containerd[1580]: time="2025-11-24T06:46:10.573058991Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:46:10.574031 containerd[1580]: time="2025-11-24T06:46:10.573984135Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.6: active requests=0, bytes read=20156482" Nov 24 06:46:10.575367 containerd[1580]: time="2025-11-24T06:46:10.575328746Z" level=info msg="ImageCreate event name:\"sha256:6109fc16b0291b0728bc133620fe1906c51d999917dd3add0744a906c0fb7eef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:46:10.577827 containerd[1580]: time="2025-11-24T06:46:10.577798167Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:02bfac33158a2323cd2d4ba729cb9d7be695b172be21dfd3740e4a608d39a378\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:46:10.578669 containerd[1580]: time="2025-11-24T06:46:10.578632200Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.6\" with image id \"sha256:6109fc16b0291b0728bc133620fe1906c51d999917dd3add0744a906c0fb7eef\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:02bfac33158a2323cd2d4ba729cb9d7be695b172be21dfd3740e4a608d39a378\", size \"21813536\" in 1.873443466s" Nov 24 06:46:10.578669 containerd[1580]: time="2025-11-24T06:46:10.578663780Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.6\" returns image reference \"sha256:6109fc16b0291b0728bc133620fe1906c51d999917dd3add0744a906c0fb7eef\"" Nov 24 06:46:10.579276 containerd[1580]: time="2025-11-24T06:46:10.579236223Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.6\"" Nov 24 06:46:11.673585 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3988880707.mount: Deactivated successfully. 
Nov 24 06:46:12.335776 containerd[1580]: time="2025-11-24T06:46:12.335716673Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:46:12.336708 containerd[1580]: time="2025-11-24T06:46:12.336681573Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.6: active requests=0, bytes read=31929138" Nov 24 06:46:12.337919 containerd[1580]: time="2025-11-24T06:46:12.337895919Z" level=info msg="ImageCreate event name:\"sha256:87c5a2e6c1d1ea6f96a0b5d43f96c5066e8ff78c9c6adb335631fc9c90cb0a19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:46:12.340192 containerd[1580]: time="2025-11-24T06:46:12.340142351Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:9119bd7ae5249b9d8bdd14a7719a0ebf744de112fe618008adca3094a12b67fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:46:12.340591 containerd[1580]: time="2025-11-24T06:46:12.340562519Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.6\" with image id \"sha256:87c5a2e6c1d1ea6f96a0b5d43f96c5066e8ff78c9c6adb335631fc9c90cb0a19\", repo tag \"registry.k8s.io/kube-proxy:v1.33.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:9119bd7ae5249b9d8bdd14a7719a0ebf744de112fe618008adca3094a12b67fc\", size \"31928157\" in 1.761297692s" Nov 24 06:46:12.340624 containerd[1580]: time="2025-11-24T06:46:12.340590973Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.6\" returns image reference \"sha256:87c5a2e6c1d1ea6f96a0b5d43f96c5066e8ff78c9c6adb335631fc9c90cb0a19\"" Nov 24 06:46:12.341114 containerd[1580]: time="2025-11-24T06:46:12.340960035Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Nov 24 06:46:13.024476 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3146849348.mount: Deactivated successfully. 
Nov 24 06:46:13.770817 containerd[1580]: time="2025-11-24T06:46:13.770749409Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:46:13.771478 containerd[1580]: time="2025-11-24T06:46:13.771434714Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Nov 24 06:46:13.772605 containerd[1580]: time="2025-11-24T06:46:13.772571796Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:46:13.775164 containerd[1580]: time="2025-11-24T06:46:13.775131766Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:46:13.776134 containerd[1580]: time="2025-11-24T06:46:13.776109059Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.435125199s" Nov 24 06:46:13.776182 containerd[1580]: time="2025-11-24T06:46:13.776136550Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Nov 24 06:46:13.776561 containerd[1580]: time="2025-11-24T06:46:13.776539085Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 24 06:46:14.358807 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount895385021.mount: Deactivated successfully. 
Nov 24 06:46:14.365673 containerd[1580]: time="2025-11-24T06:46:14.365617928Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 24 06:46:14.366467 containerd[1580]: time="2025-11-24T06:46:14.366430251Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Nov 24 06:46:14.367650 containerd[1580]: time="2025-11-24T06:46:14.367611716Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 24 06:46:14.369624 containerd[1580]: time="2025-11-24T06:46:14.369563045Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 24 06:46:14.370166 containerd[1580]: time="2025-11-24T06:46:14.370117574Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 593.552271ms" Nov 24 06:46:14.370166 containerd[1580]: time="2025-11-24T06:46:14.370146178Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 24 06:46:14.370640 containerd[1580]: time="2025-11-24T06:46:14.370590221Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Nov 24 06:46:14.993986 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount19582254.mount: Deactivated successfully. 
Nov 24 06:46:16.720757 containerd[1580]: time="2025-11-24T06:46:16.720681421Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:46:16.721587 containerd[1580]: time="2025-11-24T06:46:16.721527517Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58926227" Nov 24 06:46:16.723096 containerd[1580]: time="2025-11-24T06:46:16.723048720Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:46:16.725847 containerd[1580]: time="2025-11-24T06:46:16.725795510Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:46:16.726754 containerd[1580]: time="2025-11-24T06:46:16.726725193Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.356112721s" Nov 24 06:46:16.726790 containerd[1580]: time="2025-11-24T06:46:16.726752494Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Nov 24 06:46:19.747832 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 24 06:46:19.749631 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 06:46:19.954038 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 06:46:19.958667 (kubelet)[2262]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 24 06:46:19.994449 kubelet[2262]: E1124 06:46:19.994387 2262 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 24 06:46:19.998630 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 24 06:46:19.998908 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 24 06:46:19.999322 systemd[1]: kubelet.service: Consumed 198ms CPU time, 112.2M memory peak. Nov 24 06:46:20.210862 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 06:46:20.211075 systemd[1]: kubelet.service: Consumed 198ms CPU time, 112.2M memory peak. Nov 24 06:46:20.213102 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 06:46:20.236736 systemd[1]: Reload requested from client PID 2278 ('systemctl') (unit session-7.scope)... Nov 24 06:46:20.236754 systemd[1]: Reloading... Nov 24 06:46:20.325923 zram_generator::config[2324]: No configuration found. Nov 24 06:46:20.892937 systemd[1]: Reloading finished in 655 ms. Nov 24 06:46:20.959870 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 24 06:46:20.959988 systemd[1]: kubelet.service: Failed with result 'signal'. 
Nov 24 06:46:20.960288 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 06:46:20.960336 systemd[1]: kubelet.service: Consumed 159ms CPU time, 98.2M memory peak. Nov 24 06:46:20.961935 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 06:46:21.141589 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 06:46:21.146004 (kubelet)[2369]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 24 06:46:21.185508 kubelet[2369]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 24 06:46:21.185508 kubelet[2369]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 24 06:46:21.185508 kubelet[2369]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 24 06:46:21.185915 kubelet[2369]: I1124 06:46:21.185555 2369 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 24 06:46:22.201242 kubelet[2369]: I1124 06:46:22.201191 2369 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 24 06:46:22.201242 kubelet[2369]: I1124 06:46:22.201221 2369 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 24 06:46:22.201758 kubelet[2369]: I1124 06:46:22.201443 2369 server.go:956] "Client rotation is on, will bootstrap in background" Nov 24 06:46:22.229231 kubelet[2369]: E1124 06:46:22.229155 2369 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.28:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 24 06:46:22.229985 kubelet[2369]: I1124 06:46:22.229929 2369 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 24 06:46:22.239755 kubelet[2369]: I1124 06:46:22.239685 2369 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 24 06:46:22.246083 kubelet[2369]: I1124 06:46:22.246054 2369 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 24 06:46:22.246313 kubelet[2369]: I1124 06:46:22.246262 2369 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 24 06:46:22.246460 kubelet[2369]: I1124 06:46:22.246289 2369 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 24 06:46:22.246460 kubelet[2369]: I1124 06:46:22.246454 2369 topology_manager.go:138] "Creating topology manager with none policy" Nov 24 06:46:22.246460 kubelet[2369]: I1124 06:46:22.246463 2369 container_manager_linux.go:303] "Creating device plugin manager" Nov 24 06:46:22.247257 kubelet[2369]: I1124 06:46:22.247219 2369 state_mem.go:36] "Initialized new in-memory state store" Nov 24 06:46:22.249265 kubelet[2369]: I1124 06:46:22.249225 2369 kubelet.go:480] "Attempting to sync node with API server" Nov 24 06:46:22.249323 kubelet[2369]: I1124 06:46:22.249272 2369 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 24 06:46:22.249323 kubelet[2369]: I1124 06:46:22.249304 2369 kubelet.go:386] "Adding apiserver pod source" Nov 24 06:46:22.250736 kubelet[2369]: I1124 06:46:22.250696 2369 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 24 06:46:22.255537 kubelet[2369]: I1124 06:46:22.255501 2369 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Nov 24 06:46:22.255966 kubelet[2369]: I1124 06:46:22.255942 2369 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 24 06:46:22.256748 kubelet[2369]: W1124 06:46:22.256715 2369 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
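The nodeConfig blob the container manager prints above is valid JSON, so the dense one-liner can be unpacked to see the part that matters operationally, the hard eviction thresholds. A short sketch over a trimmed copy of the logged structure (only eviction-related fields kept):

```python
import json

# Trimmed copy of the nodeConfig logged above.
node_config = json.loads("""
{
  "CgroupDriver": "systemd",
  "CgroupRoot": "/",
  "HardEvictionThresholds": [
    {"Signal": "nodefs.inodesFree",  "Operator": "LessThan", "Value": {"Quantity": null, "Percentage": 0.05}},
    {"Signal": "imagefs.available",  "Operator": "LessThan", "Value": {"Quantity": null, "Percentage": 0.15}},
    {"Signal": "imagefs.inodesFree", "Operator": "LessThan", "Value": {"Quantity": null, "Percentage": 0.05}},
    {"Signal": "memory.available",   "Operator": "LessThan", "Value": {"Quantity": "100Mi", "Percentage": 0}},
    {"Signal": "nodefs.available",   "Operator": "LessThan", "Value": {"Quantity": null, "Percentage": 0.1}}
  ]
}
""")

for t in node_config["HardEvictionThresholds"]:
    value = t["Value"]
    limit = value["Quantity"] or f'{value["Percentage"] * 100:.0f}%'
    print(f'{t["Signal"]:<20} {t["Operator"]} {limit}')
```

This prints memory.available LessThan 100Mi, nodefs.available LessThan 10%, and so on, which is the stock kubelet eviction policy rather than anything specific to this node.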
Nov 24 06:46:22.260478 kubelet[2369]: E1124 06:46:22.259318 2369 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.28:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 24 06:46:22.260478 kubelet[2369]: I1124 06:46:22.259380 2369 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 24 06:46:22.260478 kubelet[2369]: E1124 06:46:22.259398 2369 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.28:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 24 06:46:22.260478 kubelet[2369]: I1124 06:46:22.259418 2369 server.go:1289] "Started kubelet" Nov 24 06:46:22.261446 kubelet[2369]: I1124 06:46:22.260838 2369 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 24 06:46:22.261446 kubelet[2369]: I1124 06:46:22.261085 2369 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 24 06:46:22.261446 kubelet[2369]: I1124 06:46:22.261110 2369 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 24 06:46:22.261446 kubelet[2369]: I1124 06:46:22.261154 2369 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 24 06:46:22.262110 kubelet[2369]: I1124 06:46:22.261902 2369 server.go:317] "Adding debug handlers to kubelet server" Nov 24 06:46:22.262657 kubelet[2369]: I1124 06:46:22.262632 2369 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 24 06:46:22.262699 kubelet[2369]: I1124 06:46:22.262691 2369 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 24 06:46:22.262741 kubelet[2369]: I1124 06:46:22.262728 2369 reconciler.go:26] "Reconciler: start to sync state" Nov 24 06:46:22.265245 kubelet[2369]: E1124 06:46:22.265001 2369 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 24 06:46:22.267004 kubelet[2369]: E1124 06:46:22.265966 2369 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.28:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.28:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.187ade714c28f4a9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-24 06:46:22.259393705 +0000 UTC m=+1.109068294,LastTimestamp:2025-11-24 06:46:22.259393705 +0000 UTC m=+1.109068294,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 24 06:46:22.267222 kubelet[2369]: E1124 06:46:22.267195 2369 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.28:6443: connect: connection refused" interval="200ms" Nov 24 06:46:22.267286 kubelet[2369]: I1124 06:46:22.267213 2369 
dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 24 06:46:22.267517 kubelet[2369]: I1124 06:46:22.267494 2369 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 24 06:46:22.267699 kubelet[2369]: E1124 06:46:22.267309 2369 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.28:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 24 06:46:22.268727 kubelet[2369]: I1124 06:46:22.268708 2369 factory.go:223] Registration of the containerd container factory successfully Nov 24 06:46:22.268727 kubelet[2369]: I1124 06:46:22.268723 2369 factory.go:223] Registration of the systemd container factory successfully Nov 24 06:46:22.271949 kubelet[2369]: E1124 06:46:22.271920 2369 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 24 06:46:22.278738 kubelet[2369]: I1124 06:46:22.278543 2369 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 24 06:46:22.278738 kubelet[2369]: I1124 06:46:22.278557 2369 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 24 06:46:22.278738 kubelet[2369]: I1124 06:46:22.278573 2369 state_mem.go:36] "Initialized new in-memory state store" Nov 24 06:46:22.365453 kubelet[2369]: E1124 06:46:22.365423 2369 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 24 06:46:22.466011 kubelet[2369]: E1124 06:46:22.465853 2369 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 24 06:46:22.468426 kubelet[2369]: E1124 06:46:22.468396 2369 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.28:6443: connect: connection refused" interval="400ms" Nov 24 06:46:22.566946 kubelet[2369]: E1124 06:46:22.566859 2369 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 24 06:46:22.575681 kubelet[2369]: I1124 06:46:22.575624 2369 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 24 06:46:22.576982 kubelet[2369]: I1124 06:46:22.576953 2369 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 24 06:46:22.577029 kubelet[2369]: I1124 06:46:22.576985 2369 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 24 06:46:22.577029 kubelet[2369]: I1124 06:46:22.577009 2369 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
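The repeated "connection refused" errors against https://10.0.0.28:6443 are expected at this stage: this kubelet is the component that will start the kube-apiserver static pod, so every watch, event post, and lease request fails until that container is up. The lease controller's retry interval doubles while this lasts: 200ms and 400ms appear above, and 800ms follows further down. A sketch of that backoff shape, with the starting value taken from the log and the cap chosen only for illustration:

```python
def backoff_intervals(start_ms: int = 200, factor: int = 2, cap_ms: int = 7000):
    """Doubling retry interval, as seen in the lease-controller messages."""
    interval = start_ms
    while True:
        yield min(interval, cap_ms)
        interval *= factor

for attempt, wait_ms in zip(range(1, 6), backoff_intervals()):
    print(f"attempt {attempt}: wait {wait_ms} ms before retrying the lease request")
```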
Nov 24 06:46:22.577029 kubelet[2369]: I1124 06:46:22.577016 2369 kubelet.go:2436] "Starting kubelet main sync loop" Nov 24 06:46:22.577234 kubelet[2369]: E1124 06:46:22.577054 2369 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 24 06:46:22.577852 kubelet[2369]: E1124 06:46:22.577813 2369 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.28:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 24 06:46:22.609964 kubelet[2369]: I1124 06:46:22.609894 2369 policy_none.go:49] "None policy: Start" Nov 24 06:46:22.609964 kubelet[2369]: I1124 06:46:22.609935 2369 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 24 06:46:22.609964 kubelet[2369]: I1124 06:46:22.609950 2369 state_mem.go:35] "Initializing new in-memory state store" Nov 24 06:46:22.616469 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 24 06:46:22.630772 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 24 06:46:22.633762 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 24 06:46:22.655656 kubelet[2369]: E1124 06:46:22.655633 2369 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 24 06:46:22.655855 kubelet[2369]: I1124 06:46:22.655824 2369 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 24 06:46:22.655914 kubelet[2369]: I1124 06:46:22.655840 2369 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 24 06:46:22.656076 kubelet[2369]: I1124 06:46:22.656057 2369 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 24 06:46:22.656795 kubelet[2369]: E1124 06:46:22.656778 2369 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 24 06:46:22.656848 kubelet[2369]: E1124 06:46:22.656817 2369 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 24 06:46:22.687361 systemd[1]: Created slice kubepods-burstable-podfdad7c9b4e38c7df31db7385b9eb45d5.slice - libcontainer container kubepods-burstable-podfdad7c9b4e38c7df31db7385b9eb45d5.slice. Nov 24 06:46:22.707251 kubelet[2369]: E1124 06:46:22.707204 2369 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 24 06:46:22.710356 systemd[1]: Created slice kubepods-burstable-pod1d5832191310254249cf17c2353d71ec.slice - libcontainer container kubepods-burstable-pod1d5832191310254249cf17c2353d71ec.slice. Nov 24 06:46:22.711931 kubelet[2369]: E1124 06:46:22.711908 2369 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 24 06:46:22.713535 systemd[1]: Created slice kubepods-burstable-pode51b49401d7e125d16957469facd7352.slice - libcontainer container kubepods-burstable-pode51b49401d7e125d16957469facd7352.slice. 
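The three kubepods-burstable slices created above correspond to the static pod manifests the kubelet picked up from its manifest path, logged earlier as /etc/kubernetes/manifests; the "failed to get node info" mirror-pod errors keep repeating until the node object exists in the API. Listing that directory is the quickest way to see which static pods a node will run (a trivial sketch; the filenames in the comment are the usual kubeadm ones, not read from this log):

```python
import pathlib

MANIFEST_DIR = pathlib.Path("/etc/kubernetes/manifests")  # path from the kubelet log

for manifest in sorted(MANIFEST_DIR.glob("*.yaml")):
    # Typically kube-apiserver.yaml, kube-controller-manager.yaml,
    # kube-scheduler.yaml (and etcd.yaml on stacked control planes).
    print(manifest.name)
```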
Nov 24 06:46:22.714978 kubelet[2369]: E1124 06:46:22.714964 2369 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 24 06:46:22.757163 kubelet[2369]: I1124 06:46:22.757142 2369 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 24 06:46:22.757487 kubelet[2369]: E1124 06:46:22.757464 2369 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.28:6443/api/v1/nodes\": dial tcp 10.0.0.28:6443: connect: connection refused" node="localhost" Nov 24 06:46:22.766000 kubelet[2369]: I1124 06:46:22.765973 2369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fdad7c9b4e38c7df31db7385b9eb45d5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"fdad7c9b4e38c7df31db7385b9eb45d5\") " pod="kube-system/kube-apiserver-localhost" Nov 24 06:46:22.766000 kubelet[2369]: I1124 06:46:22.765994 2369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fdad7c9b4e38c7df31db7385b9eb45d5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"fdad7c9b4e38c7df31db7385b9eb45d5\") " pod="kube-system/kube-apiserver-localhost" Nov 24 06:46:22.766080 kubelet[2369]: I1124 06:46:22.766009 2369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1d5832191310254249cf17c2353d71ec-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1d5832191310254249cf17c2353d71ec\") " pod="kube-system/kube-controller-manager-localhost" Nov 24 06:46:22.766080 kubelet[2369]: I1124 06:46:22.766022 2369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1d5832191310254249cf17c2353d71ec-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"1d5832191310254249cf17c2353d71ec\") " pod="kube-system/kube-controller-manager-localhost" Nov 24 06:46:22.766080 kubelet[2369]: I1124 06:46:22.766037 2369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1d5832191310254249cf17c2353d71ec-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1d5832191310254249cf17c2353d71ec\") " pod="kube-system/kube-controller-manager-localhost" Nov 24 06:46:22.766080 kubelet[2369]: I1124 06:46:22.766050 2369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1d5832191310254249cf17c2353d71ec-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"1d5832191310254249cf17c2353d71ec\") " pod="kube-system/kube-controller-manager-localhost" Nov 24 06:46:22.766080 kubelet[2369]: I1124 06:46:22.766063 2369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e51b49401d7e125d16957469facd7352-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"e51b49401d7e125d16957469facd7352\") " pod="kube-system/kube-scheduler-localhost" Nov 24 06:46:22.766232 kubelet[2369]: I1124 06:46:22.766100 2369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" 
(UniqueName: \"kubernetes.io/host-path/fdad7c9b4e38c7df31db7385b9eb45d5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"fdad7c9b4e38c7df31db7385b9eb45d5\") " pod="kube-system/kube-apiserver-localhost" Nov 24 06:46:22.766232 kubelet[2369]: I1124 06:46:22.766136 2369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1d5832191310254249cf17c2353d71ec-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"1d5832191310254249cf17c2353d71ec\") " pod="kube-system/kube-controller-manager-localhost" Nov 24 06:46:22.868816 kubelet[2369]: E1124 06:46:22.868751 2369 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.28:6443: connect: connection refused" interval="800ms" Nov 24 06:46:22.932212 kubelet[2369]: E1124 06:46:22.932111 2369 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.28:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.28:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.187ade714c28f4a9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-24 06:46:22.259393705 +0000 UTC m=+1.109068294,LastTimestamp:2025-11-24 06:46:22.259393705 +0000 UTC m=+1.109068294,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 24 06:46:22.959187 kubelet[2369]: I1124 06:46:22.959145 2369 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 24 06:46:22.959383 kubelet[2369]: E1124 06:46:22.959353 2369 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.28:6443/api/v1/nodes\": dial tcp 10.0.0.28:6443: connect: connection refused" node="localhost" Nov 24 06:46:23.008666 containerd[1580]: time="2025-11-24T06:46:23.008573196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:fdad7c9b4e38c7df31db7385b9eb45d5,Namespace:kube-system,Attempt:0,}" Nov 24 06:46:23.013085 containerd[1580]: time="2025-11-24T06:46:23.013054198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:1d5832191310254249cf17c2353d71ec,Namespace:kube-system,Attempt:0,}" Nov 24 06:46:23.015457 containerd[1580]: time="2025-11-24T06:46:23.015433009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:e51b49401d7e125d16957469facd7352,Namespace:kube-system,Attempt:0,}" Nov 24 06:46:23.041486 containerd[1580]: time="2025-11-24T06:46:23.041425535Z" level=info msg="connecting to shim a054b5a0c9162cee14cbf26f90f1ad04a5ef232bb91d3dbb88a07fdc20cb88e4" address="unix:///run/containerd/s/e0588b7ea5a7d7f2cc59c13310f37118566fa03428217f62b571d500dbf0bfad" namespace=k8s.io protocol=ttrpc version=3 Nov 24 06:46:23.048418 containerd[1580]: time="2025-11-24T06:46:23.048350289Z" level=info msg="connecting to shim 8b513d38d3269d80d92fc8285963be9f6d01a329bd8c699035b82652c16bfbfd" 
address="unix:///run/containerd/s/86105be808a6e2e6840106c82ae18fd6739fc1bbd75701e49b0a64b7e71f7699" namespace=k8s.io protocol=ttrpc version=3 Nov 24 06:46:23.057052 containerd[1580]: time="2025-11-24T06:46:23.057001319Z" level=info msg="connecting to shim 8b3e5229a5955f5085f9151b6d0a5b5b786dfa8bbfe82bf82f2fdd75854da9df" address="unix:///run/containerd/s/97eed28ef4c2676a9969975d1fac2bc0706e56e0f4a0fbee5eaca19a5cca1747" namespace=k8s.io protocol=ttrpc version=3 Nov 24 06:46:23.075080 systemd[1]: Started cri-containerd-a054b5a0c9162cee14cbf26f90f1ad04a5ef232bb91d3dbb88a07fdc20cb88e4.scope - libcontainer container a054b5a0c9162cee14cbf26f90f1ad04a5ef232bb91d3dbb88a07fdc20cb88e4. Nov 24 06:46:23.081706 systemd[1]: Started cri-containerd-8b513d38d3269d80d92fc8285963be9f6d01a329bd8c699035b82652c16bfbfd.scope - libcontainer container 8b513d38d3269d80d92fc8285963be9f6d01a329bd8c699035b82652c16bfbfd. Nov 24 06:46:23.086086 systemd[1]: Started cri-containerd-8b3e5229a5955f5085f9151b6d0a5b5b786dfa8bbfe82bf82f2fdd75854da9df.scope - libcontainer container 8b3e5229a5955f5085f9151b6d0a5b5b786dfa8bbfe82bf82f2fdd75854da9df. Nov 24 06:46:23.135694 containerd[1580]: time="2025-11-24T06:46:23.135623496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:fdad7c9b4e38c7df31db7385b9eb45d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"a054b5a0c9162cee14cbf26f90f1ad04a5ef232bb91d3dbb88a07fdc20cb88e4\"" Nov 24 06:46:23.142801 containerd[1580]: time="2025-11-24T06:46:23.142734098Z" level=info msg="CreateContainer within sandbox \"a054b5a0c9162cee14cbf26f90f1ad04a5ef232bb91d3dbb88a07fdc20cb88e4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 24 06:46:23.144225 containerd[1580]: time="2025-11-24T06:46:23.144170602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:1d5832191310254249cf17c2353d71ec,Namespace:kube-system,Attempt:0,} returns sandbox id \"8b513d38d3269d80d92fc8285963be9f6d01a329bd8c699035b82652c16bfbfd\"" Nov 24 06:46:23.149407 containerd[1580]: time="2025-11-24T06:46:23.149382033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:e51b49401d7e125d16957469facd7352,Namespace:kube-system,Attempt:0,} returns sandbox id \"8b3e5229a5955f5085f9151b6d0a5b5b786dfa8bbfe82bf82f2fdd75854da9df\"" Nov 24 06:46:23.150623 containerd[1580]: time="2025-11-24T06:46:23.150585650Z" level=info msg="CreateContainer within sandbox \"8b513d38d3269d80d92fc8285963be9f6d01a329bd8c699035b82652c16bfbfd\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 24 06:46:23.154487 containerd[1580]: time="2025-11-24T06:46:23.154460826Z" level=info msg="CreateContainer within sandbox \"8b3e5229a5955f5085f9151b6d0a5b5b786dfa8bbfe82bf82f2fdd75854da9df\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 24 06:46:23.158168 containerd[1580]: time="2025-11-24T06:46:23.158143811Z" level=info msg="Container 652885cb621d648122608f97d68652a0543351bd5465c3b1140d1c4b69668903: CDI devices from CRI Config.CDIDevices: []" Nov 24 06:46:23.165635 containerd[1580]: time="2025-11-24T06:46:23.165598319Z" level=info msg="CreateContainer within sandbox \"a054b5a0c9162cee14cbf26f90f1ad04a5ef232bb91d3dbb88a07fdc20cb88e4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"652885cb621d648122608f97d68652a0543351bd5465c3b1140d1c4b69668903\"" Nov 24 06:46:23.166039 containerd[1580]: time="2025-11-24T06:46:23.166006794Z" level=info msg="Container 
0ed04518985940e3d9cc5be6a690168c028d874b27c81519e306e660c13fddfd: CDI devices from CRI Config.CDIDevices: []" Nov 24 06:46:23.166465 containerd[1580]: time="2025-11-24T06:46:23.166442511Z" level=info msg="StartContainer for \"652885cb621d648122608f97d68652a0543351bd5465c3b1140d1c4b69668903\"" Nov 24 06:46:23.168902 containerd[1580]: time="2025-11-24T06:46:23.168594326Z" level=info msg="connecting to shim 652885cb621d648122608f97d68652a0543351bd5465c3b1140d1c4b69668903" address="unix:///run/containerd/s/e0588b7ea5a7d7f2cc59c13310f37118566fa03428217f62b571d500dbf0bfad" protocol=ttrpc version=3 Nov 24 06:46:23.175557 containerd[1580]: time="2025-11-24T06:46:23.175506056Z" level=info msg="Container 79d30634e1cbfa752b07720f0b9584f33665a07fbd0b8c983b6ee0813d0cbb9a: CDI devices from CRI Config.CDIDevices: []" Nov 24 06:46:23.178389 containerd[1580]: time="2025-11-24T06:46:23.178349677Z" level=info msg="CreateContainer within sandbox \"8b513d38d3269d80d92fc8285963be9f6d01a329bd8c699035b82652c16bfbfd\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0ed04518985940e3d9cc5be6a690168c028d874b27c81519e306e660c13fddfd\"" Nov 24 06:46:23.180085 containerd[1580]: time="2025-11-24T06:46:23.180013076Z" level=info msg="StartContainer for \"0ed04518985940e3d9cc5be6a690168c028d874b27c81519e306e660c13fddfd\"" Nov 24 06:46:23.181098 containerd[1580]: time="2025-11-24T06:46:23.181075408Z" level=info msg="connecting to shim 0ed04518985940e3d9cc5be6a690168c028d874b27c81519e306e660c13fddfd" address="unix:///run/containerd/s/86105be808a6e2e6840106c82ae18fd6739fc1bbd75701e49b0a64b7e71f7699" protocol=ttrpc version=3 Nov 24 06:46:23.182949 containerd[1580]: time="2025-11-24T06:46:23.182927841Z" level=info msg="CreateContainer within sandbox \"8b3e5229a5955f5085f9151b6d0a5b5b786dfa8bbfe82bf82f2fdd75854da9df\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"79d30634e1cbfa752b07720f0b9584f33665a07fbd0b8c983b6ee0813d0cbb9a\"" Nov 24 06:46:23.183338 containerd[1580]: time="2025-11-24T06:46:23.183320618Z" level=info msg="StartContainer for \"79d30634e1cbfa752b07720f0b9584f33665a07fbd0b8c983b6ee0813d0cbb9a\"" Nov 24 06:46:23.184444 containerd[1580]: time="2025-11-24T06:46:23.184397357Z" level=info msg="connecting to shim 79d30634e1cbfa752b07720f0b9584f33665a07fbd0b8c983b6ee0813d0cbb9a" address="unix:///run/containerd/s/97eed28ef4c2676a9969975d1fac2bc0706e56e0f4a0fbee5eaca19a5cca1747" protocol=ttrpc version=3 Nov 24 06:46:23.190310 systemd[1]: Started cri-containerd-652885cb621d648122608f97d68652a0543351bd5465c3b1140d1c4b69668903.scope - libcontainer container 652885cb621d648122608f97d68652a0543351bd5465c3b1140d1c4b69668903. Nov 24 06:46:23.207010 systemd[1]: Started cri-containerd-0ed04518985940e3d9cc5be6a690168c028d874b27c81519e306e660c13fddfd.scope - libcontainer container 0ed04518985940e3d9cc5be6a690168c028d874b27c81519e306e660c13fddfd. Nov 24 06:46:23.211648 systemd[1]: Started cri-containerd-79d30634e1cbfa752b07720f0b9584f33665a07fbd0b8c983b6ee0813d0cbb9a.scope - libcontainer container 79d30634e1cbfa752b07720f0b9584f33665a07fbd0b8c983b6ee0813d0cbb9a. 
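Every "connecting to shim" line above carries a task or sandbox id plus the shim's ttrpc socket, and containers reuse the socket of the sandbox they run in: the kube-apiserver container 652885cb... attaches to the same /run/containerd/s/e0588b7e... address as its sandbox a054b5a0.... A small parsing sketch that makes the grouping explicit (the two sample lines are shortened copies of the messages above):

```python
import re
from collections import defaultdict

SAMPLE_LINES = [
    'msg="connecting to shim a054b5a0c9162cee14cbf26f90f1ad04a5ef232bb91d3dbb88a07fdc20cb88e4" '
    'address="unix:///run/containerd/s/e0588b7ea5a7d7f2cc59c13310f37118566fa03428217f62b571d500dbf0bfad"',
    'msg="connecting to shim 652885cb621d648122608f97d68652a0543351bd5465c3b1140d1c4b69668903" '
    'address="unix:///run/containerd/s/e0588b7ea5a7d7f2cc59c13310f37118566fa03428217f62b571d500dbf0bfad"',
]
PATTERN = re.compile(r'connecting to shim (\w+)" address="([^"]+)"')

by_socket = defaultdict(list)
for line in SAMPLE_LINES:
    shim_id, addr = PATTERN.search(line).groups()
    by_socket[addr].append(shim_id[:12])

for addr, ids in by_socket.items():
    print(addr.rsplit("/", 1)[-1][:12], "->", ids)   # one shim socket: sandbox + its container
```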
Nov 24 06:46:23.230582 kubelet[2369]: E1124 06:46:23.230534 2369 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.28:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 24 06:46:23.262367 containerd[1580]: time="2025-11-24T06:46:23.262244690Z" level=info msg="StartContainer for \"652885cb621d648122608f97d68652a0543351bd5465c3b1140d1c4b69668903\" returns successfully" Nov 24 06:46:23.276079 containerd[1580]: time="2025-11-24T06:46:23.275652490Z" level=info msg="StartContainer for \"79d30634e1cbfa752b07720f0b9584f33665a07fbd0b8c983b6ee0813d0cbb9a\" returns successfully" Nov 24 06:46:23.283605 containerd[1580]: time="2025-11-24T06:46:23.283410446Z" level=info msg="StartContainer for \"0ed04518985940e3d9cc5be6a690168c028d874b27c81519e306e660c13fddfd\" returns successfully" Nov 24 06:46:23.361288 kubelet[2369]: I1124 06:46:23.361247 2369 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 24 06:46:23.585584 kubelet[2369]: E1124 06:46:23.585471 2369 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 24 06:46:23.586740 kubelet[2369]: E1124 06:46:23.586720 2369 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 24 06:46:23.589681 kubelet[2369]: E1124 06:46:23.589661 2369 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 24 06:46:24.388761 kubelet[2369]: E1124 06:46:24.388701 2369 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Nov 24 06:46:24.482042 kubelet[2369]: I1124 06:46:24.482005 2369 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 24 06:46:24.566354 kubelet[2369]: I1124 06:46:24.566311 2369 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 24 06:46:24.571328 kubelet[2369]: E1124 06:46:24.571286 2369 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Nov 24 06:46:24.571328 kubelet[2369]: I1124 06:46:24.571304 2369 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 24 06:46:24.572480 kubelet[2369]: E1124 06:46:24.572460 2369 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 24 06:46:24.572480 kubelet[2369]: I1124 06:46:24.572475 2369 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 24 06:46:24.574024 kubelet[2369]: E1124 06:46:24.574003 2369 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 24 06:46:24.590506 kubelet[2369]: I1124 06:46:24.590471 2369 kubelet.go:3309] 
"Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 24 06:46:24.590742 kubelet[2369]: I1124 06:46:24.590717 2369 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 24 06:46:24.591769 kubelet[2369]: E1124 06:46:24.591739 2369 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 24 06:46:24.592276 kubelet[2369]: E1124 06:46:24.592257 2369 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 24 06:46:25.256363 kubelet[2369]: I1124 06:46:25.256310 2369 apiserver.go:52] "Watching apiserver" Nov 24 06:46:25.263722 kubelet[2369]: I1124 06:46:25.263683 2369 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 24 06:46:26.590424 systemd[1]: Reload requested from client PID 2657 ('systemctl') (unit session-7.scope)... Nov 24 06:46:26.590441 systemd[1]: Reloading... Nov 24 06:46:26.680917 zram_generator::config[2703]: No configuration found. Nov 24 06:46:26.902434 systemd[1]: Reloading finished in 311 ms. Nov 24 06:46:26.935385 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 06:46:26.955303 systemd[1]: kubelet.service: Deactivated successfully. Nov 24 06:46:26.955564 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 06:46:26.955603 systemd[1]: kubelet.service: Consumed 1.469s CPU time, 132.6M memory peak. Nov 24 06:46:26.957936 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 06:46:27.169458 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 06:46:27.183245 (kubelet)[2745]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 24 06:46:27.223557 kubelet[2745]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 24 06:46:27.223557 kubelet[2745]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 24 06:46:27.223557 kubelet[2745]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 24 06:46:27.223974 kubelet[2745]: I1124 06:46:27.223600 2745 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 24 06:46:27.229462 kubelet[2745]: I1124 06:46:27.229425 2745 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 24 06:46:27.229462 kubelet[2745]: I1124 06:46:27.229452 2745 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 24 06:46:27.229689 kubelet[2745]: I1124 06:46:27.229659 2745 server.go:956] "Client rotation is on, will bootstrap in background" Nov 24 06:46:27.230736 kubelet[2745]: I1124 06:46:27.230712 2745 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 24 06:46:27.233058 kubelet[2745]: I1124 06:46:27.233014 2745 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 24 06:46:27.237998 kubelet[2745]: I1124 06:46:27.237980 2745 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 24 06:46:27.242509 kubelet[2745]: I1124 06:46:27.242474 2745 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 24 06:46:27.242722 kubelet[2745]: I1124 06:46:27.242691 2745 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 24 06:46:27.242862 kubelet[2745]: I1124 06:46:27.242720 2745 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 24 06:46:27.242862 kubelet[2745]: I1124 06:46:27.242862 2745 topology_manager.go:138] "Creating topology manager with none policy" Nov 24 06:46:27.242972 kubelet[2745]: I1124 06:46:27.242871 2745 container_manager_linux.go:303] "Creating device plugin manager" Nov 24 06:46:27.242972 kubelet[2745]: I1124 06:46:27.242929 2745 state_mem.go:36] "Initialized new in-memory state store" Nov 24 06:46:27.243082 kubelet[2745]: I1124 
06:46:27.243069 2745 kubelet.go:480] "Attempting to sync node with API server" Nov 24 06:46:27.243104 kubelet[2745]: I1124 06:46:27.243082 2745 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 24 06:46:27.243104 kubelet[2745]: I1124 06:46:27.243103 2745 kubelet.go:386] "Adding apiserver pod source" Nov 24 06:46:27.243143 kubelet[2745]: I1124 06:46:27.243113 2745 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 24 06:46:27.244133 kubelet[2745]: I1124 06:46:27.244106 2745 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Nov 24 06:46:27.244652 kubelet[2745]: I1124 06:46:27.244624 2745 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 24 06:46:27.247293 kubelet[2745]: I1124 06:46:27.247269 2745 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 24 06:46:27.247340 kubelet[2745]: I1124 06:46:27.247318 2745 server.go:1289] "Started kubelet" Nov 24 06:46:27.251051 kubelet[2745]: I1124 06:46:27.248926 2745 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 24 06:46:27.251051 kubelet[2745]: I1124 06:46:27.249259 2745 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 24 06:46:27.251051 kubelet[2745]: I1124 06:46:27.249347 2745 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 24 06:46:27.251051 kubelet[2745]: I1124 06:46:27.249452 2745 reconciler.go:26] "Reconciler: start to sync state" Nov 24 06:46:27.251051 kubelet[2745]: E1124 06:46:27.249732 2745 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 24 06:46:27.251051 kubelet[2745]: I1124 06:46:27.249780 2745 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 24 06:46:27.253156 kubelet[2745]: I1124 06:46:27.253127 2745 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 24 06:46:27.253201 kubelet[2745]: I1124 06:46:27.253179 2745 server.go:317] "Adding debug handlers to kubelet server" Nov 24 06:46:27.254415 kubelet[2745]: I1124 06:46:27.254366 2745 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 24 06:46:27.254790 kubelet[2745]: I1124 06:46:27.254771 2745 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 24 06:46:27.259637 kubelet[2745]: I1124 06:46:27.259607 2745 factory.go:223] Registration of the containerd container factory successfully Nov 24 06:46:27.259637 kubelet[2745]: I1124 06:46:27.259631 2745 factory.go:223] Registration of the systemd container factory successfully Nov 24 06:46:27.259752 kubelet[2745]: I1124 06:46:27.259725 2745 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 24 06:46:27.268455 kubelet[2745]: E1124 06:46:27.268414 2745 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 24 06:46:27.278018 kubelet[2745]: I1124 06:46:27.277603 2745 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Nov 24 06:46:27.280542 kubelet[2745]: I1124 06:46:27.280508 2745 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 24 06:46:27.280542 kubelet[2745]: I1124 06:46:27.280541 2745 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 24 06:46:27.280756 kubelet[2745]: I1124 06:46:27.280737 2745 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 24 06:46:27.280756 kubelet[2745]: I1124 06:46:27.280750 2745 kubelet.go:2436] "Starting kubelet main sync loop" Nov 24 06:46:27.280815 kubelet[2745]: E1124 06:46:27.280794 2745 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 24 06:46:27.298573 kubelet[2745]: I1124 06:46:27.298552 2745 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 24 06:46:27.298573 kubelet[2745]: I1124 06:46:27.298567 2745 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 24 06:46:27.298642 kubelet[2745]: I1124 06:46:27.298584 2745 state_mem.go:36] "Initialized new in-memory state store" Nov 24 06:46:27.298716 kubelet[2745]: I1124 06:46:27.298701 2745 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 24 06:46:27.298738 kubelet[2745]: I1124 06:46:27.298715 2745 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 24 06:46:27.298738 kubelet[2745]: I1124 06:46:27.298736 2745 policy_none.go:49] "None policy: Start" Nov 24 06:46:27.298785 kubelet[2745]: I1124 06:46:27.298745 2745 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 24 06:46:27.298785 kubelet[2745]: I1124 06:46:27.298755 2745 state_mem.go:35] "Initializing new in-memory state store" Nov 24 06:46:27.298859 kubelet[2745]: I1124 06:46:27.298845 2745 state_mem.go:75] "Updated machine memory state" Nov 24 06:46:27.302686 kubelet[2745]: E1124 06:46:27.302645 2745 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 24 06:46:27.302839 kubelet[2745]: I1124 06:46:27.302822 2745 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 24 06:46:27.303145 kubelet[2745]: I1124 06:46:27.302834 2745 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 24 06:46:27.303145 kubelet[2745]: I1124 06:46:27.303045 2745 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 24 06:46:27.304665 kubelet[2745]: E1124 06:46:27.304636 2745 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 24 06:46:27.381752 kubelet[2745]: I1124 06:46:27.381715 2745 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 24 06:46:27.381990 kubelet[2745]: I1124 06:46:27.381867 2745 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 24 06:46:27.381990 kubelet[2745]: I1124 06:46:27.381956 2745 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 24 06:46:27.404684 kubelet[2745]: I1124 06:46:27.404654 2745 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 24 06:46:27.409448 kubelet[2745]: I1124 06:46:27.409416 2745 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Nov 24 06:46:27.409613 kubelet[2745]: I1124 06:46:27.409490 2745 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 24 06:46:27.550132 kubelet[2745]: I1124 06:46:27.550075 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fdad7c9b4e38c7df31db7385b9eb45d5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"fdad7c9b4e38c7df31db7385b9eb45d5\") " pod="kube-system/kube-apiserver-localhost" Nov 24 06:46:27.550265 kubelet[2745]: I1124 06:46:27.550198 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1d5832191310254249cf17c2353d71ec-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1d5832191310254249cf17c2353d71ec\") " pod="kube-system/kube-controller-manager-localhost" Nov 24 06:46:27.550265 kubelet[2745]: I1124 06:46:27.550223 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1d5832191310254249cf17c2353d71ec-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"1d5832191310254249cf17c2353d71ec\") " pod="kube-system/kube-controller-manager-localhost" Nov 24 06:46:27.550265 kubelet[2745]: I1124 06:46:27.550244 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1d5832191310254249cf17c2353d71ec-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"1d5832191310254249cf17c2353d71ec\") " pod="kube-system/kube-controller-manager-localhost" Nov 24 06:46:27.550265 kubelet[2745]: I1124 06:46:27.550261 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e51b49401d7e125d16957469facd7352-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"e51b49401d7e125d16957469facd7352\") " pod="kube-system/kube-scheduler-localhost" Nov 24 06:46:27.550364 kubelet[2745]: I1124 06:46:27.550275 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fdad7c9b4e38c7df31db7385b9eb45d5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"fdad7c9b4e38c7df31db7385b9eb45d5\") " pod="kube-system/kube-apiserver-localhost" Nov 24 06:46:27.550364 kubelet[2745]: I1124 06:46:27.550293 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/fdad7c9b4e38c7df31db7385b9eb45d5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"fdad7c9b4e38c7df31db7385b9eb45d5\") " pod="kube-system/kube-apiserver-localhost" Nov 24 06:46:27.550364 kubelet[2745]: I1124 06:46:27.550311 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1d5832191310254249cf17c2353d71ec-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1d5832191310254249cf17c2353d71ec\") " pod="kube-system/kube-controller-manager-localhost" Nov 24 06:46:27.550364 kubelet[2745]: I1124 06:46:27.550328 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1d5832191310254249cf17c2353d71ec-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"1d5832191310254249cf17c2353d71ec\") " pod="kube-system/kube-controller-manager-localhost" Nov 24 06:46:28.244295 kubelet[2745]: I1124 06:46:28.244263 2745 apiserver.go:52] "Watching apiserver" Nov 24 06:46:28.291141 kubelet[2745]: I1124 06:46:28.290916 2745 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 24 06:46:28.295915 kubelet[2745]: E1124 06:46:28.295861 2745 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 24 06:46:28.317393 kubelet[2745]: I1124 06:46:28.317316 2745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.317284629 podStartE2EDuration="1.317284629s" podCreationTimestamp="2025-11-24 06:46:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 06:46:28.314788128 +0000 UTC m=+1.127426926" watchObservedRunningTime="2025-11-24 06:46:28.317284629 +0000 UTC m=+1.129923427" Nov 24 06:46:28.348620 kubelet[2745]: I1124 06:46:28.348563 2745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.348546184 podStartE2EDuration="1.348546184s" podCreationTimestamp="2025-11-24 06:46:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 06:46:28.333085285 +0000 UTC m=+1.145724083" watchObservedRunningTime="2025-11-24 06:46:28.348546184 +0000 UTC m=+1.161184982" Nov 24 06:46:28.348783 kubelet[2745]: I1124 06:46:28.348695 2745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.348692118 podStartE2EDuration="1.348692118s" podCreationTimestamp="2025-11-24 06:46:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 06:46:28.34839454 +0000 UTC m=+1.161033338" watchObservedRunningTime="2025-11-24 06:46:28.348692118 +0000 UTC m=+1.161330916" Nov 24 06:46:28.350003 kubelet[2745]: I1124 06:46:28.349974 2745 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 24 06:46:31.398659 kubelet[2745]: I1124 06:46:31.398627 2745 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 24 06:46:31.399241 
containerd[1580]: time="2025-11-24T06:46:31.399207168Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 24 06:46:31.399506 kubelet[2745]: I1124 06:46:31.399394 2745 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 24 06:46:32.098404 systemd[1]: Created slice kubepods-besteffort-pod5f341e21_f1cf_4c65_8309_26cb0289d771.slice - libcontainer container kubepods-besteffort-pod5f341e21_f1cf_4c65_8309_26cb0289d771.slice. Nov 24 06:46:32.183973 kubelet[2745]: I1124 06:46:32.183907 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5f341e21-f1cf-4c65-8309-26cb0289d771-kube-proxy\") pod \"kube-proxy-v4nfv\" (UID: \"5f341e21-f1cf-4c65-8309-26cb0289d771\") " pod="kube-system/kube-proxy-v4nfv" Nov 24 06:46:32.183973 kubelet[2745]: I1124 06:46:32.183955 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5f341e21-f1cf-4c65-8309-26cb0289d771-xtables-lock\") pod \"kube-proxy-v4nfv\" (UID: \"5f341e21-f1cf-4c65-8309-26cb0289d771\") " pod="kube-system/kube-proxy-v4nfv" Nov 24 06:46:32.183973 kubelet[2745]: I1124 06:46:32.183970 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5f341e21-f1cf-4c65-8309-26cb0289d771-lib-modules\") pod \"kube-proxy-v4nfv\" (UID: \"5f341e21-f1cf-4c65-8309-26cb0289d771\") " pod="kube-system/kube-proxy-v4nfv" Nov 24 06:46:32.184165 kubelet[2745]: I1124 06:46:32.183987 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85k64\" (UniqueName: \"kubernetes.io/projected/5f341e21-f1cf-4c65-8309-26cb0289d771-kube-api-access-85k64\") pod \"kube-proxy-v4nfv\" (UID: \"5f341e21-f1cf-4c65-8309-26cb0289d771\") " pod="kube-system/kube-proxy-v4nfv" Nov 24 06:46:32.415638 containerd[1580]: time="2025-11-24T06:46:32.415526702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-v4nfv,Uid:5f341e21-f1cf-4c65-8309-26cb0289d771,Namespace:kube-system,Attempt:0,}" Nov 24 06:46:32.434608 containerd[1580]: time="2025-11-24T06:46:32.434565759Z" level=info msg="connecting to shim 82d72963f9b09cee2981a349e83a111c30ee5c0bbf48c22c0143c2e638b4616e" address="unix:///run/containerd/s/f320372cb7c490f422743963aa8929a07e2a4256bfd7ac7e4f783cf04c3d2a06" namespace=k8s.io protocol=ttrpc version=3 Nov 24 06:46:32.464122 systemd[1]: Started cri-containerd-82d72963f9b09cee2981a349e83a111c30ee5c0bbf48c22c0143c2e638b4616e.scope - libcontainer container 82d72963f9b09cee2981a349e83a111c30ee5c0bbf48c22c0143c2e638b4616e. 
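The "Updating runtime config through cri with podcidr" message above is the kubelet reacting to the controller-manager allocating 192.168.0.0/24 to this node; until then the pod CIDR was empty and there was no range for pod addresses. A quick stdlib look at what that allocation provides (purely illustrative, no cluster access involved):

```python
import ipaddress

pod_cidr = ipaddress.ip_network("192.168.0.0/24")   # value from the kubelet log
print("addresses in range:", pod_cidr.num_addresses)
print("usable host addresses:", len(list(pod_cidr.hosts())))
print("first few pod IPs:", [str(ip) for ip in list(pod_cidr.hosts())[:3]])
```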
Nov 24 06:46:32.510096 containerd[1580]: time="2025-11-24T06:46:32.510054924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-v4nfv,Uid:5f341e21-f1cf-4c65-8309-26cb0289d771,Namespace:kube-system,Attempt:0,} returns sandbox id \"82d72963f9b09cee2981a349e83a111c30ee5c0bbf48c22c0143c2e638b4616e\"" Nov 24 06:46:32.516594 containerd[1580]: time="2025-11-24T06:46:32.516562733Z" level=info msg="CreateContainer within sandbox \"82d72963f9b09cee2981a349e83a111c30ee5c0bbf48c22c0143c2e638b4616e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 24 06:46:32.528899 containerd[1580]: time="2025-11-24T06:46:32.528812541Z" level=info msg="Container 3ebac632ee2a28ca1e79149550e9e46da62872d3ba7d6fa3ab72caf2dce7fc4f: CDI devices from CRI Config.CDIDevices: []" Nov 24 06:46:32.535253 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2733237326.mount: Deactivated successfully. Nov 24 06:46:32.543730 containerd[1580]: time="2025-11-24T06:46:32.543547047Z" level=info msg="CreateContainer within sandbox \"82d72963f9b09cee2981a349e83a111c30ee5c0bbf48c22c0143c2e638b4616e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3ebac632ee2a28ca1e79149550e9e46da62872d3ba7d6fa3ab72caf2dce7fc4f\"" Nov 24 06:46:32.544525 containerd[1580]: time="2025-11-24T06:46:32.544482903Z" level=info msg="StartContainer for \"3ebac632ee2a28ca1e79149550e9e46da62872d3ba7d6fa3ab72caf2dce7fc4f\"" Nov 24 06:46:32.546455 containerd[1580]: time="2025-11-24T06:46:32.546289971Z" level=info msg="connecting to shim 3ebac632ee2a28ca1e79149550e9e46da62872d3ba7d6fa3ab72caf2dce7fc4f" address="unix:///run/containerd/s/f320372cb7c490f422743963aa8929a07e2a4256bfd7ac7e4f783cf04c3d2a06" protocol=ttrpc version=3 Nov 24 06:46:32.573352 systemd[1]: Started cri-containerd-3ebac632ee2a28ca1e79149550e9e46da62872d3ba7d6fa3ab72caf2dce7fc4f.scope - libcontainer container 3ebac632ee2a28ca1e79149550e9e46da62872d3ba7d6fa3ab72caf2dce7fc4f. Nov 24 06:46:32.582388 systemd[1]: Created slice kubepods-besteffort-pod0b616521_eeb1_4e13_88e4_f3a810bd5641.slice - libcontainer container kubepods-besteffort-pod0b616521_eeb1_4e13_88e4_f3a810bd5641.slice. 
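systemd slice names like the ones created above encode each pod's QoS class and UID: dashes in the UID become underscores, while static pods, whose UIDs here are 32-hex-digit hashes with no dashes, come through unchanged. A helper for reading them back out of a journal like this one (illustrative only; the naming follows the cgroupDriver=systemd layout the kubelet reported earlier):

```python
import re

SLICE_RE = re.compile(r"kubepods-(?:(burstable|besteffort)-)?pod(?P<uid>[A-Za-z0-9_]+)\.slice")

def parse_pod_slice(name: str) -> tuple[str, str]:
    """Return (qos_class, pod_uid) for a kubepods slice name."""
    m = SLICE_RE.search(name)
    if m is None:
        raise ValueError(f"not a kubepods slice: {name}")
    qos = m.group(1) or "guaranteed"          # no middle component => Guaranteed QoS
    return qos, m.group("uid").replace("_", "-")

print(parse_pod_slice("kubepods-besteffort-pod5f341e21_f1cf_4c65_8309_26cb0289d771.slice"))
# ('besteffort', '5f341e21-f1cf-4c65-8309-26cb0289d771')
print(parse_pod_slice("kubepods-burstable-podfdad7c9b4e38c7df31db7385b9eb45d5.slice"))
# ('burstable', 'fdad7c9b4e38c7df31db7385b9eb45d5')
```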
Nov 24 06:46:32.586996 kubelet[2745]: I1124 06:46:32.586971 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjbgk\" (UniqueName: \"kubernetes.io/projected/0b616521-eeb1-4e13-88e4-f3a810bd5641-kube-api-access-qjbgk\") pod \"tigera-operator-7dcd859c48-hmqgp\" (UID: \"0b616521-eeb1-4e13-88e4-f3a810bd5641\") " pod="tigera-operator/tigera-operator-7dcd859c48-hmqgp" Nov 24 06:46:32.587353 kubelet[2745]: I1124 06:46:32.587000 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0b616521-eeb1-4e13-88e4-f3a810bd5641-var-lib-calico\") pod \"tigera-operator-7dcd859c48-hmqgp\" (UID: \"0b616521-eeb1-4e13-88e4-f3a810bd5641\") " pod="tigera-operator/tigera-operator-7dcd859c48-hmqgp" Nov 24 06:46:32.656179 containerd[1580]: time="2025-11-24T06:46:32.656138573Z" level=info msg="StartContainer for \"3ebac632ee2a28ca1e79149550e9e46da62872d3ba7d6fa3ab72caf2dce7fc4f\" returns successfully" Nov 24 06:46:32.885718 containerd[1580]: time="2025-11-24T06:46:32.885380477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-hmqgp,Uid:0b616521-eeb1-4e13-88e4-f3a810bd5641,Namespace:tigera-operator,Attempt:0,}" Nov 24 06:46:32.920917 containerd[1580]: time="2025-11-24T06:46:32.920854011Z" level=info msg="connecting to shim af6d77e9bfe1fc77cced06e377e70af9bd56112c95f7e24aff135ddb93862473" address="unix:///run/containerd/s/2d5c670e64fc59bb02a89021009f056ccddb51ae4f606dbaacde778fa721b478" namespace=k8s.io protocol=ttrpc version=3 Nov 24 06:46:32.955019 systemd[1]: Started cri-containerd-af6d77e9bfe1fc77cced06e377e70af9bd56112c95f7e24aff135ddb93862473.scope - libcontainer container af6d77e9bfe1fc77cced06e377e70af9bd56112c95f7e24aff135ddb93862473. Nov 24 06:46:33.006739 containerd[1580]: time="2025-11-24T06:46:33.006686900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-hmqgp,Uid:0b616521-eeb1-4e13-88e4-f3a810bd5641,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"af6d77e9bfe1fc77cced06e377e70af9bd56112c95f7e24aff135ddb93862473\"" Nov 24 06:46:33.010118 containerd[1580]: time="2025-11-24T06:46:33.010069102Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 24 06:46:34.096759 kubelet[2745]: I1124 06:46:34.096676 2745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-v4nfv" podStartSLOduration=2.0966617 podStartE2EDuration="2.0966617s" podCreationTimestamp="2025-11-24 06:46:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 06:46:33.312232221 +0000 UTC m=+6.124871039" watchObservedRunningTime="2025-11-24 06:46:34.0966617 +0000 UTC m=+6.909300498" Nov 24 06:46:34.129433 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4279882035.mount: Deactivated successfully. 
Nov 24 06:46:34.453175 containerd[1580]: time="2025-11-24T06:46:34.453055189Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:46:34.453919 containerd[1580]: time="2025-11-24T06:46:34.453892732Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Nov 24 06:46:34.455136 containerd[1580]: time="2025-11-24T06:46:34.455075566Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:46:34.457181 containerd[1580]: time="2025-11-24T06:46:34.457147351Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:46:34.457730 containerd[1580]: time="2025-11-24T06:46:34.457689539Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 1.447591591s" Nov 24 06:46:34.457730 containerd[1580]: time="2025-11-24T06:46:34.457718814Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 24 06:46:34.463140 containerd[1580]: time="2025-11-24T06:46:34.463105273Z" level=info msg="CreateContainer within sandbox \"af6d77e9bfe1fc77cced06e377e70af9bd56112c95f7e24aff135ddb93862473\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 24 06:46:34.471707 containerd[1580]: time="2025-11-24T06:46:34.471682777Z" level=info msg="Container 11f995e9dd298710bf554f4af41639d33908fd9268d6b62202d6b743cbc54f58: CDI devices from CRI Config.CDIDevices: []" Nov 24 06:46:34.478916 containerd[1580]: time="2025-11-24T06:46:34.478892695Z" level=info msg="CreateContainer within sandbox \"af6d77e9bfe1fc77cced06e377e70af9bd56112c95f7e24aff135ddb93862473\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"11f995e9dd298710bf554f4af41639d33908fd9268d6b62202d6b743cbc54f58\"" Nov 24 06:46:34.479761 containerd[1580]: time="2025-11-24T06:46:34.479256271Z" level=info msg="StartContainer for \"11f995e9dd298710bf554f4af41639d33908fd9268d6b62202d6b743cbc54f58\"" Nov 24 06:46:34.480413 containerd[1580]: time="2025-11-24T06:46:34.480380572Z" level=info msg="connecting to shim 11f995e9dd298710bf554f4af41639d33908fd9268d6b62202d6b743cbc54f58" address="unix:///run/containerd/s/2d5c670e64fc59bb02a89021009f056ccddb51ae4f606dbaacde778fa721b478" protocol=ttrpc version=3 Nov 24 06:46:34.509019 systemd[1]: Started cri-containerd-11f995e9dd298710bf554f4af41639d33908fd9268d6b62202d6b743cbc54f58.scope - libcontainer container 11f995e9dd298710bf554f4af41639d33908fd9268d6b62202d6b743cbc54f58. 
Nov 24 06:46:34.535663 containerd[1580]: time="2025-11-24T06:46:34.535624903Z" level=info msg="StartContainer for \"11f995e9dd298710bf554f4af41639d33908fd9268d6b62202d6b743cbc54f58\" returns successfully" Nov 24 06:46:35.313866 kubelet[2745]: I1124 06:46:35.313622 2745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-hmqgp" podStartSLOduration=1.8633316359999998 podStartE2EDuration="3.313606729s" podCreationTimestamp="2025-11-24 06:46:32 +0000 UTC" firstStartedPulling="2025-11-24 06:46:33.008091842 +0000 UTC m=+5.820730640" lastFinishedPulling="2025-11-24 06:46:34.458366945 +0000 UTC m=+7.271005733" observedRunningTime="2025-11-24 06:46:35.313334379 +0000 UTC m=+8.125973177" watchObservedRunningTime="2025-11-24 06:46:35.313606729 +0000 UTC m=+8.126245527" Nov 24 06:46:39.570341 sudo[1792]: pam_unix(sudo:session): session closed for user root Nov 24 06:46:39.571761 sshd[1791]: Connection closed by 10.0.0.1 port 52230 Nov 24 06:46:39.572460 sshd-session[1788]: pam_unix(sshd:session): session closed for user core Nov 24 06:46:39.580342 systemd[1]: sshd@6-10.0.0.28:22-10.0.0.1:52230.service: Deactivated successfully. Nov 24 06:46:39.588132 systemd[1]: session-7.scope: Deactivated successfully. Nov 24 06:46:39.590145 systemd[1]: session-7.scope: Consumed 5.464s CPU time, 225.2M memory peak. Nov 24 06:46:39.596757 systemd-logind[1560]: Session 7 logged out. Waiting for processes to exit. Nov 24 06:46:39.597838 systemd-logind[1560]: Removed session 7. Nov 24 06:46:41.898006 update_engine[1568]: I20251124 06:46:41.897930 1568 update_attempter.cc:509] Updating boot flags... Nov 24 06:46:43.612921 systemd[1]: Created slice kubepods-besteffort-pod093cb392_a4e7_43cb_bc43_03e27c391159.slice - libcontainer container kubepods-besteffort-pod093cb392_a4e7_43cb_bc43_03e27c391159.slice. Nov 24 06:46:43.654426 kubelet[2745]: I1124 06:46:43.654372 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/093cb392-a4e7-43cb-bc43-03e27c391159-tigera-ca-bundle\") pod \"calico-typha-75c6b67ddd-zs77h\" (UID: \"093cb392-a4e7-43cb-bc43-03e27c391159\") " pod="calico-system/calico-typha-75c6b67ddd-zs77h" Nov 24 06:46:43.654426 kubelet[2745]: I1124 06:46:43.654424 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/093cb392-a4e7-43cb-bc43-03e27c391159-typha-certs\") pod \"calico-typha-75c6b67ddd-zs77h\" (UID: \"093cb392-a4e7-43cb-bc43-03e27c391159\") " pod="calico-system/calico-typha-75c6b67ddd-zs77h" Nov 24 06:46:43.654426 kubelet[2745]: I1124 06:46:43.654442 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mgxc\" (UniqueName: \"kubernetes.io/projected/093cb392-a4e7-43cb-bc43-03e27c391159-kube-api-access-4mgxc\") pod \"calico-typha-75c6b67ddd-zs77h\" (UID: \"093cb392-a4e7-43cb-bc43-03e27c391159\") " pod="calico-system/calico-typha-75c6b67ddd-zs77h" Nov 24 06:46:43.808427 systemd[1]: Created slice kubepods-besteffort-pod312dce62_772e_4585_bab0_3e71c3bab553.slice - libcontainer container kubepods-besteffort-pod312dce62_772e_4585_bab0_3e71c3bab553.slice. 
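The pod_startup_latency_tracker entry above for tigera-operator-7dcd859c48-hmqgp reports podStartE2EDuration=3.313606729s (pod creation at 06:46:32 to the watched running time) and podStartSLOduration of roughly 1.863s, which appears to exclude the image-pull window bracketed by firstStartedPulling and lastFinishedPulling (about 1.45s, consistent with the earlier "Pulled image ... in 1.447591591s" entry plus bookkeeping overhead). A small worked computation, assuming this reading of the logged fields rather than quoting kubelet's own code:

    # Seconds past 06:46, copied from the log fields above.
    pod_created        = 32.0            # podCreationTimestamp 06:46:32
    first_started_pull = 33.008091842    # firstStartedPulling
    last_finished_pull = 34.458366945    # lastFinishedPulling
    observed_running   = 35.313606729    # watchObservedRunningTime

    pull_duration = last_finished_pull - first_started_pull   # ~1.450s spent pulling the operator image
    e2e_duration  = observed_running - pod_created            # 3.313606729s = podStartE2EDuration
    slo_duration  = e2e_duration - pull_duration               # ~1.863s = podStartSLOduration

    print(round(pull_duration, 6), round(e2e_duration, 6), round(slo_duration, 6))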
Nov 24 06:46:43.855835 kubelet[2745]: I1124 06:46:43.855724 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/312dce62-772e-4585-bab0-3e71c3bab553-flexvol-driver-host\") pod \"calico-node-rl87q\" (UID: \"312dce62-772e-4585-bab0-3e71c3bab553\") " pod="calico-system/calico-node-rl87q" Nov 24 06:46:43.856087 kubelet[2745]: I1124 06:46:43.855858 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/312dce62-772e-4585-bab0-3e71c3bab553-tigera-ca-bundle\") pod \"calico-node-rl87q\" (UID: \"312dce62-772e-4585-bab0-3e71c3bab553\") " pod="calico-system/calico-node-rl87q" Nov 24 06:46:43.856087 kubelet[2745]: I1124 06:46:43.855922 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/312dce62-772e-4585-bab0-3e71c3bab553-var-lib-calico\") pod \"calico-node-rl87q\" (UID: \"312dce62-772e-4585-bab0-3e71c3bab553\") " pod="calico-system/calico-node-rl87q" Nov 24 06:46:43.856087 kubelet[2745]: I1124 06:46:43.855969 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/312dce62-772e-4585-bab0-3e71c3bab553-cni-bin-dir\") pod \"calico-node-rl87q\" (UID: \"312dce62-772e-4585-bab0-3e71c3bab553\") " pod="calico-system/calico-node-rl87q" Nov 24 06:46:43.856087 kubelet[2745]: I1124 06:46:43.856004 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/312dce62-772e-4585-bab0-3e71c3bab553-cni-log-dir\") pod \"calico-node-rl87q\" (UID: \"312dce62-772e-4585-bab0-3e71c3bab553\") " pod="calico-system/calico-node-rl87q" Nov 24 06:46:43.856087 kubelet[2745]: I1124 06:46:43.856036 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/312dce62-772e-4585-bab0-3e71c3bab553-lib-modules\") pod \"calico-node-rl87q\" (UID: \"312dce62-772e-4585-bab0-3e71c3bab553\") " pod="calico-system/calico-node-rl87q" Nov 24 06:46:43.856383 kubelet[2745]: I1124 06:46:43.856072 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/312dce62-772e-4585-bab0-3e71c3bab553-node-certs\") pod \"calico-node-rl87q\" (UID: \"312dce62-772e-4585-bab0-3e71c3bab553\") " pod="calico-system/calico-node-rl87q" Nov 24 06:46:43.856383 kubelet[2745]: I1124 06:46:43.856108 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/312dce62-772e-4585-bab0-3e71c3bab553-var-run-calico\") pod \"calico-node-rl87q\" (UID: \"312dce62-772e-4585-bab0-3e71c3bab553\") " pod="calico-system/calico-node-rl87q" Nov 24 06:46:43.856383 kubelet[2745]: I1124 06:46:43.856151 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/312dce62-772e-4585-bab0-3e71c3bab553-cni-net-dir\") pod \"calico-node-rl87q\" (UID: \"312dce62-772e-4585-bab0-3e71c3bab553\") " pod="calico-system/calico-node-rl87q" Nov 24 06:46:43.856383 kubelet[2745]: I1124 06:46:43.856204 2745 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vz6xl\" (UniqueName: \"kubernetes.io/projected/312dce62-772e-4585-bab0-3e71c3bab553-kube-api-access-vz6xl\") pod \"calico-node-rl87q\" (UID: \"312dce62-772e-4585-bab0-3e71c3bab553\") " pod="calico-system/calico-node-rl87q" Nov 24 06:46:43.856383 kubelet[2745]: I1124 06:46:43.856250 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/312dce62-772e-4585-bab0-3e71c3bab553-policysync\") pod \"calico-node-rl87q\" (UID: \"312dce62-772e-4585-bab0-3e71c3bab553\") " pod="calico-system/calico-node-rl87q" Nov 24 06:46:43.856493 kubelet[2745]: I1124 06:46:43.856287 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/312dce62-772e-4585-bab0-3e71c3bab553-xtables-lock\") pod \"calico-node-rl87q\" (UID: \"312dce62-772e-4585-bab0-3e71c3bab553\") " pod="calico-system/calico-node-rl87q" Nov 24 06:46:43.916793 containerd[1580]: time="2025-11-24T06:46:43.916389901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-75c6b67ddd-zs77h,Uid:093cb392-a4e7-43cb-bc43-03e27c391159,Namespace:calico-system,Attempt:0,}" Nov 24 06:46:43.938283 containerd[1580]: time="2025-11-24T06:46:43.938207801Z" level=info msg="connecting to shim 768d48057dd42888101d4a0a2ca759e646e6ccf79e2b9343723cc05ca5b75c43" address="unix:///run/containerd/s/d6406cd63774af37f386a18837b58252a8782eac131a461d1aa10c0b132902b4" namespace=k8s.io protocol=ttrpc version=3 Nov 24 06:46:43.966353 kubelet[2745]: E1124 06:46:43.964493 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:43.966353 kubelet[2745]: W1124 06:46:43.964515 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:43.966353 kubelet[2745]: E1124 06:46:43.965353 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:43.966353 kubelet[2745]: E1124 06:46:43.966220 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:43.966353 kubelet[2745]: W1124 06:46:43.966229 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:43.966353 kubelet[2745]: E1124 06:46:43.966241 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:43.965040 systemd[1]: Started cri-containerd-768d48057dd42888101d4a0a2ca759e646e6ccf79e2b9343723cc05ca5b75c43.scope - libcontainer container 768d48057dd42888101d4a0a2ca759e646e6ccf79e2b9343723cc05ca5b75c43. 
Nov 24 06:46:43.973374 kubelet[2745]: E1124 06:46:43.973173 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:43.973374 kubelet[2745]: W1124 06:46:43.973221 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:43.973374 kubelet[2745]: E1124 06:46:43.973242 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:44.001591 kubelet[2745]: E1124 06:46:44.001546 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cppmt" podUID="c75bf025-c8e1-47b4-a88c-b817a4677d22" Nov 24 06:46:44.021846 containerd[1580]: time="2025-11-24T06:46:44.021807694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-75c6b67ddd-zs77h,Uid:093cb392-a4e7-43cb-bc43-03e27c391159,Namespace:calico-system,Attempt:0,} returns sandbox id \"768d48057dd42888101d4a0a2ca759e646e6ccf79e2b9343723cc05ca5b75c43\"" Nov 24 06:46:44.023511 containerd[1580]: time="2025-11-24T06:46:44.023474192Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 24 06:46:44.045479 kubelet[2745]: E1124 06:46:44.045457 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:44.045479 kubelet[2745]: W1124 06:46:44.045475 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:44.045566 kubelet[2745]: E1124 06:46:44.045492 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:44.046463 kubelet[2745]: E1124 06:46:44.046012 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:44.046463 kubelet[2745]: W1124 06:46:44.046029 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:44.046463 kubelet[2745]: E1124 06:46:44.046039 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:44.046463 kubelet[2745]: E1124 06:46:44.046243 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:44.046463 kubelet[2745]: W1124 06:46:44.046257 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:44.046463 kubelet[2745]: E1124 06:46:44.046266 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:46:44.050012 kubelet[2745]: E1124 06:46:44.049979 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:44.050012 kubelet[2745]: W1124 06:46:44.050001 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:44.050012 kubelet[2745]: E1124 06:46:44.050022 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:44.050278 kubelet[2745]: E1124 06:46:44.050261 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:44.050278 kubelet[2745]: W1124 06:46:44.050272 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:44.050350 kubelet[2745]: E1124 06:46:44.050281 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:44.050494 kubelet[2745]: E1124 06:46:44.050469 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:44.050494 kubelet[2745]: W1124 06:46:44.050485 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:44.050572 kubelet[2745]: E1124 06:46:44.050499 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:44.050721 kubelet[2745]: E1124 06:46:44.050696 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:44.050721 kubelet[2745]: W1124 06:46:44.050708 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:44.050721 kubelet[2745]: E1124 06:46:44.050718 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:44.050973 kubelet[2745]: E1124 06:46:44.050955 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:44.050973 kubelet[2745]: W1124 06:46:44.050968 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:44.051024 kubelet[2745]: E1124 06:46:44.050977 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:46:44.051183 kubelet[2745]: E1124 06:46:44.051166 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:44.051183 kubelet[2745]: W1124 06:46:44.051179 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:44.051246 kubelet[2745]: E1124 06:46:44.051187 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:44.051375 kubelet[2745]: E1124 06:46:44.051354 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:44.051375 kubelet[2745]: W1124 06:46:44.051373 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:44.051439 kubelet[2745]: E1124 06:46:44.051382 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:44.051574 kubelet[2745]: E1124 06:46:44.051554 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:44.051574 kubelet[2745]: W1124 06:46:44.051572 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:44.051642 kubelet[2745]: E1124 06:46:44.051582 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:44.051790 kubelet[2745]: E1124 06:46:44.051774 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:44.051790 kubelet[2745]: W1124 06:46:44.051786 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:44.051849 kubelet[2745]: E1124 06:46:44.051798 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:44.052061 kubelet[2745]: E1124 06:46:44.052039 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:44.052061 kubelet[2745]: W1124 06:46:44.052057 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:44.052133 kubelet[2745]: E1124 06:46:44.052065 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:46:44.052275 kubelet[2745]: E1124 06:46:44.052250 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:44.052275 kubelet[2745]: W1124 06:46:44.052261 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:44.052275 kubelet[2745]: E1124 06:46:44.052270 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:44.052477 kubelet[2745]: E1124 06:46:44.052461 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:44.052477 kubelet[2745]: W1124 06:46:44.052473 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:44.052530 kubelet[2745]: E1124 06:46:44.052481 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:44.052682 kubelet[2745]: E1124 06:46:44.052666 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:44.052682 kubelet[2745]: W1124 06:46:44.052678 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:44.052737 kubelet[2745]: E1124 06:46:44.052689 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:44.052932 kubelet[2745]: E1124 06:46:44.052916 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:44.052932 kubelet[2745]: W1124 06:46:44.052929 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:44.052985 kubelet[2745]: E1124 06:46:44.052937 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:44.053128 kubelet[2745]: E1124 06:46:44.053112 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:44.053128 kubelet[2745]: W1124 06:46:44.053124 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:44.053168 kubelet[2745]: E1124 06:46:44.053131 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:46:44.053317 kubelet[2745]: E1124 06:46:44.053302 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:44.053317 kubelet[2745]: W1124 06:46:44.053314 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:44.053366 kubelet[2745]: E1124 06:46:44.053324 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:44.053517 kubelet[2745]: E1124 06:46:44.053500 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:44.053517 kubelet[2745]: W1124 06:46:44.053512 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:44.053562 kubelet[2745]: E1124 06:46:44.053522 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:44.057894 kubelet[2745]: E1124 06:46:44.057839 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:44.057894 kubelet[2745]: W1124 06:46:44.057863 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:44.057965 kubelet[2745]: E1124 06:46:44.057932 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:44.057990 kubelet[2745]: I1124 06:46:44.057961 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgvg5\" (UniqueName: \"kubernetes.io/projected/c75bf025-c8e1-47b4-a88c-b817a4677d22-kube-api-access-xgvg5\") pod \"csi-node-driver-cppmt\" (UID: \"c75bf025-c8e1-47b4-a88c-b817a4677d22\") " pod="calico-system/csi-node-driver-cppmt" Nov 24 06:46:44.058179 kubelet[2745]: E1124 06:46:44.058155 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:44.058179 kubelet[2745]: W1124 06:46:44.058168 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:44.058179 kubelet[2745]: E1124 06:46:44.058177 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:46:44.058254 kubelet[2745]: I1124 06:46:44.058198 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c75bf025-c8e1-47b4-a88c-b817a4677d22-registration-dir\") pod \"csi-node-driver-cppmt\" (UID: \"c75bf025-c8e1-47b4-a88c-b817a4677d22\") " pod="calico-system/csi-node-driver-cppmt" Nov 24 06:46:44.058391 kubelet[2745]: E1124 06:46:44.058368 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:44.058391 kubelet[2745]: W1124 06:46:44.058379 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:44.058391 kubelet[2745]: E1124 06:46:44.058389 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:44.058464 kubelet[2745]: I1124 06:46:44.058408 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c75bf025-c8e1-47b4-a88c-b817a4677d22-kubelet-dir\") pod \"csi-node-driver-cppmt\" (UID: \"c75bf025-c8e1-47b4-a88c-b817a4677d22\") " pod="calico-system/csi-node-driver-cppmt" Nov 24 06:46:44.058665 kubelet[2745]: E1124 06:46:44.058635 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:44.058665 kubelet[2745]: W1124 06:46:44.058656 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:44.058711 kubelet[2745]: E1124 06:46:44.058668 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:44.058909 kubelet[2745]: E1124 06:46:44.058870 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:44.058909 kubelet[2745]: W1124 06:46:44.058898 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:44.058909 kubelet[2745]: E1124 06:46:44.058907 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:44.059134 kubelet[2745]: E1124 06:46:44.059118 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:44.059134 kubelet[2745]: W1124 06:46:44.059129 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:44.059187 kubelet[2745]: E1124 06:46:44.059139 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:46:44.059340 kubelet[2745]: E1124 06:46:44.059325 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:44.059340 kubelet[2745]: W1124 06:46:44.059336 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:44.059383 kubelet[2745]: E1124 06:46:44.059344 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:44.059548 kubelet[2745]: E1124 06:46:44.059533 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:44.059548 kubelet[2745]: W1124 06:46:44.059544 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:44.059596 kubelet[2745]: E1124 06:46:44.059554 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:44.059596 kubelet[2745]: I1124 06:46:44.059585 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c75bf025-c8e1-47b4-a88c-b817a4677d22-socket-dir\") pod \"csi-node-driver-cppmt\" (UID: \"c75bf025-c8e1-47b4-a88c-b817a4677d22\") " pod="calico-system/csi-node-driver-cppmt" Nov 24 06:46:44.059812 kubelet[2745]: E1124 06:46:44.059793 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:44.059812 kubelet[2745]: W1124 06:46:44.059808 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:44.059885 kubelet[2745]: E1124 06:46:44.059819 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:44.060107 kubelet[2745]: E1124 06:46:44.060082 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:44.060107 kubelet[2745]: W1124 06:46:44.060095 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:44.060107 kubelet[2745]: E1124 06:46:44.060103 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:46:44.060310 kubelet[2745]: E1124 06:46:44.060295 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:44.060310 kubelet[2745]: W1124 06:46:44.060305 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:44.060357 kubelet[2745]: E1124 06:46:44.060313 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:44.060357 kubelet[2745]: I1124 06:46:44.060336 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c75bf025-c8e1-47b4-a88c-b817a4677d22-varrun\") pod \"csi-node-driver-cppmt\" (UID: \"c75bf025-c8e1-47b4-a88c-b817a4677d22\") " pod="calico-system/csi-node-driver-cppmt" Nov 24 06:46:44.060707 kubelet[2745]: E1124 06:46:44.060665 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:44.060707 kubelet[2745]: W1124 06:46:44.060699 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:44.060762 kubelet[2745]: E1124 06:46:44.060724 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:44.061007 kubelet[2745]: E1124 06:46:44.060977 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:44.061007 kubelet[2745]: W1124 06:46:44.060992 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:44.061007 kubelet[2745]: E1124 06:46:44.061002 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:44.061254 kubelet[2745]: E1124 06:46:44.061234 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:44.061254 kubelet[2745]: W1124 06:46:44.061250 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:44.061309 kubelet[2745]: E1124 06:46:44.061261 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:46:44.061550 kubelet[2745]: E1124 06:46:44.061496 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:44.061550 kubelet[2745]: W1124 06:46:44.061519 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:44.061550 kubelet[2745]: E1124 06:46:44.061530 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:44.113616 containerd[1580]: time="2025-11-24T06:46:44.113574669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rl87q,Uid:312dce62-772e-4585-bab0-3e71c3bab553,Namespace:calico-system,Attempt:0,}" Nov 24 06:46:44.135192 containerd[1580]: time="2025-11-24T06:46:44.135144511Z" level=info msg="connecting to shim af554fe585b5857159d4142a3fad4ed2536e42ee36236e9143833fb3b10699f5" address="unix:///run/containerd/s/505d202ca6e4e490d745f1b76f8a833541f64eee25ca6eb97ab4e44b9052c6f6" namespace=k8s.io protocol=ttrpc version=3 Nov 24 06:46:44.160027 systemd[1]: Started cri-containerd-af554fe585b5857159d4142a3fad4ed2536e42ee36236e9143833fb3b10699f5.scope - libcontainer container af554fe585b5857159d4142a3fad4ed2536e42ee36236e9143833fb3b10699f5. Nov 24 06:46:44.161156 kubelet[2745]: E1124 06:46:44.161121 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:44.161270 kubelet[2745]: W1124 06:46:44.161249 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:44.161298 kubelet[2745]: E1124 06:46:44.161276 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:44.161666 kubelet[2745]: E1124 06:46:44.161637 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:44.161790 kubelet[2745]: W1124 06:46:44.161769 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:44.161790 kubelet[2745]: E1124 06:46:44.161785 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:44.162267 kubelet[2745]: E1124 06:46:44.162204 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:44.162267 kubelet[2745]: W1124 06:46:44.162246 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:44.162267 kubelet[2745]: E1124 06:46:44.162256 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:46:44.162767 kubelet[2745]: E1124 06:46:44.162732 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:44.162767 kubelet[2745]: W1124 06:46:44.162760 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:44.162767 kubelet[2745]: E1124 06:46:44.162781 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:44.163684 kubelet[2745]: E1124 06:46:44.163603 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:44.163684 kubelet[2745]: W1124 06:46:44.163617 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:44.163684 kubelet[2745]: E1124 06:46:44.163626 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:44.164427 kubelet[2745]: E1124 06:46:44.164403 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:44.164427 kubelet[2745]: W1124 06:46:44.164419 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:44.164555 kubelet[2745]: E1124 06:46:44.164429 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:44.165356 kubelet[2745]: E1124 06:46:44.165318 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:44.165356 kubelet[2745]: W1124 06:46:44.165333 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:44.165356 kubelet[2745]: E1124 06:46:44.165346 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:44.165654 kubelet[2745]: E1124 06:46:44.165629 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:44.165686 kubelet[2745]: W1124 06:46:44.165681 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:44.165710 kubelet[2745]: E1124 06:46:44.165691 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:46:44.166189 kubelet[2745]: E1124 06:46:44.165932 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:44.166189 kubelet[2745]: W1124 06:46:44.165944 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:44.166189 kubelet[2745]: E1124 06:46:44.165952 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:44.166189 kubelet[2745]: E1124 06:46:44.166178 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:44.166189 kubelet[2745]: W1124 06:46:44.166185 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:44.166189 kubelet[2745]: E1124 06:46:44.166193 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:44.166712 kubelet[2745]: E1124 06:46:44.166406 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:44.166712 kubelet[2745]: W1124 06:46:44.166419 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:44.166712 kubelet[2745]: E1124 06:46:44.166426 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:44.166712 kubelet[2745]: E1124 06:46:44.166684 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:44.166712 kubelet[2745]: W1124 06:46:44.166694 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:44.166712 kubelet[2745]: E1124 06:46:44.166704 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:44.167125 kubelet[2745]: E1124 06:46:44.166952 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:44.167125 kubelet[2745]: W1124 06:46:44.166964 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:44.167125 kubelet[2745]: E1124 06:46:44.167009 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:46:44.167794 kubelet[2745]: E1124 06:46:44.167662 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:44.167794 kubelet[2745]: W1124 06:46:44.167677 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:44.167794 kubelet[2745]: E1124 06:46:44.167686 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:44.170183 kubelet[2745]: E1124 06:46:44.169314 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:44.170183 kubelet[2745]: W1124 06:46:44.169340 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:44.170290 kubelet[2745]: E1124 06:46:44.170274 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:44.171239 kubelet[2745]: E1124 06:46:44.171031 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:44.171476 kubelet[2745]: W1124 06:46:44.171389 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:44.172025 kubelet[2745]: E1124 06:46:44.171919 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:44.174562 kubelet[2745]: E1124 06:46:44.174550 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:44.174761 kubelet[2745]: W1124 06:46:44.174631 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:44.174761 kubelet[2745]: E1124 06:46:44.174644 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:44.175166 kubelet[2745]: E1124 06:46:44.175129 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:44.175325 kubelet[2745]: W1124 06:46:44.175141 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:44.175325 kubelet[2745]: E1124 06:46:44.175259 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:46:44.175693 kubelet[2745]: E1124 06:46:44.175625 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:44.175693 kubelet[2745]: W1124 06:46:44.175666 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:44.175693 kubelet[2745]: E1124 06:46:44.175679 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:44.176215 kubelet[2745]: E1124 06:46:44.176203 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:44.176335 kubelet[2745]: W1124 06:46:44.176289 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:44.176335 kubelet[2745]: E1124 06:46:44.176304 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:44.176856 kubelet[2745]: E1124 06:46:44.176834 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:44.176950 kubelet[2745]: W1124 06:46:44.176905 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:44.176950 kubelet[2745]: E1124 06:46:44.176915 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:44.177385 kubelet[2745]: E1124 06:46:44.177342 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:44.177385 kubelet[2745]: W1124 06:46:44.177353 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:44.177484 kubelet[2745]: E1124 06:46:44.177361 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:44.177944 kubelet[2745]: E1124 06:46:44.177863 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:44.177944 kubelet[2745]: W1124 06:46:44.177911 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:44.177944 kubelet[2745]: E1124 06:46:44.177921 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:46:44.178438 kubelet[2745]: E1124 06:46:44.178400 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:44.178438 kubelet[2745]: W1124 06:46:44.178410 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:44.178438 kubelet[2745]: E1124 06:46:44.178419 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:44.178954 kubelet[2745]: E1124 06:46:44.178943 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:44.179069 kubelet[2745]: W1124 06:46:44.179036 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:44.179069 kubelet[2745]: E1124 06:46:44.179050 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:44.184919 kubelet[2745]: E1124 06:46:44.184856 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:44.185067 kubelet[2745]: W1124 06:46:44.184868 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:44.185067 kubelet[2745]: E1124 06:46:44.185018 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:44.195304 containerd[1580]: time="2025-11-24T06:46:44.195255118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rl87q,Uid:312dce62-772e-4585-bab0-3e71c3bab553,Namespace:calico-system,Attempt:0,} returns sandbox id \"af554fe585b5857159d4142a3fad4ed2536e42ee36236e9143833fb3b10699f5\"" Nov 24 06:46:45.281689 kubelet[2745]: E1124 06:46:45.281628 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cppmt" podUID="c75bf025-c8e1-47b4-a88c-b817a4677d22" Nov 24 06:46:45.366561 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2022851306.mount: Deactivated successfully. 
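The repeated driver-call.go / plugins.go triplet above comes from the kubelet's FlexVolume probe: it execs each driver binary under the plugin directory with the init argument and JSON-decodes whatever the driver prints on stdout. Because /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds does not exist, the exec fails, stdout stays empty, and decoding an empty string yields "unexpected end of JSON input". A minimal Go sketch of that call pattern follows; the driverStatus field names are illustrative assumptions, not kubelet's internal type.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus approximates the JSON a FlexVolume driver is expected to print;
// field names are assumptions for illustration only.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

// probeDriver mirrors the failing sequence in the log: exec the driver with
// "init", then unmarshal stdout. A missing binary leaves out empty, and
// json.Unmarshal on empty input reports "unexpected end of JSON input".
func probeDriver(path string) (*driverStatus, error) {
	out, execErr := exec.Command(path, "init").Output()
	var st driverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		return nil, fmt.Errorf("unmarshal init output %q: %v (exec error: %v)", out, err, execErr)
	}
	return &st, nil
}

func main() {
	_, err := probeDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds")
	fmt.Println("probe result:", err)
}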
Nov 24 06:46:45.707535 containerd[1580]: time="2025-11-24T06:46:45.707424961Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:46:45.708386 containerd[1580]: time="2025-11-24T06:46:45.708341678Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Nov 24 06:46:45.709642 containerd[1580]: time="2025-11-24T06:46:45.709609379Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:46:45.711532 containerd[1580]: time="2025-11-24T06:46:45.711483639Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:46:45.712255 containerd[1580]: time="2025-11-24T06:46:45.712214362Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 1.68869794s" Nov 24 06:46:45.712255 containerd[1580]: time="2025-11-24T06:46:45.712255180Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 24 06:46:45.713265 containerd[1580]: time="2025-11-24T06:46:45.713242640Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 24 06:46:45.725470 containerd[1580]: time="2025-11-24T06:46:45.725429171Z" level=info msg="CreateContainer within sandbox \"768d48057dd42888101d4a0a2ca759e646e6ccf79e2b9343723cc05ca5b75c43\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 24 06:46:45.733635 containerd[1580]: time="2025-11-24T06:46:45.733597342Z" level=info msg="Container e40c7a38bdd84fac9d3f4d16379d9c557c3f95c910d16835840bee963ef3c1a0: CDI devices from CRI Config.CDIDevices: []" Nov 24 06:46:45.741064 containerd[1580]: time="2025-11-24T06:46:45.741040891Z" level=info msg="CreateContainer within sandbox \"768d48057dd42888101d4a0a2ca759e646e6ccf79e2b9343723cc05ca5b75c43\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e40c7a38bdd84fac9d3f4d16379d9c557c3f95c910d16835840bee963ef3c1a0\"" Nov 24 06:46:45.741986 containerd[1580]: time="2025-11-24T06:46:45.741952577Z" level=info msg="StartContainer for \"e40c7a38bdd84fac9d3f4d16379d9c557c3f95c910d16835840bee963ef3c1a0\"" Nov 24 06:46:45.742977 containerd[1580]: time="2025-11-24T06:46:45.742943965Z" level=info msg="connecting to shim e40c7a38bdd84fac9d3f4d16379d9c557c3f95c910d16835840bee963ef3c1a0" address="unix:///run/containerd/s/d6406cd63774af37f386a18837b58252a8782eac131a461d1aa10c0b132902b4" protocol=ttrpc version=3 Nov 24 06:46:45.762031 systemd[1]: Started cri-containerd-e40c7a38bdd84fac9d3f4d16379d9c557c3f95c910d16835840bee963ef3c1a0.scope - libcontainer container e40c7a38bdd84fac9d3f4d16379d9c557c3f95c910d16835840bee963ef3c1a0. 
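The typha pull above reports 35,234,628 bytes read and a wall-clock duration of 1.68869794s in Go's duration notation. A small, self-contained sketch of the arithmetic, using only the values quoted in the log, which works out to roughly 21 MB/s:

package main

import (
	"fmt"
	"time"
)

func main() {
	const bytesRead = 35234628 // "bytes read" reported for ghcr.io/flatcar/calico/typha:v3.30.4
	d, err := time.ParseDuration("1.68869794s") // duration quoted in the "Pulled image" message
	if err != nil {
		panic(err)
	}
	mbPerSec := float64(bytesRead) / d.Seconds() / 1e6
	fmt.Printf("pulled %d bytes in %v (~%.1f MB/s)\n", bytesRead, d, mbPerSec)
}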
Nov 24 06:46:45.813897 containerd[1580]: time="2025-11-24T06:46:45.813351139Z" level=info msg="StartContainer for \"e40c7a38bdd84fac9d3f4d16379d9c557c3f95c910d16835840bee963ef3c1a0\" returns successfully" Nov 24 06:46:46.334838 kubelet[2745]: I1124 06:46:46.334758 2745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-75c6b67ddd-zs77h" podStartSLOduration=1.644451613 podStartE2EDuration="3.334743533s" podCreationTimestamp="2025-11-24 06:46:43 +0000 UTC" firstStartedPulling="2025-11-24 06:46:44.023006416 +0000 UTC m=+16.835645224" lastFinishedPulling="2025-11-24 06:46:45.713298346 +0000 UTC m=+18.525937144" observedRunningTime="2025-11-24 06:46:46.334079877 +0000 UTC m=+19.146718675" watchObservedRunningTime="2025-11-24 06:46:46.334743533 +0000 UTC m=+19.147382331" Nov 24 06:46:46.369127 kubelet[2745]: E1124 06:46:46.369089 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:46.369127 kubelet[2745]: W1124 06:46:46.369110 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:46.369127 kubelet[2745]: E1124 06:46:46.369128 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:46.369358 kubelet[2745]: E1124 06:46:46.369342 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:46.369358 kubelet[2745]: W1124 06:46:46.369354 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:46.369407 kubelet[2745]: E1124 06:46:46.369377 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:46.369591 kubelet[2745]: E1124 06:46:46.369550 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:46.369591 kubelet[2745]: W1124 06:46:46.369561 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:46.369591 kubelet[2745]: E1124 06:46:46.369569 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:46.369857 kubelet[2745]: E1124 06:46:46.369731 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:46.369857 kubelet[2745]: W1124 06:46:46.369738 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:46.369857 kubelet[2745]: E1124 06:46:46.369746 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:46:46.369984 kubelet[2745]: E1124 06:46:46.369969 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:46.369984 kubelet[2745]: W1124 06:46:46.369977 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:46.369984 kubelet[2745]: E1124 06:46:46.369985 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:46.370173 kubelet[2745]: E1124 06:46:46.370150 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:46.370173 kubelet[2745]: W1124 06:46:46.370163 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:46.370173 kubelet[2745]: E1124 06:46:46.370172 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:46.370342 kubelet[2745]: E1124 06:46:46.370324 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:46.370342 kubelet[2745]: W1124 06:46:46.370334 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:46.370342 kubelet[2745]: E1124 06:46:46.370341 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:46.370513 kubelet[2745]: E1124 06:46:46.370494 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:46.370513 kubelet[2745]: W1124 06:46:46.370503 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:46.370513 kubelet[2745]: E1124 06:46:46.370510 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:46.370682 kubelet[2745]: E1124 06:46:46.370663 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:46.370682 kubelet[2745]: W1124 06:46:46.370672 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:46.370682 kubelet[2745]: E1124 06:46:46.370680 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:46:46.370857 kubelet[2745]: E1124 06:46:46.370838 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:46.370857 kubelet[2745]: W1124 06:46:46.370847 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:46.370857 kubelet[2745]: E1124 06:46:46.370853 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:46.371037 kubelet[2745]: E1124 06:46:46.371018 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:46.371037 kubelet[2745]: W1124 06:46:46.371027 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:46.371037 kubelet[2745]: E1124 06:46:46.371034 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:46.371222 kubelet[2745]: E1124 06:46:46.371203 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:46.371222 kubelet[2745]: W1124 06:46:46.371213 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:46.371222 kubelet[2745]: E1124 06:46:46.371220 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:46.371408 kubelet[2745]: E1124 06:46:46.371390 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:46.371408 kubelet[2745]: W1124 06:46:46.371399 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:46.371408 kubelet[2745]: E1124 06:46:46.371405 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:46.371597 kubelet[2745]: E1124 06:46:46.371579 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:46.371597 kubelet[2745]: W1124 06:46:46.371588 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:46.371597 kubelet[2745]: E1124 06:46:46.371595 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:46:46.371781 kubelet[2745]: E1124 06:46:46.371762 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:46.371781 kubelet[2745]: W1124 06:46:46.371771 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:46.371781 kubelet[2745]: E1124 06:46:46.371778 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:46.382258 kubelet[2745]: E1124 06:46:46.382222 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:46.382258 kubelet[2745]: W1124 06:46:46.382242 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:46.382361 kubelet[2745]: E1124 06:46:46.382262 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:46.382491 kubelet[2745]: E1124 06:46:46.382472 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:46.382491 kubelet[2745]: W1124 06:46:46.382482 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:46.382491 kubelet[2745]: E1124 06:46:46.382490 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:46.382765 kubelet[2745]: E1124 06:46:46.382726 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:46.382765 kubelet[2745]: W1124 06:46:46.382757 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:46.382838 kubelet[2745]: E1124 06:46:46.382781 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:46.383169 kubelet[2745]: E1124 06:46:46.383124 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:46.383169 kubelet[2745]: W1124 06:46:46.383159 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:46.383297 kubelet[2745]: E1124 06:46:46.383175 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:46:46.383460 kubelet[2745]: E1124 06:46:46.383403 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:46.383460 kubelet[2745]: W1124 06:46:46.383416 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:46.383460 kubelet[2745]: E1124 06:46:46.383427 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:46.383692 kubelet[2745]: E1124 06:46:46.383665 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:46.383692 kubelet[2745]: W1124 06:46:46.383679 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:46.383692 kubelet[2745]: E1124 06:46:46.383689 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:46.383940 kubelet[2745]: E1124 06:46:46.383925 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:46.383940 kubelet[2745]: W1124 06:46:46.383934 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:46.383998 kubelet[2745]: E1124 06:46:46.383942 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:46.384136 kubelet[2745]: E1124 06:46:46.384119 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:46.384136 kubelet[2745]: W1124 06:46:46.384129 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:46.384205 kubelet[2745]: E1124 06:46:46.384140 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:46.384361 kubelet[2745]: E1124 06:46:46.384342 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:46.384361 kubelet[2745]: W1124 06:46:46.384357 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:46.384421 kubelet[2745]: E1124 06:46:46.384367 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:46:46.384569 kubelet[2745]: E1124 06:46:46.384552 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:46.384569 kubelet[2745]: W1124 06:46:46.384564 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:46.384622 kubelet[2745]: E1124 06:46:46.384573 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:46.384793 kubelet[2745]: E1124 06:46:46.384778 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:46.384793 kubelet[2745]: W1124 06:46:46.384788 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:46.384836 kubelet[2745]: E1124 06:46:46.384796 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:46.385031 kubelet[2745]: E1124 06:46:46.385014 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:46.385031 kubelet[2745]: W1124 06:46:46.385028 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:46.385087 kubelet[2745]: E1124 06:46:46.385039 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:46.385304 kubelet[2745]: E1124 06:46:46.385288 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:46.385304 kubelet[2745]: W1124 06:46:46.385300 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:46.385353 kubelet[2745]: E1124 06:46:46.385311 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:46.385622 kubelet[2745]: E1124 06:46:46.385603 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:46.385622 kubelet[2745]: W1124 06:46:46.385619 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:46.385686 kubelet[2745]: E1124 06:46:46.385632 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:46:46.385862 kubelet[2745]: E1124 06:46:46.385846 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:46.385862 kubelet[2745]: W1124 06:46:46.385857 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:46.385937 kubelet[2745]: E1124 06:46:46.385866 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:46.386089 kubelet[2745]: E1124 06:46:46.386075 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:46.386089 kubelet[2745]: W1124 06:46:46.386085 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:46.386158 kubelet[2745]: E1124 06:46:46.386093 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:46.386418 kubelet[2745]: E1124 06:46:46.386400 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:46.386418 kubelet[2745]: W1124 06:46:46.386414 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:46.386465 kubelet[2745]: E1124 06:46:46.386425 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:46.386633 kubelet[2745]: E1124 06:46:46.386617 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:46.386633 kubelet[2745]: W1124 06:46:46.386629 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:46.386681 kubelet[2745]: E1124 06:46:46.386638 2745 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:46:46.985349 containerd[1580]: time="2025-11-24T06:46:46.985302376Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:46:46.986104 containerd[1580]: time="2025-11-24T06:46:46.986075489Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Nov 24 06:46:46.987347 containerd[1580]: time="2025-11-24T06:46:46.987291050Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:46:46.989193 containerd[1580]: time="2025-11-24T06:46:46.989157883Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:46:46.989635 containerd[1580]: time="2025-11-24T06:46:46.989600321Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.276330219s" Nov 24 06:46:46.989667 containerd[1580]: time="2025-11-24T06:46:46.989629547Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 24 06:46:46.993560 containerd[1580]: time="2025-11-24T06:46:46.993537123Z" level=info msg="CreateContainer within sandbox \"af554fe585b5857159d4142a3fad4ed2536e42ee36236e9143833fb3b10699f5\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 24 06:46:47.002421 containerd[1580]: time="2025-11-24T06:46:47.002390950Z" level=info msg="Container b90d5c472bb01fafa7ee974b044e686e30cac2fc04d2bea1caea668fe5b63cd8: CDI devices from CRI Config.CDIDevices: []" Nov 24 06:46:47.012763 containerd[1580]: time="2025-11-24T06:46:47.012728155Z" level=info msg="CreateContainer within sandbox \"af554fe585b5857159d4142a3fad4ed2536e42ee36236e9143833fb3b10699f5\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b90d5c472bb01fafa7ee974b044e686e30cac2fc04d2bea1caea668fe5b63cd8\"" Nov 24 06:46:47.013280 containerd[1580]: time="2025-11-24T06:46:47.013246987Z" level=info msg="StartContainer for \"b90d5c472bb01fafa7ee974b044e686e30cac2fc04d2bea1caea668fe5b63cd8\"" Nov 24 06:46:47.014726 containerd[1580]: time="2025-11-24T06:46:47.014689365Z" level=info msg="connecting to shim b90d5c472bb01fafa7ee974b044e686e30cac2fc04d2bea1caea668fe5b63cd8" address="unix:///run/containerd/s/505d202ca6e4e490d745f1b76f8a833541f64eee25ca6eb97ab4e44b9052c6f6" protocol=ttrpc version=3 Nov 24 06:46:47.038012 systemd[1]: Started cri-containerd-b90d5c472bb01fafa7ee974b044e686e30cac2fc04d2bea1caea668fe5b63cd8.scope - libcontainer container b90d5c472bb01fafa7ee974b044e686e30cac2fc04d2bea1caea668fe5b63cd8. 
Nov 24 06:46:47.115214 containerd[1580]: time="2025-11-24T06:46:47.115173534Z" level=info msg="StartContainer for \"b90d5c472bb01fafa7ee974b044e686e30cac2fc04d2bea1caea668fe5b63cd8\" returns successfully" Nov 24 06:46:47.125370 systemd[1]: cri-containerd-b90d5c472bb01fafa7ee974b044e686e30cac2fc04d2bea1caea668fe5b63cd8.scope: Deactivated successfully. Nov 24 06:46:47.128274 containerd[1580]: time="2025-11-24T06:46:47.128226958Z" level=info msg="received container exit event container_id:\"b90d5c472bb01fafa7ee974b044e686e30cac2fc04d2bea1caea668fe5b63cd8\" id:\"b90d5c472bb01fafa7ee974b044e686e30cac2fc04d2bea1caea668fe5b63cd8\" pid:3452 exited_at:{seconds:1763966807 nanos:127834735}" Nov 24 06:46:47.149038 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b90d5c472bb01fafa7ee974b044e686e30cac2fc04d2bea1caea668fe5b63cd8-rootfs.mount: Deactivated successfully. Nov 24 06:46:47.282606 kubelet[2745]: E1124 06:46:47.282217 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cppmt" podUID="c75bf025-c8e1-47b4-a88c-b817a4677d22" Nov 24 06:46:47.328104 kubelet[2745]: I1124 06:46:47.328075 2745 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 06:46:48.332311 containerd[1580]: time="2025-11-24T06:46:48.332214361Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 24 06:46:49.281436 kubelet[2745]: E1124 06:46:49.281393 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cppmt" podUID="c75bf025-c8e1-47b4-a88c-b817a4677d22" Nov 24 06:46:50.593985 containerd[1580]: time="2025-11-24T06:46:50.593927098Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:46:50.594728 containerd[1580]: time="2025-11-24T06:46:50.594696370Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 24 06:46:50.595944 containerd[1580]: time="2025-11-24T06:46:50.595907688Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:46:50.597936 containerd[1580]: time="2025-11-24T06:46:50.597862180Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:46:50.598448 containerd[1580]: time="2025-11-24T06:46:50.598423550Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.266166528s" Nov 24 06:46:50.598492 containerd[1580]: time="2025-11-24T06:46:50.598449729Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" 
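The exit event above carries a protobuf-style timestamp, exited_at:{seconds:1763966807 nanos:127834735}, which is the same instant as the surrounding 06:46:47 entries. A short sketch converting that seconds/nanos pair back to a readable UTC time:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Values copied from the "received container exit event" entry above.
	exitedAt := time.Unix(1763966807, 127834735).UTC()
	fmt.Println(exitedAt.Format(time.RFC3339Nano)) // 2025-11-24T06:46:47.127834735Z
}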
Nov 24 06:46:50.602840 containerd[1580]: time="2025-11-24T06:46:50.602809604Z" level=info msg="CreateContainer within sandbox \"af554fe585b5857159d4142a3fad4ed2536e42ee36236e9143833fb3b10699f5\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 24 06:46:50.612569 containerd[1580]: time="2025-11-24T06:46:50.612522840Z" level=info msg="Container 53203df1a46ed1f07bd4b6616ab5b9f4ca27c6d5e192d4f3dfc896055d64329e: CDI devices from CRI Config.CDIDevices: []" Nov 24 06:46:50.622454 containerd[1580]: time="2025-11-24T06:46:50.622406939Z" level=info msg="CreateContainer within sandbox \"af554fe585b5857159d4142a3fad4ed2536e42ee36236e9143833fb3b10699f5\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"53203df1a46ed1f07bd4b6616ab5b9f4ca27c6d5e192d4f3dfc896055d64329e\"" Nov 24 06:46:50.622977 containerd[1580]: time="2025-11-24T06:46:50.622923465Z" level=info msg="StartContainer for \"53203df1a46ed1f07bd4b6616ab5b9f4ca27c6d5e192d4f3dfc896055d64329e\"" Nov 24 06:46:50.624290 containerd[1580]: time="2025-11-24T06:46:50.624267222Z" level=info msg="connecting to shim 53203df1a46ed1f07bd4b6616ab5b9f4ca27c6d5e192d4f3dfc896055d64329e" address="unix:///run/containerd/s/505d202ca6e4e490d745f1b76f8a833541f64eee25ca6eb97ab4e44b9052c6f6" protocol=ttrpc version=3 Nov 24 06:46:50.647020 systemd[1]: Started cri-containerd-53203df1a46ed1f07bd4b6616ab5b9f4ca27c6d5e192d4f3dfc896055d64329e.scope - libcontainer container 53203df1a46ed1f07bd4b6616ab5b9f4ca27c6d5e192d4f3dfc896055d64329e. Nov 24 06:46:50.728303 containerd[1580]: time="2025-11-24T06:46:50.728265563Z" level=info msg="StartContainer for \"53203df1a46ed1f07bd4b6616ab5b9f4ca27c6d5e192d4f3dfc896055d64329e\" returns successfully" Nov 24 06:46:51.281418 kubelet[2745]: E1124 06:46:51.281342 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cppmt" podUID="c75bf025-c8e1-47b4-a88c-b817a4677d22" Nov 24 06:46:52.019043 systemd[1]: cri-containerd-53203df1a46ed1f07bd4b6616ab5b9f4ca27c6d5e192d4f3dfc896055d64329e.scope: Deactivated successfully. Nov 24 06:46:52.019391 systemd[1]: cri-containerd-53203df1a46ed1f07bd4b6616ab5b9f4ca27c6d5e192d4f3dfc896055d64329e.scope: Consumed 637ms CPU time, 181.8M memory peak, 3.8M read from disk, 171.3M written to disk. Nov 24 06:46:52.041526 containerd[1580]: time="2025-11-24T06:46:52.041468314Z" level=info msg="received container exit event container_id:\"53203df1a46ed1f07bd4b6616ab5b9f4ca27c6d5e192d4f3dfc896055d64329e\" id:\"53203df1a46ed1f07bd4b6616ab5b9f4ca27c6d5e192d4f3dfc896055d64329e\" pid:3513 exited_at:{seconds:1763966812 nanos:21328038}" Nov 24 06:46:52.066130 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-53203df1a46ed1f07bd4b6616ab5b9f4ca27c6d5e192d4f3dfc896055d64329e-rootfs.mount: Deactivated successfully. Nov 24 06:46:52.088808 kubelet[2745]: I1124 06:46:52.088781 2745 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 24 06:46:52.326863 systemd[1]: Created slice kubepods-burstable-podb86734dd_286f_4394_a7c7_7b3dc56956f1.slice - libcontainer container kubepods-burstable-podb86734dd_286f_4394_a7c7_7b3dc56956f1.slice. Nov 24 06:46:52.354177 systemd[1]: Created slice kubepods-besteffort-podc75bf025_c8e1_47b4_a88c_b817a4677d22.slice - libcontainer container kubepods-besteffort-podc75bf025_c8e1_47b4_a88c_b817a4677d22.slice. 
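The slice names created above follow the cgroup naming convention visible in the log itself: the pod's QoS class and UID are joined into kubepods-<qos>-pod<uid>.slice, with the dashes in the UID escaped to underscores (b86734dd-286f-4394-a7c7-7b3dc56956f1 becomes kubepods-burstable-podb86734dd_286f_4394_a7c7_7b3dc56956f1.slice). A tiny sketch of that mapping, derived only from the names shown here and not from kubelet source:

package main

import (
	"fmt"
	"strings"
)

// sliceName reproduces the naming pattern seen in the "Created slice" entries;
// it illustrates the observed convention, not kubelet code.
func sliceName(qos, podUID string) string {
	return "kubepods-" + qos + "-pod" + strings.ReplaceAll(podUID, "-", "_") + ".slice"
}

func main() {
	fmt.Println(sliceName("burstable", "b86734dd-286f-4394-a7c7-7b3dc56956f1"))
	fmt.Println(sliceName("besteffort", "c75bf025-c8e1-47b4-a88c-b817a4677d22"))
}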
Nov 24 06:46:52.358947 containerd[1580]: time="2025-11-24T06:46:52.358900334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cppmt,Uid:c75bf025-c8e1-47b4-a88c-b817a4677d22,Namespace:calico-system,Attempt:0,}" Nov 24 06:46:52.363607 systemd[1]: Created slice kubepods-burstable-podfd1e74d6_669b_452f_99a0_8be45fb721f8.slice - libcontainer container kubepods-burstable-podfd1e74d6_669b_452f_99a0_8be45fb721f8.slice. Nov 24 06:46:52.379704 systemd[1]: Created slice kubepods-besteffort-pod1302b06d_b9df_431a_8827_afe67da7f5a6.slice - libcontainer container kubepods-besteffort-pod1302b06d_b9df_431a_8827_afe67da7f5a6.slice. Nov 24 06:46:52.388227 systemd[1]: Created slice kubepods-besteffort-pod177b7bf1_bf8e_4661_9261_5e6527071df2.slice - libcontainer container kubepods-besteffort-pod177b7bf1_bf8e_4661_9261_5e6527071df2.slice. Nov 24 06:46:52.396241 systemd[1]: Created slice kubepods-besteffort-pod7a2d00df_2419_4869_83bf_460d83fbab1e.slice - libcontainer container kubepods-besteffort-pod7a2d00df_2419_4869_83bf_460d83fbab1e.slice. Nov 24 06:46:52.405485 systemd[1]: Created slice kubepods-besteffort-pod21b695ab_2e0f_4bcd_851e_75e047fb3c73.slice - libcontainer container kubepods-besteffort-pod21b695ab_2e0f_4bcd_851e_75e047fb3c73.slice. Nov 24 06:46:52.416180 systemd[1]: Created slice kubepods-besteffort-pod18910045_0daa_467f_826a_97114985e2d4.slice - libcontainer container kubepods-besteffort-pod18910045_0daa_467f_826a_97114985e2d4.slice. Nov 24 06:46:52.427384 kubelet[2745]: I1124 06:46:52.427340 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfzdl\" (UniqueName: \"kubernetes.io/projected/b86734dd-286f-4394-a7c7-7b3dc56956f1-kube-api-access-hfzdl\") pod \"coredns-674b8bbfcf-rmvgx\" (UID: \"b86734dd-286f-4394-a7c7-7b3dc56956f1\") " pod="kube-system/coredns-674b8bbfcf-rmvgx" Nov 24 06:46:52.427384 kubelet[2745]: I1124 06:46:52.427375 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fd1e74d6-669b-452f-99a0-8be45fb721f8-config-volume\") pod \"coredns-674b8bbfcf-g9dlx\" (UID: \"fd1e74d6-669b-452f-99a0-8be45fb721f8\") " pod="kube-system/coredns-674b8bbfcf-g9dlx" Nov 24 06:46:52.427748 kubelet[2745]: I1124 06:46:52.427394 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbr8j\" (UniqueName: \"kubernetes.io/projected/fd1e74d6-669b-452f-99a0-8be45fb721f8-kube-api-access-bbr8j\") pod \"coredns-674b8bbfcf-g9dlx\" (UID: \"fd1e74d6-669b-452f-99a0-8be45fb721f8\") " pod="kube-system/coredns-674b8bbfcf-g9dlx" Nov 24 06:46:52.427748 kubelet[2745]: I1124 06:46:52.427490 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18910045-0daa-467f-826a-97114985e2d4-whisker-ca-bundle\") pod \"whisker-7b9b66bf99-chcdl\" (UID: \"18910045-0daa-467f-826a-97114985e2d4\") " pod="calico-system/whisker-7b9b66bf99-chcdl" Nov 24 06:46:52.427748 kubelet[2745]: I1124 06:46:52.427620 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21b695ab-2e0f-4bcd-851e-75e047fb3c73-config\") pod \"goldmane-666569f655-mn54z\" (UID: \"21b695ab-2e0f-4bcd-851e-75e047fb3c73\") " pod="calico-system/goldmane-666569f655-mn54z" Nov 24 06:46:52.427748 kubelet[2745]: I1124 06:46:52.427647 
2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/177b7bf1-bf8e-4661-9261-5e6527071df2-tigera-ca-bundle\") pod \"calico-kube-controllers-7b4bdcc677-xn7rm\" (UID: \"177b7bf1-bf8e-4661-9261-5e6527071df2\") " pod="calico-system/calico-kube-controllers-7b4bdcc677-xn7rm" Nov 24 06:46:52.427748 kubelet[2745]: I1124 06:46:52.427663 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/18910045-0daa-467f-826a-97114985e2d4-whisker-backend-key-pair\") pod \"whisker-7b9b66bf99-chcdl\" (UID: \"18910045-0daa-467f-826a-97114985e2d4\") " pod="calico-system/whisker-7b9b66bf99-chcdl" Nov 24 06:46:52.427893 kubelet[2745]: I1124 06:46:52.427696 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l97h\" (UniqueName: \"kubernetes.io/projected/18910045-0daa-467f-826a-97114985e2d4-kube-api-access-5l97h\") pod \"whisker-7b9b66bf99-chcdl\" (UID: \"18910045-0daa-467f-826a-97114985e2d4\") " pod="calico-system/whisker-7b9b66bf99-chcdl" Nov 24 06:46:52.427893 kubelet[2745]: I1124 06:46:52.427713 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-498d7\" (UniqueName: \"kubernetes.io/projected/1302b06d-b9df-431a-8827-afe67da7f5a6-kube-api-access-498d7\") pod \"calico-apiserver-67b7ddc5fb-4vc4t\" (UID: \"1302b06d-b9df-431a-8827-afe67da7f5a6\") " pod="calico-apiserver/calico-apiserver-67b7ddc5fb-4vc4t" Nov 24 06:46:52.427893 kubelet[2745]: I1124 06:46:52.427727 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nbx8\" (UniqueName: \"kubernetes.io/projected/7a2d00df-2419-4869-83bf-460d83fbab1e-kube-api-access-7nbx8\") pod \"calico-apiserver-67b7ddc5fb-jmhdl\" (UID: \"7a2d00df-2419-4869-83bf-460d83fbab1e\") " pod="calico-apiserver/calico-apiserver-67b7ddc5fb-jmhdl" Nov 24 06:46:52.427893 kubelet[2745]: I1124 06:46:52.427743 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21b695ab-2e0f-4bcd-851e-75e047fb3c73-goldmane-ca-bundle\") pod \"goldmane-666569f655-mn54z\" (UID: \"21b695ab-2e0f-4bcd-851e-75e047fb3c73\") " pod="calico-system/goldmane-666569f655-mn54z" Nov 24 06:46:52.427893 kubelet[2745]: I1124 06:46:52.427757 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/21b695ab-2e0f-4bcd-851e-75e047fb3c73-goldmane-key-pair\") pod \"goldmane-666569f655-mn54z\" (UID: \"21b695ab-2e0f-4bcd-851e-75e047fb3c73\") " pod="calico-system/goldmane-666569f655-mn54z" Nov 24 06:46:52.428015 kubelet[2745]: I1124 06:46:52.427790 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7a2d00df-2419-4869-83bf-460d83fbab1e-calico-apiserver-certs\") pod \"calico-apiserver-67b7ddc5fb-jmhdl\" (UID: \"7a2d00df-2419-4869-83bf-460d83fbab1e\") " pod="calico-apiserver/calico-apiserver-67b7ddc5fb-jmhdl" Nov 24 06:46:52.428015 kubelet[2745]: I1124 06:46:52.427830 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/b86734dd-286f-4394-a7c7-7b3dc56956f1-config-volume\") pod \"coredns-674b8bbfcf-rmvgx\" (UID: \"b86734dd-286f-4394-a7c7-7b3dc56956f1\") " pod="kube-system/coredns-674b8bbfcf-rmvgx" Nov 24 06:46:52.428015 kubelet[2745]: I1124 06:46:52.427851 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpt78\" (UniqueName: \"kubernetes.io/projected/21b695ab-2e0f-4bcd-851e-75e047fb3c73-kube-api-access-rpt78\") pod \"goldmane-666569f655-mn54z\" (UID: \"21b695ab-2e0f-4bcd-851e-75e047fb3c73\") " pod="calico-system/goldmane-666569f655-mn54z" Nov 24 06:46:52.428015 kubelet[2745]: I1124 06:46:52.427867 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1302b06d-b9df-431a-8827-afe67da7f5a6-calico-apiserver-certs\") pod \"calico-apiserver-67b7ddc5fb-4vc4t\" (UID: \"1302b06d-b9df-431a-8827-afe67da7f5a6\") " pod="calico-apiserver/calico-apiserver-67b7ddc5fb-4vc4t" Nov 24 06:46:52.428015 kubelet[2745]: I1124 06:46:52.427938 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92bkf\" (UniqueName: \"kubernetes.io/projected/177b7bf1-bf8e-4661-9261-5e6527071df2-kube-api-access-92bkf\") pod \"calico-kube-controllers-7b4bdcc677-xn7rm\" (UID: \"177b7bf1-bf8e-4661-9261-5e6527071df2\") " pod="calico-system/calico-kube-controllers-7b4bdcc677-xn7rm" Nov 24 06:46:52.529802 containerd[1580]: time="2025-11-24T06:46:52.529746239Z" level=error msg="Failed to destroy network for sandbox \"1693fcfe5f7720e57483dd02626d6cf2fc34d04f11965985c20b5cd44cbf498b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 06:46:52.531100 containerd[1580]: time="2025-11-24T06:46:52.531044179Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cppmt,Uid:c75bf025-c8e1-47b4-a88c-b817a4677d22,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1693fcfe5f7720e57483dd02626d6cf2fc34d04f11965985c20b5cd44cbf498b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 06:46:52.531829 systemd[1]: run-netns-cni\x2d949345a9\x2d03fc\x2d6ad6\x2d660f\x2df26a34e4b78c.mount: Deactivated successfully. 
Nov 24 06:46:52.541353 kubelet[2745]: E1124 06:46:52.541321 2745 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1693fcfe5f7720e57483dd02626d6cf2fc34d04f11965985c20b5cd44cbf498b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 06:46:52.541483 kubelet[2745]: E1124 06:46:52.541467 2745 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1693fcfe5f7720e57483dd02626d6cf2fc34d04f11965985c20b5cd44cbf498b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cppmt" Nov 24 06:46:52.541547 kubelet[2745]: E1124 06:46:52.541534 2745 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1693fcfe5f7720e57483dd02626d6cf2fc34d04f11965985c20b5cd44cbf498b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cppmt" Nov 24 06:46:52.541652 kubelet[2745]: E1124 06:46:52.541619 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-cppmt_calico-system(c75bf025-c8e1-47b4-a88c-b817a4677d22)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-cppmt_calico-system(c75bf025-c8e1-47b4-a88c-b817a4677d22)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1693fcfe5f7720e57483dd02626d6cf2fc34d04f11965985c20b5cd44cbf498b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cppmt" podUID="c75bf025-c8e1-47b4-a88c-b817a4677d22" Nov 24 06:46:52.630420 containerd[1580]: time="2025-11-24T06:46:52.630303772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rmvgx,Uid:b86734dd-286f-4394-a7c7-7b3dc56956f1,Namespace:kube-system,Attempt:0,}" Nov 24 06:46:52.668379 containerd[1580]: time="2025-11-24T06:46:52.668324331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-g9dlx,Uid:fd1e74d6-669b-452f-99a0-8be45fb721f8,Namespace:kube-system,Attempt:0,}" Nov 24 06:46:52.683129 containerd[1580]: time="2025-11-24T06:46:52.683095647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67b7ddc5fb-4vc4t,Uid:1302b06d-b9df-431a-8827-afe67da7f5a6,Namespace:calico-apiserver,Attempt:0,}" Nov 24 06:46:52.684224 containerd[1580]: time="2025-11-24T06:46:52.684179532Z" level=error msg="Failed to destroy network for sandbox \"7d85b6b1e94f2cb8e580a54651b9313767d2beb6196a7f0b91aea92200c855cd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 06:46:52.685419 containerd[1580]: time="2025-11-24T06:46:52.685367273Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-rmvgx,Uid:b86734dd-286f-4394-a7c7-7b3dc56956f1,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d85b6b1e94f2cb8e580a54651b9313767d2beb6196a7f0b91aea92200c855cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 06:46:52.685774 kubelet[2745]: E1124 06:46:52.685742 2745 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d85b6b1e94f2cb8e580a54651b9313767d2beb6196a7f0b91aea92200c855cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 06:46:52.685894 kubelet[2745]: E1124 06:46:52.685845 2745 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d85b6b1e94f2cb8e580a54651b9313767d2beb6196a7f0b91aea92200c855cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-rmvgx" Nov 24 06:46:52.685894 kubelet[2745]: E1124 06:46:52.685868 2745 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d85b6b1e94f2cb8e580a54651b9313767d2beb6196a7f0b91aea92200c855cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-rmvgx" Nov 24 06:46:52.686063 kubelet[2745]: E1124 06:46:52.685936 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-rmvgx_kube-system(b86734dd-286f-4394-a7c7-7b3dc56956f1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-rmvgx_kube-system(b86734dd-286f-4394-a7c7-7b3dc56956f1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7d85b6b1e94f2cb8e580a54651b9313767d2beb6196a7f0b91aea92200c855cd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-rmvgx" podUID="b86734dd-286f-4394-a7c7-7b3dc56956f1" Nov 24 06:46:52.693288 containerd[1580]: time="2025-11-24T06:46:52.692769923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b4bdcc677-xn7rm,Uid:177b7bf1-bf8e-4661-9261-5e6527071df2,Namespace:calico-system,Attempt:0,}" Nov 24 06:46:52.701399 containerd[1580]: time="2025-11-24T06:46:52.701365874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67b7ddc5fb-jmhdl,Uid:7a2d00df-2419-4869-83bf-460d83fbab1e,Namespace:calico-apiserver,Attempt:0,}" Nov 24 06:46:52.714516 containerd[1580]: time="2025-11-24T06:46:52.714193411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-mn54z,Uid:21b695ab-2e0f-4bcd-851e-75e047fb3c73,Namespace:calico-system,Attempt:0,}" Nov 24 06:46:52.722277 containerd[1580]: time="2025-11-24T06:46:52.722258952Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-7b9b66bf99-chcdl,Uid:18910045-0daa-467f-826a-97114985e2d4,Namespace:calico-system,Attempt:0,}" Nov 24 06:46:52.736465 containerd[1580]: time="2025-11-24T06:46:52.735567115Z" level=error msg="Failed to destroy network for sandbox \"900eaa40a6f6d2a36f7d21c8fc106e8ab2bebf66bcf683cde255cbe46fae6ac5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 06:46:52.738456 containerd[1580]: time="2025-11-24T06:46:52.738417815Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-g9dlx,Uid:fd1e74d6-669b-452f-99a0-8be45fb721f8,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"900eaa40a6f6d2a36f7d21c8fc106e8ab2bebf66bcf683cde255cbe46fae6ac5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 06:46:52.739157 kubelet[2745]: E1124 06:46:52.738686 2745 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"900eaa40a6f6d2a36f7d21c8fc106e8ab2bebf66bcf683cde255cbe46fae6ac5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 06:46:52.739157 kubelet[2745]: E1124 06:46:52.738755 2745 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"900eaa40a6f6d2a36f7d21c8fc106e8ab2bebf66bcf683cde255cbe46fae6ac5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-g9dlx" Nov 24 06:46:52.739157 kubelet[2745]: E1124 06:46:52.738775 2745 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"900eaa40a6f6d2a36f7d21c8fc106e8ab2bebf66bcf683cde255cbe46fae6ac5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-g9dlx" Nov 24 06:46:52.739268 kubelet[2745]: E1124 06:46:52.738823 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-g9dlx_kube-system(fd1e74d6-669b-452f-99a0-8be45fb721f8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-g9dlx_kube-system(fd1e74d6-669b-452f-99a0-8be45fb721f8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"900eaa40a6f6d2a36f7d21c8fc106e8ab2bebf66bcf683cde255cbe46fae6ac5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-g9dlx" podUID="fd1e74d6-669b-452f-99a0-8be45fb721f8" Nov 24 06:46:52.767406 containerd[1580]: time="2025-11-24T06:46:52.767287986Z" level=error msg="Failed to destroy network for sandbox \"a2df4a948c04d8cdc77fdedcaf42c9e0f12519876d51e8a666cb5d1c074759bd\"" error="plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 06:46:52.770027 containerd[1580]: time="2025-11-24T06:46:52.769991118Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67b7ddc5fb-4vc4t,Uid:1302b06d-b9df-431a-8827-afe67da7f5a6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2df4a948c04d8cdc77fdedcaf42c9e0f12519876d51e8a666cb5d1c074759bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 06:46:52.770482 kubelet[2745]: E1124 06:46:52.770443 2745 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2df4a948c04d8cdc77fdedcaf42c9e0f12519876d51e8a666cb5d1c074759bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 06:46:52.770706 kubelet[2745]: E1124 06:46:52.770581 2745 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2df4a948c04d8cdc77fdedcaf42c9e0f12519876d51e8a666cb5d1c074759bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67b7ddc5fb-4vc4t" Nov 24 06:46:52.770706 kubelet[2745]: E1124 06:46:52.770610 2745 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2df4a948c04d8cdc77fdedcaf42c9e0f12519876d51e8a666cb5d1c074759bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67b7ddc5fb-4vc4t" Nov 24 06:46:52.770706 kubelet[2745]: E1124 06:46:52.770672 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-67b7ddc5fb-4vc4t_calico-apiserver(1302b06d-b9df-431a-8827-afe67da7f5a6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-67b7ddc5fb-4vc4t_calico-apiserver(1302b06d-b9df-431a-8827-afe67da7f5a6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a2df4a948c04d8cdc77fdedcaf42c9e0f12519876d51e8a666cb5d1c074759bd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67b7ddc5fb-4vc4t" podUID="1302b06d-b9df-431a-8827-afe67da7f5a6" Nov 24 06:46:52.785669 containerd[1580]: time="2025-11-24T06:46:52.785528609Z" level=error msg="Failed to destroy network for sandbox \"f0b41832adce7ab1dfff20a16afd9ba539b809f8806c9224b022b6635cd52551\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 06:46:52.787480 containerd[1580]: time="2025-11-24T06:46:52.787445316Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-67b7ddc5fb-jmhdl,Uid:7a2d00df-2419-4869-83bf-460d83fbab1e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0b41832adce7ab1dfff20a16afd9ba539b809f8806c9224b022b6635cd52551\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 06:46:52.787855 containerd[1580]: time="2025-11-24T06:46:52.787805105Z" level=error msg="Failed to destroy network for sandbox \"156efb63e2b0367c3c3bee9832a5c498295ed394f4a6451ba8ad7f54de9a5799\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 06:46:52.787907 kubelet[2745]: E1124 06:46:52.787844 2745 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0b41832adce7ab1dfff20a16afd9ba539b809f8806c9224b022b6635cd52551\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 06:46:52.787988 kubelet[2745]: E1124 06:46:52.787924 2745 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0b41832adce7ab1dfff20a16afd9ba539b809f8806c9224b022b6635cd52551\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67b7ddc5fb-jmhdl" Nov 24 06:46:52.787988 kubelet[2745]: E1124 06:46:52.787970 2745 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0b41832adce7ab1dfff20a16afd9ba539b809f8806c9224b022b6635cd52551\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67b7ddc5fb-jmhdl" Nov 24 06:46:52.788074 kubelet[2745]: E1124 06:46:52.788037 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-67b7ddc5fb-jmhdl_calico-apiserver(7a2d00df-2419-4869-83bf-460d83fbab1e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-67b7ddc5fb-jmhdl_calico-apiserver(7a2d00df-2419-4869-83bf-460d83fbab1e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f0b41832adce7ab1dfff20a16afd9ba539b809f8806c9224b022b6635cd52551\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67b7ddc5fb-jmhdl" podUID="7a2d00df-2419-4869-83bf-460d83fbab1e" Nov 24 06:46:52.793540 containerd[1580]: time="2025-11-24T06:46:52.793501033Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b4bdcc677-xn7rm,Uid:177b7bf1-bf8e-4661-9261-5e6527071df2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"156efb63e2b0367c3c3bee9832a5c498295ed394f4a6451ba8ad7f54de9a5799\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 06:46:52.793734 kubelet[2745]: E1124 06:46:52.793706 2745 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"156efb63e2b0367c3c3bee9832a5c498295ed394f4a6451ba8ad7f54de9a5799\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 06:46:52.793768 kubelet[2745]: E1124 06:46:52.793747 2745 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"156efb63e2b0367c3c3bee9832a5c498295ed394f4a6451ba8ad7f54de9a5799\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b4bdcc677-xn7rm" Nov 24 06:46:52.793806 kubelet[2745]: E1124 06:46:52.793766 2745 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"156efb63e2b0367c3c3bee9832a5c498295ed394f4a6451ba8ad7f54de9a5799\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b4bdcc677-xn7rm" Nov 24 06:46:52.793848 kubelet[2745]: E1124 06:46:52.793819 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7b4bdcc677-xn7rm_calico-system(177b7bf1-bf8e-4661-9261-5e6527071df2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7b4bdcc677-xn7rm_calico-system(177b7bf1-bf8e-4661-9261-5e6527071df2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"156efb63e2b0367c3c3bee9832a5c498295ed394f4a6451ba8ad7f54de9a5799\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7b4bdcc677-xn7rm" podUID="177b7bf1-bf8e-4661-9261-5e6527071df2" Nov 24 06:46:52.795816 containerd[1580]: time="2025-11-24T06:46:52.795706826Z" level=error msg="Failed to destroy network for sandbox \"6558872a965d2d62890d8dd6cd609544cca6fcefb60a5353482f5b9a0521dad6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 06:46:52.797656 containerd[1580]: time="2025-11-24T06:46:52.797604167Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-mn54z,Uid:21b695ab-2e0f-4bcd-851e-75e047fb3c73,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6558872a965d2d62890d8dd6cd609544cca6fcefb60a5353482f5b9a0521dad6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 06:46:52.798140 
kubelet[2745]: E1124 06:46:52.797964 2745 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6558872a965d2d62890d8dd6cd609544cca6fcefb60a5353482f5b9a0521dad6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 06:46:52.798140 kubelet[2745]: E1124 06:46:52.798026 2745 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6558872a965d2d62890d8dd6cd609544cca6fcefb60a5353482f5b9a0521dad6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-mn54z" Nov 24 06:46:52.798140 kubelet[2745]: E1124 06:46:52.798051 2745 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6558872a965d2d62890d8dd6cd609544cca6fcefb60a5353482f5b9a0521dad6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-mn54z" Nov 24 06:46:52.798251 kubelet[2745]: E1124 06:46:52.798100 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-mn54z_calico-system(21b695ab-2e0f-4bcd-851e-75e047fb3c73)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-mn54z_calico-system(21b695ab-2e0f-4bcd-851e-75e047fb3c73)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6558872a965d2d62890d8dd6cd609544cca6fcefb60a5353482f5b9a0521dad6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-mn54z" podUID="21b695ab-2e0f-4bcd-851e-75e047fb3c73" Nov 24 06:46:52.802989 containerd[1580]: time="2025-11-24T06:46:52.802951458Z" level=error msg="Failed to destroy network for sandbox \"27cacf5a3c66ba994613656fd65ddd1df3e9718514cf21b9fd4e524ed55e95c0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 06:46:52.804183 containerd[1580]: time="2025-11-24T06:46:52.804134810Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b9b66bf99-chcdl,Uid:18910045-0daa-467f-826a-97114985e2d4,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"27cacf5a3c66ba994613656fd65ddd1df3e9718514cf21b9fd4e524ed55e95c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 06:46:52.804954 kubelet[2745]: E1124 06:46:52.804299 2745 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27cacf5a3c66ba994613656fd65ddd1df3e9718514cf21b9fd4e524ed55e95c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Nov 24 06:46:52.804954 kubelet[2745]: E1124 06:46:52.804327 2745 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27cacf5a3c66ba994613656fd65ddd1df3e9718514cf21b9fd4e524ed55e95c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7b9b66bf99-chcdl" Nov 24 06:46:52.804954 kubelet[2745]: E1124 06:46:52.804354 2745 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27cacf5a3c66ba994613656fd65ddd1df3e9718514cf21b9fd4e524ed55e95c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7b9b66bf99-chcdl" Nov 24 06:46:52.805085 kubelet[2745]: E1124 06:46:52.804394 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7b9b66bf99-chcdl_calico-system(18910045-0daa-467f-826a-97114985e2d4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7b9b66bf99-chcdl_calico-system(18910045-0daa-467f-826a-97114985e2d4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"27cacf5a3c66ba994613656fd65ddd1df3e9718514cf21b9fd4e524ed55e95c0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7b9b66bf99-chcdl" podUID="18910045-0daa-467f-826a-97114985e2d4" Nov 24 06:46:53.350033 containerd[1580]: time="2025-11-24T06:46:53.349959929Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 24 06:46:59.193275 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount283170260.mount: Deactivated successfully. 
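
Every RunPodSandbox attempt in the run above fails with the same underlying error: the Calico CNI plugin stats /var/lib/calico/nodename and gets ENOENT because calico-node has not yet started and written that file, and the accompanying "Failed to destroy network" messages hit the same stat during teardown. Once calico/node is pulled and started (below), sandbox creation starts succeeding (whisker-7d94bc78c9-4zg8l, csi-node-driver-cppmt). A minimal sketch of the existence check the error message describes, standard library only and illustrative rather than Calico's own code:

package main

import (
	"fmt"
	"os"
)

// nodenameReady reports whether the file the CNI error points at exists yet.
// In the log above every sandbox add/delete fails with ENOENT on exactly this
// path until calico-node has started and written it. Illustrative helper only.
func nodenameReady(path string) (bool, error) {
	_, err := os.Stat(path)
	if err == nil {
		return true, nil
	}
	if os.IsNotExist(err) {
		return false, nil
	}
	return false, err
}

func main() {
	ready, err := nodenameReady("/var/lib/calico/nodename")
	fmt.Printf("nodename present: %v (err: %v)\n", ready, err)
}
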
Nov 24 06:46:59.853749 containerd[1580]: time="2025-11-24T06:46:59.853696722Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:46:59.855104 containerd[1580]: time="2025-11-24T06:46:59.854851726Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 24 06:46:59.857352 containerd[1580]: time="2025-11-24T06:46:59.857236666Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:46:59.860907 containerd[1580]: time="2025-11-24T06:46:59.859952800Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:46:59.860907 containerd[1580]: time="2025-11-24T06:46:59.860681131Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 6.510650208s" Nov 24 06:46:59.860907 containerd[1580]: time="2025-11-24T06:46:59.860707771Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 24 06:46:59.883593 containerd[1580]: time="2025-11-24T06:46:59.883543244Z" level=info msg="CreateContainer within sandbox \"af554fe585b5857159d4142a3fad4ed2536e42ee36236e9143833fb3b10699f5\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 24 06:46:59.894220 containerd[1580]: time="2025-11-24T06:46:59.894171685Z" level=info msg="Container 8806c7a7b9d5ed9e5f3a83445266608dd37440823b6404a8850f776881d7ace2: CDI devices from CRI Config.CDIDevices: []" Nov 24 06:46:59.903860 containerd[1580]: time="2025-11-24T06:46:59.903812285Z" level=info msg="CreateContainer within sandbox \"af554fe585b5857159d4142a3fad4ed2536e42ee36236e9143833fb3b10699f5\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"8806c7a7b9d5ed9e5f3a83445266608dd37440823b6404a8850f776881d7ace2\"" Nov 24 06:46:59.904420 containerd[1580]: time="2025-11-24T06:46:59.904375345Z" level=info msg="StartContainer for \"8806c7a7b9d5ed9e5f3a83445266608dd37440823b6404a8850f776881d7ace2\"" Nov 24 06:46:59.906285 containerd[1580]: time="2025-11-24T06:46:59.906240296Z" level=info msg="connecting to shim 8806c7a7b9d5ed9e5f3a83445266608dd37440823b6404a8850f776881d7ace2" address="unix:///run/containerd/s/505d202ca6e4e490d745f1b76f8a833541f64eee25ca6eb97ab4e44b9052c6f6" protocol=ttrpc version=3 Nov 24 06:46:59.929038 systemd[1]: Started cri-containerd-8806c7a7b9d5ed9e5f3a83445266608dd37440823b6404a8850f776881d7ace2.scope - libcontainer container 8806c7a7b9d5ed9e5f3a83445266608dd37440823b6404a8850f776881d7ace2. Nov 24 06:47:00.132962 containerd[1580]: time="2025-11-24T06:47:00.132837108Z" level=info msg="StartContainer for \"8806c7a7b9d5ed9e5f3a83445266608dd37440823b6404a8850f776881d7ace2\" returns successfully" Nov 24 06:47:00.155086 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 24 06:47:00.155186 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . 
All Rights Reserved. Nov 24 06:47:00.315734 kubelet[2745]: I1124 06:47:00.315689 2745 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/18910045-0daa-467f-826a-97114985e2d4-whisker-backend-key-pair\") pod \"18910045-0daa-467f-826a-97114985e2d4\" (UID: \"18910045-0daa-467f-826a-97114985e2d4\") " Nov 24 06:47:00.317137 kubelet[2745]: I1124 06:47:00.316231 2745 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5l97h\" (UniqueName: \"kubernetes.io/projected/18910045-0daa-467f-826a-97114985e2d4-kube-api-access-5l97h\") pod \"18910045-0daa-467f-826a-97114985e2d4\" (UID: \"18910045-0daa-467f-826a-97114985e2d4\") " Nov 24 06:47:00.317137 kubelet[2745]: I1124 06:47:00.316260 2745 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18910045-0daa-467f-826a-97114985e2d4-whisker-ca-bundle\") pod \"18910045-0daa-467f-826a-97114985e2d4\" (UID: \"18910045-0daa-467f-826a-97114985e2d4\") " Nov 24 06:47:00.317137 kubelet[2745]: I1124 06:47:00.317095 2745 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18910045-0daa-467f-826a-97114985e2d4-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "18910045-0daa-467f-826a-97114985e2d4" (UID: "18910045-0daa-467f-826a-97114985e2d4"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 24 06:47:00.322332 systemd[1]: var-lib-kubelet-pods-18910045\x2d0daa\x2d467f\x2d826a\x2d97114985e2d4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5l97h.mount: Deactivated successfully. Nov 24 06:47:00.322441 systemd[1]: var-lib-kubelet-pods-18910045\x2d0daa\x2d467f\x2d826a\x2d97114985e2d4-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 24 06:47:00.325315 kubelet[2745]: I1124 06:47:00.323856 2745 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18910045-0daa-467f-826a-97114985e2d4-kube-api-access-5l97h" (OuterVolumeSpecName: "kube-api-access-5l97h") pod "18910045-0daa-467f-826a-97114985e2d4" (UID: "18910045-0daa-467f-826a-97114985e2d4"). InnerVolumeSpecName "kube-api-access-5l97h". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 24 06:47:00.325410 kubelet[2745]: I1124 06:47:00.325394 2745 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18910045-0daa-467f-826a-97114985e2d4-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "18910045-0daa-467f-826a-97114985e2d4" (UID: "18910045-0daa-467f-826a-97114985e2d4"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 24 06:47:00.419004 kubelet[2745]: I1124 06:47:00.417163 2745 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18910045-0daa-467f-826a-97114985e2d4-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Nov 24 06:47:00.419004 kubelet[2745]: I1124 06:47:00.417193 2745 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/18910045-0daa-467f-826a-97114985e2d4-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Nov 24 06:47:00.419004 kubelet[2745]: I1124 06:47:00.417202 2745 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5l97h\" (UniqueName: \"kubernetes.io/projected/18910045-0daa-467f-826a-97114985e2d4-kube-api-access-5l97h\") on node \"localhost\" DevicePath \"\"" Nov 24 06:47:00.417459 systemd[1]: Removed slice kubepods-besteffort-pod18910045_0daa_467f_826a_97114985e2d4.slice - libcontainer container kubepods-besteffort-pod18910045_0daa_467f_826a_97114985e2d4.slice. Nov 24 06:47:00.423532 kubelet[2745]: I1124 06:47:00.423460 2745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-rl87q" podStartSLOduration=1.756772122 podStartE2EDuration="17.42344578s" podCreationTimestamp="2025-11-24 06:46:43 +0000 UTC" firstStartedPulling="2025-11-24 06:46:44.196401281 +0000 UTC m=+17.009040079" lastFinishedPulling="2025-11-24 06:46:59.863074939 +0000 UTC m=+32.675713737" observedRunningTime="2025-11-24 06:47:00.423261252 +0000 UTC m=+33.235900050" watchObservedRunningTime="2025-11-24 06:47:00.42344578 +0000 UTC m=+33.236084578" Nov 24 06:47:00.473297 systemd[1]: Created slice kubepods-besteffort-poda5bd3df9_185f_4936_8d04_99b968d43986.slice - libcontainer container kubepods-besteffort-poda5bd3df9_185f_4936_8d04_99b968d43986.slice. 
Nov 24 06:47:00.517601 kubelet[2745]: I1124 06:47:00.517559 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a5bd3df9-185f-4936-8d04-99b968d43986-whisker-ca-bundle\") pod \"whisker-7d94bc78c9-4zg8l\" (UID: \"a5bd3df9-185f-4936-8d04-99b968d43986\") " pod="calico-system/whisker-7d94bc78c9-4zg8l" Nov 24 06:47:00.517601 kubelet[2745]: I1124 06:47:00.517614 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a5bd3df9-185f-4936-8d04-99b968d43986-whisker-backend-key-pair\") pod \"whisker-7d94bc78c9-4zg8l\" (UID: \"a5bd3df9-185f-4936-8d04-99b968d43986\") " pod="calico-system/whisker-7d94bc78c9-4zg8l" Nov 24 06:47:00.517788 kubelet[2745]: I1124 06:47:00.517634 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfhpc\" (UniqueName: \"kubernetes.io/projected/a5bd3df9-185f-4936-8d04-99b968d43986-kube-api-access-dfhpc\") pod \"whisker-7d94bc78c9-4zg8l\" (UID: \"a5bd3df9-185f-4936-8d04-99b968d43986\") " pod="calico-system/whisker-7d94bc78c9-4zg8l" Nov 24 06:47:00.776556 containerd[1580]: time="2025-11-24T06:47:00.776510758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7d94bc78c9-4zg8l,Uid:a5bd3df9-185f-4936-8d04-99b968d43986,Namespace:calico-system,Attempt:0,}" Nov 24 06:47:00.907142 systemd-networkd[1478]: cali15482f58d5d: Link UP Nov 24 06:47:00.909496 systemd-networkd[1478]: cali15482f58d5d: Gained carrier Nov 24 06:47:00.923580 containerd[1580]: 2025-11-24 06:47:00.799 [INFO][3892] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 24 06:47:00.923580 containerd[1580]: 2025-11-24 06:47:00.815 [INFO][3892] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--7d94bc78c9--4zg8l-eth0 whisker-7d94bc78c9- calico-system a5bd3df9-185f-4936-8d04-99b968d43986 870 0 2025-11-24 06:47:00 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7d94bc78c9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-7d94bc78c9-4zg8l eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali15482f58d5d [] [] }} ContainerID="7e877b197e8f07eb06f29aa374f816c7137a3256dd9f4ec5631119110d09a160" Namespace="calico-system" Pod="whisker-7d94bc78c9-4zg8l" WorkloadEndpoint="localhost-k8s-whisker--7d94bc78c9--4zg8l-" Nov 24 06:47:00.923580 containerd[1580]: 2025-11-24 06:47:00.815 [INFO][3892] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7e877b197e8f07eb06f29aa374f816c7137a3256dd9f4ec5631119110d09a160" Namespace="calico-system" Pod="whisker-7d94bc78c9-4zg8l" WorkloadEndpoint="localhost-k8s-whisker--7d94bc78c9--4zg8l-eth0" Nov 24 06:47:00.923580 containerd[1580]: 2025-11-24 06:47:00.871 [INFO][3906] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7e877b197e8f07eb06f29aa374f816c7137a3256dd9f4ec5631119110d09a160" HandleID="k8s-pod-network.7e877b197e8f07eb06f29aa374f816c7137a3256dd9f4ec5631119110d09a160" Workload="localhost-k8s-whisker--7d94bc78c9--4zg8l-eth0" Nov 24 06:47:00.924318 containerd[1580]: 2025-11-24 06:47:00.871 [INFO][3906] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="7e877b197e8f07eb06f29aa374f816c7137a3256dd9f4ec5631119110d09a160" HandleID="k8s-pod-network.7e877b197e8f07eb06f29aa374f816c7137a3256dd9f4ec5631119110d09a160" Workload="localhost-k8s-whisker--7d94bc78c9--4zg8l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00019e3d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-7d94bc78c9-4zg8l", "timestamp":"2025-11-24 06:47:00.871519145 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 06:47:00.924318 containerd[1580]: 2025-11-24 06:47:00.871 [INFO][3906] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 06:47:00.924318 containerd[1580]: 2025-11-24 06:47:00.872 [INFO][3906] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 24 06:47:00.924318 containerd[1580]: 2025-11-24 06:47:00.872 [INFO][3906] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 24 06:47:00.924318 containerd[1580]: 2025-11-24 06:47:00.878 [INFO][3906] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7e877b197e8f07eb06f29aa374f816c7137a3256dd9f4ec5631119110d09a160" host="localhost" Nov 24 06:47:00.924318 containerd[1580]: 2025-11-24 06:47:00.882 [INFO][3906] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 24 06:47:00.924318 containerd[1580]: 2025-11-24 06:47:00.885 [INFO][3906] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 24 06:47:00.924318 containerd[1580]: 2025-11-24 06:47:00.886 [INFO][3906] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 24 06:47:00.924318 containerd[1580]: 2025-11-24 06:47:00.888 [INFO][3906] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 24 06:47:00.924318 containerd[1580]: 2025-11-24 06:47:00.888 [INFO][3906] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7e877b197e8f07eb06f29aa374f816c7137a3256dd9f4ec5631119110d09a160" host="localhost" Nov 24 06:47:00.924616 containerd[1580]: 2025-11-24 06:47:00.889 [INFO][3906] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7e877b197e8f07eb06f29aa374f816c7137a3256dd9f4ec5631119110d09a160 Nov 24 06:47:00.924616 containerd[1580]: 2025-11-24 06:47:00.892 [INFO][3906] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7e877b197e8f07eb06f29aa374f816c7137a3256dd9f4ec5631119110d09a160" host="localhost" Nov 24 06:47:00.924616 containerd[1580]: 2025-11-24 06:47:00.897 [INFO][3906] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.7e877b197e8f07eb06f29aa374f816c7137a3256dd9f4ec5631119110d09a160" host="localhost" Nov 24 06:47:00.924616 containerd[1580]: 2025-11-24 06:47:00.897 [INFO][3906] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.7e877b197e8f07eb06f29aa374f816c7137a3256dd9f4ec5631119110d09a160" host="localhost" Nov 24 06:47:00.924616 containerd[1580]: 2025-11-24 06:47:00.897 [INFO][3906] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 24 06:47:00.924616 containerd[1580]: 2025-11-24 06:47:00.897 [INFO][3906] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="7e877b197e8f07eb06f29aa374f816c7137a3256dd9f4ec5631119110d09a160" HandleID="k8s-pod-network.7e877b197e8f07eb06f29aa374f816c7137a3256dd9f4ec5631119110d09a160" Workload="localhost-k8s-whisker--7d94bc78c9--4zg8l-eth0" Nov 24 06:47:00.924742 containerd[1580]: 2025-11-24 06:47:00.900 [INFO][3892] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7e877b197e8f07eb06f29aa374f816c7137a3256dd9f4ec5631119110d09a160" Namespace="calico-system" Pod="whisker-7d94bc78c9-4zg8l" WorkloadEndpoint="localhost-k8s-whisker--7d94bc78c9--4zg8l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7d94bc78c9--4zg8l-eth0", GenerateName:"whisker-7d94bc78c9-", Namespace:"calico-system", SelfLink:"", UID:"a5bd3df9-185f-4936-8d04-99b968d43986", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 6, 47, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7d94bc78c9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-7d94bc78c9-4zg8l", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali15482f58d5d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 06:47:00.924742 containerd[1580]: 2025-11-24 06:47:00.900 [INFO][3892] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="7e877b197e8f07eb06f29aa374f816c7137a3256dd9f4ec5631119110d09a160" Namespace="calico-system" Pod="whisker-7d94bc78c9-4zg8l" WorkloadEndpoint="localhost-k8s-whisker--7d94bc78c9--4zg8l-eth0" Nov 24 06:47:00.924813 containerd[1580]: 2025-11-24 06:47:00.900 [INFO][3892] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali15482f58d5d ContainerID="7e877b197e8f07eb06f29aa374f816c7137a3256dd9f4ec5631119110d09a160" Namespace="calico-system" Pod="whisker-7d94bc78c9-4zg8l" WorkloadEndpoint="localhost-k8s-whisker--7d94bc78c9--4zg8l-eth0" Nov 24 06:47:00.924813 containerd[1580]: 2025-11-24 06:47:00.910 [INFO][3892] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7e877b197e8f07eb06f29aa374f816c7137a3256dd9f4ec5631119110d09a160" Namespace="calico-system" Pod="whisker-7d94bc78c9-4zg8l" WorkloadEndpoint="localhost-k8s-whisker--7d94bc78c9--4zg8l-eth0" Nov 24 06:47:00.924860 containerd[1580]: 2025-11-24 06:47:00.910 [INFO][3892] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7e877b197e8f07eb06f29aa374f816c7137a3256dd9f4ec5631119110d09a160" Namespace="calico-system" Pod="whisker-7d94bc78c9-4zg8l" WorkloadEndpoint="localhost-k8s-whisker--7d94bc78c9--4zg8l-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7d94bc78c9--4zg8l-eth0", GenerateName:"whisker-7d94bc78c9-", Namespace:"calico-system", SelfLink:"", UID:"a5bd3df9-185f-4936-8d04-99b968d43986", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 6, 47, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7d94bc78c9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7e877b197e8f07eb06f29aa374f816c7137a3256dd9f4ec5631119110d09a160", Pod:"whisker-7d94bc78c9-4zg8l", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali15482f58d5d", MAC:"36:52:27:3b:94:8f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 06:47:00.924923 containerd[1580]: 2025-11-24 06:47:00.920 [INFO][3892] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7e877b197e8f07eb06f29aa374f816c7137a3256dd9f4ec5631119110d09a160" Namespace="calico-system" Pod="whisker-7d94bc78c9-4zg8l" WorkloadEndpoint="localhost-k8s-whisker--7d94bc78c9--4zg8l-eth0" Nov 24 06:47:01.051617 containerd[1580]: time="2025-11-24T06:47:01.050952056Z" level=info msg="connecting to shim 7e877b197e8f07eb06f29aa374f816c7137a3256dd9f4ec5631119110d09a160" address="unix:///run/containerd/s/6339faabc0add487db71b637598bacaae36b0c168aa83019e95bc108c32c9d86" namespace=k8s.io protocol=ttrpc version=3 Nov 24 06:47:01.087005 systemd[1]: Started cri-containerd-7e877b197e8f07eb06f29aa374f816c7137a3256dd9f4ec5631119110d09a160.scope - libcontainer container 7e877b197e8f07eb06f29aa374f816c7137a3256dd9f4ec5631119110d09a160. 
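
The IPAM trace above shows the usual Calico flow for the whisker pod: acquire the host-wide IPAM lock, confirm the node's block affinity for 192.168.88.128/26, claim 192.168.88.129/26 from that block, then write the WorkloadEndpoint (interface cali15482f58d5d, MAC 36:52:27:3b:94:8f) back to the datastore. A quick sanity check, purely illustrative, that the assigned address sits inside the affine /26 block and how many addresses such a block holds:

package main

import (
	"fmt"
	"net/netip"
)

// Sanity-checks the assignment recorded above: 192.168.88.129 is claimed out
// of the node's affine block 192.168.88.128/26. Illustrative only; this is
// not how Calico's IPAM performs the allocation.
func main() {
	block := netip.MustParsePrefix("192.168.88.128/26")
	assigned := netip.MustParseAddr("192.168.88.129")

	fmt.Println(block.Contains(assigned)) // true
	fmt.Println(1 << (32 - block.Bits())) // 64 addresses in a /26 block
}
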
Nov 24 06:47:01.100242 systemd-resolved[1393]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 24 06:47:01.133106 containerd[1580]: time="2025-11-24T06:47:01.133050828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7d94bc78c9-4zg8l,Uid:a5bd3df9-185f-4936-8d04-99b968d43986,Namespace:calico-system,Attempt:0,} returns sandbox id \"7e877b197e8f07eb06f29aa374f816c7137a3256dd9f4ec5631119110d09a160\"" Nov 24 06:47:01.134550 containerd[1580]: time="2025-11-24T06:47:01.134516956Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 24 06:47:01.284517 kubelet[2745]: I1124 06:47:01.284478 2745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18910045-0daa-467f-826a-97114985e2d4" path="/var/lib/kubelet/pods/18910045-0daa-467f-826a-97114985e2d4/volumes" Nov 24 06:47:01.507651 containerd[1580]: time="2025-11-24T06:47:01.507606276Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 06:47:01.584223 containerd[1580]: time="2025-11-24T06:47:01.584168770Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 24 06:47:01.589701 containerd[1580]: time="2025-11-24T06:47:01.589648071Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 24 06:47:01.589966 kubelet[2745]: E1124 06:47:01.589913 2745 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 06:47:01.589966 kubelet[2745]: E1124 06:47:01.589971 2745 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 06:47:01.592538 kubelet[2745]: E1124 06:47:01.592429 2745 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:99d2583b01ef4bfea6ba45fb6725d95b,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dfhpc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7d94bc78c9-4zg8l_calico-system(a5bd3df9-185f-4936-8d04-99b968d43986): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 24 06:47:01.594306 containerd[1580]: time="2025-11-24T06:47:01.594243678Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 24 06:47:01.900957 containerd[1580]: time="2025-11-24T06:47:01.900809627Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 06:47:01.901923 containerd[1580]: time="2025-11-24T06:47:01.901891853Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 24 06:47:01.901986 containerd[1580]: time="2025-11-24T06:47:01.901959020Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 24 06:47:01.902138 kubelet[2745]: E1124 06:47:01.902087 2745 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 06:47:01.902195 kubelet[2745]: E1124 06:47:01.902138 2745 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 06:47:01.902307 kubelet[2745]: E1124 06:47:01.902260 2745 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dfhpc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7d94bc78c9-4zg8l_calico-system(a5bd3df9-185f-4936-8d04-99b968d43986): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 24 06:47:01.903593 kubelet[2745]: E1124 06:47:01.903469 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7d94bc78c9-4zg8l" podUID="a5bd3df9-185f-4936-8d04-99b968d43986" Nov 24 06:47:02.374075 systemd-networkd[1478]: cali15482f58d5d: Gained IPv6LL Nov 24 06:47:02.415588 kubelet[2745]: E1124 06:47:02.415528 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7d94bc78c9-4zg8l" podUID="a5bd3df9-185f-4936-8d04-99b968d43986" Nov 24 06:47:03.282554 containerd[1580]: time="2025-11-24T06:47:03.282465868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cppmt,Uid:c75bf025-c8e1-47b4-a88c-b817a4677d22,Namespace:calico-system,Attempt:0,}" Nov 24 06:47:03.381652 systemd-networkd[1478]: cali402d79a1c44: Link UP Nov 24 06:47:03.381856 systemd-networkd[1478]: cali402d79a1c44: Gained carrier Nov 24 06:47:03.399197 containerd[1580]: 2025-11-24 06:47:03.309 [INFO][4099] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 24 06:47:03.399197 containerd[1580]: 2025-11-24 06:47:03.319 [INFO][4099] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--cppmt-eth0 csi-node-driver- calico-system c75bf025-c8e1-47b4-a88c-b817a4677d22 698 0 2025-11-24 06:46:43 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-cppmt eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali402d79a1c44 [] [] }} ContainerID="e9d41ad4a875a6a46e625760ca890a3770b7f12864ae8887dac02bbded2b0bcb" Namespace="calico-system" Pod="csi-node-driver-cppmt" WorkloadEndpoint="localhost-k8s-csi--node--driver--cppmt-" Nov 24 06:47:03.399197 containerd[1580]: 2025-11-24 06:47:03.319 [INFO][4099] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e9d41ad4a875a6a46e625760ca890a3770b7f12864ae8887dac02bbded2b0bcb" Namespace="calico-system" Pod="csi-node-driver-cppmt" WorkloadEndpoint="localhost-k8s-csi--node--driver--cppmt-eth0" Nov 24 06:47:03.399197 containerd[1580]: 2025-11-24 06:47:03.344 [INFO][4114] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e9d41ad4a875a6a46e625760ca890a3770b7f12864ae8887dac02bbded2b0bcb" HandleID="k8s-pod-network.e9d41ad4a875a6a46e625760ca890a3770b7f12864ae8887dac02bbded2b0bcb" Workload="localhost-k8s-csi--node--driver--cppmt-eth0" Nov 24 06:47:03.399427 containerd[1580]: 2025-11-24 06:47:03.344 [INFO][4114] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e9d41ad4a875a6a46e625760ca890a3770b7f12864ae8887dac02bbded2b0bcb" HandleID="k8s-pod-network.e9d41ad4a875a6a46e625760ca890a3770b7f12864ae8887dac02bbded2b0bcb" Workload="localhost-k8s-csi--node--driver--cppmt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f5c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", 
"pod":"csi-node-driver-cppmt", "timestamp":"2025-11-24 06:47:03.344100771 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 06:47:03.399427 containerd[1580]: 2025-11-24 06:47:03.344 [INFO][4114] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 06:47:03.399427 containerd[1580]: 2025-11-24 06:47:03.344 [INFO][4114] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 24 06:47:03.399427 containerd[1580]: 2025-11-24 06:47:03.344 [INFO][4114] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 24 06:47:03.399427 containerd[1580]: 2025-11-24 06:47:03.350 [INFO][4114] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e9d41ad4a875a6a46e625760ca890a3770b7f12864ae8887dac02bbded2b0bcb" host="localhost" Nov 24 06:47:03.399427 containerd[1580]: 2025-11-24 06:47:03.355 [INFO][4114] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 24 06:47:03.399427 containerd[1580]: 2025-11-24 06:47:03.359 [INFO][4114] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 24 06:47:03.399427 containerd[1580]: 2025-11-24 06:47:03.360 [INFO][4114] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 24 06:47:03.399427 containerd[1580]: 2025-11-24 06:47:03.362 [INFO][4114] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 24 06:47:03.399427 containerd[1580]: 2025-11-24 06:47:03.362 [INFO][4114] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e9d41ad4a875a6a46e625760ca890a3770b7f12864ae8887dac02bbded2b0bcb" host="localhost" Nov 24 06:47:03.399661 containerd[1580]: 2025-11-24 06:47:03.364 [INFO][4114] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e9d41ad4a875a6a46e625760ca890a3770b7f12864ae8887dac02bbded2b0bcb Nov 24 06:47:03.399661 containerd[1580]: 2025-11-24 06:47:03.368 [INFO][4114] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e9d41ad4a875a6a46e625760ca890a3770b7f12864ae8887dac02bbded2b0bcb" host="localhost" Nov 24 06:47:03.399661 containerd[1580]: 2025-11-24 06:47:03.372 [INFO][4114] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.e9d41ad4a875a6a46e625760ca890a3770b7f12864ae8887dac02bbded2b0bcb" host="localhost" Nov 24 06:47:03.399661 containerd[1580]: 2025-11-24 06:47:03.372 [INFO][4114] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.e9d41ad4a875a6a46e625760ca890a3770b7f12864ae8887dac02bbded2b0bcb" host="localhost" Nov 24 06:47:03.399661 containerd[1580]: 2025-11-24 06:47:03.372 [INFO][4114] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 24 06:47:03.399661 containerd[1580]: 2025-11-24 06:47:03.372 [INFO][4114] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="e9d41ad4a875a6a46e625760ca890a3770b7f12864ae8887dac02bbded2b0bcb" HandleID="k8s-pod-network.e9d41ad4a875a6a46e625760ca890a3770b7f12864ae8887dac02bbded2b0bcb" Workload="localhost-k8s-csi--node--driver--cppmt-eth0" Nov 24 06:47:03.399785 containerd[1580]: 2025-11-24 06:47:03.375 [INFO][4099] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e9d41ad4a875a6a46e625760ca890a3770b7f12864ae8887dac02bbded2b0bcb" Namespace="calico-system" Pod="csi-node-driver-cppmt" WorkloadEndpoint="localhost-k8s-csi--node--driver--cppmt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--cppmt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c75bf025-c8e1-47b4-a88c-b817a4677d22", ResourceVersion:"698", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 6, 46, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-cppmt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali402d79a1c44", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 06:47:03.399834 containerd[1580]: 2025-11-24 06:47:03.375 [INFO][4099] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="e9d41ad4a875a6a46e625760ca890a3770b7f12864ae8887dac02bbded2b0bcb" Namespace="calico-system" Pod="csi-node-driver-cppmt" WorkloadEndpoint="localhost-k8s-csi--node--driver--cppmt-eth0" Nov 24 06:47:03.399834 containerd[1580]: 2025-11-24 06:47:03.375 [INFO][4099] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali402d79a1c44 ContainerID="e9d41ad4a875a6a46e625760ca890a3770b7f12864ae8887dac02bbded2b0bcb" Namespace="calico-system" Pod="csi-node-driver-cppmt" WorkloadEndpoint="localhost-k8s-csi--node--driver--cppmt-eth0" Nov 24 06:47:03.399834 containerd[1580]: 2025-11-24 06:47:03.377 [INFO][4099] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e9d41ad4a875a6a46e625760ca890a3770b7f12864ae8887dac02bbded2b0bcb" Namespace="calico-system" Pod="csi-node-driver-cppmt" WorkloadEndpoint="localhost-k8s-csi--node--driver--cppmt-eth0" Nov 24 06:47:03.399920 containerd[1580]: 2025-11-24 06:47:03.378 [INFO][4099] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e9d41ad4a875a6a46e625760ca890a3770b7f12864ae8887dac02bbded2b0bcb" Namespace="calico-system" Pod="csi-node-driver-cppmt" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--cppmt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--cppmt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c75bf025-c8e1-47b4-a88c-b817a4677d22", ResourceVersion:"698", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 6, 46, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e9d41ad4a875a6a46e625760ca890a3770b7f12864ae8887dac02bbded2b0bcb", Pod:"csi-node-driver-cppmt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali402d79a1c44", MAC:"4a:3c:31:e1:97:61", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 06:47:03.399978 containerd[1580]: 2025-11-24 06:47:03.394 [INFO][4099] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e9d41ad4a875a6a46e625760ca890a3770b7f12864ae8887dac02bbded2b0bcb" Namespace="calico-system" Pod="csi-node-driver-cppmt" WorkloadEndpoint="localhost-k8s-csi--node--driver--cppmt-eth0" Nov 24 06:47:03.423029 containerd[1580]: time="2025-11-24T06:47:03.422972364Z" level=info msg="connecting to shim e9d41ad4a875a6a46e625760ca890a3770b7f12864ae8887dac02bbded2b0bcb" address="unix:///run/containerd/s/8e47fd664c355e2d721d1840ac8a38b92b19075e17fb3e80139df2692edf872d" namespace=k8s.io protocol=ttrpc version=3 Nov 24 06:47:03.450999 systemd[1]: Started cri-containerd-e9d41ad4a875a6a46e625760ca890a3770b7f12864ae8887dac02bbded2b0bcb.scope - libcontainer container e9d41ad4a875a6a46e625760ca890a3770b7f12864ae8887dac02bbded2b0bcb. 
Nov 24 06:47:03.463614 systemd-resolved[1393]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 24 06:47:03.479234 containerd[1580]: time="2025-11-24T06:47:03.479189145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cppmt,Uid:c75bf025-c8e1-47b4-a88c-b817a4677d22,Namespace:calico-system,Attempt:0,} returns sandbox id \"e9d41ad4a875a6a46e625760ca890a3770b7f12864ae8887dac02bbded2b0bcb\"" Nov 24 06:47:03.480625 containerd[1580]: time="2025-11-24T06:47:03.480601783Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 24 06:47:03.802573 containerd[1580]: time="2025-11-24T06:47:03.802524205Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 06:47:03.816818 containerd[1580]: time="2025-11-24T06:47:03.816769189Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 24 06:47:03.816818 containerd[1580]: time="2025-11-24T06:47:03.816800037Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 24 06:47:03.817940 kubelet[2745]: E1124 06:47:03.817896 2745 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 06:47:03.818249 kubelet[2745]: E1124 06:47:03.817946 2745 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 06:47:03.818926 kubelet[2745]: E1124 06:47:03.818867 2745 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xgvg5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-cppmt_calico-system(c75bf025-c8e1-47b4-a88c-b817a4677d22): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 24 06:47:03.820745 containerd[1580]: time="2025-11-24T06:47:03.820705973Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 24 06:47:04.148127 containerd[1580]: time="2025-11-24T06:47:04.147982812Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 06:47:04.282299 containerd[1580]: time="2025-11-24T06:47:04.282235995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67b7ddc5fb-jmhdl,Uid:7a2d00df-2419-4869-83bf-460d83fbab1e,Namespace:calico-apiserver,Attempt:0,}" Nov 24 06:47:04.384571 containerd[1580]: time="2025-11-24T06:47:04.384491986Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 24 06:47:04.385025 containerd[1580]: time="2025-11-24T06:47:04.384529267Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 24 06:47:04.385054 kubelet[2745]: E1124 06:47:04.384924 2745 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 06:47:04.385054 kubelet[2745]: E1124 06:47:04.384971 2745 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 06:47:04.385165 kubelet[2745]: E1124 06:47:04.385100 2745 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xgvg5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-cppmt_calico-system(c75bf025-c8e1-47b4-a88c-b817a4677d22): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 24 06:47:04.386341 kubelet[2745]: E1124 06:47:04.386287 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-cppmt" podUID="c75bf025-c8e1-47b4-a88c-b817a4677d22" Nov 24 06:47:04.418998 kubelet[2745]: E1124 06:47:04.418853 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-cppmt" podUID="c75bf025-c8e1-47b4-a88c-b817a4677d22" Nov 24 06:47:04.937963 systemd-networkd[1478]: cali402d79a1c44: Gained IPv6LL Nov 24 06:47:05.125036 systemd-networkd[1478]: cali40d46c604ab: Link UP Nov 24 06:47:05.126015 systemd-networkd[1478]: cali40d46c604ab: Gained carrier Nov 24 06:47:05.140087 containerd[1580]: 2025-11-24 06:47:04.942 [INFO][4224] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 24 06:47:05.140087 containerd[1580]: 2025-11-24 06:47:05.063 [INFO][4224] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--67b7ddc5fb--jmhdl-eth0 calico-apiserver-67b7ddc5fb- calico-apiserver 7a2d00df-2419-4869-83bf-460d83fbab1e 803 0 2025-11-24 06:46:39 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:67b7ddc5fb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-67b7ddc5fb-jmhdl eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali40d46c604ab [] [] }} ContainerID="f992e09139cbd5cdd07024a9240a03dbec28f092664ec4787e15d80a0830c0cb" Namespace="calico-apiserver" Pod="calico-apiserver-67b7ddc5fb-jmhdl" WorkloadEndpoint="localhost-k8s-calico--apiserver--67b7ddc5fb--jmhdl-" Nov 24 06:47:05.140087 containerd[1580]: 2025-11-24 06:47:05.063 [INFO][4224] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f992e09139cbd5cdd07024a9240a03dbec28f092664ec4787e15d80a0830c0cb" Namespace="calico-apiserver" Pod="calico-apiserver-67b7ddc5fb-jmhdl" WorkloadEndpoint="localhost-k8s-calico--apiserver--67b7ddc5fb--jmhdl-eth0" Nov 24 06:47:05.140087 containerd[1580]: 2025-11-24 06:47:05.090 [INFO][4240] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f992e09139cbd5cdd07024a9240a03dbec28f092664ec4787e15d80a0830c0cb" HandleID="k8s-pod-network.f992e09139cbd5cdd07024a9240a03dbec28f092664ec4787e15d80a0830c0cb" Workload="localhost-k8s-calico--apiserver--67b7ddc5fb--jmhdl-eth0" Nov 24 
06:47:05.140301 containerd[1580]: 2025-11-24 06:47:05.090 [INFO][4240] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f992e09139cbd5cdd07024a9240a03dbec28f092664ec4787e15d80a0830c0cb" HandleID="k8s-pod-network.f992e09139cbd5cdd07024a9240a03dbec28f092664ec4787e15d80a0830c0cb" Workload="localhost-k8s-calico--apiserver--67b7ddc5fb--jmhdl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f5c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-67b7ddc5fb-jmhdl", "timestamp":"2025-11-24 06:47:05.090561757 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 06:47:05.140301 containerd[1580]: 2025-11-24 06:47:05.090 [INFO][4240] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 06:47:05.140301 containerd[1580]: 2025-11-24 06:47:05.090 [INFO][4240] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 24 06:47:05.140301 containerd[1580]: 2025-11-24 06:47:05.090 [INFO][4240] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 24 06:47:05.140301 containerd[1580]: 2025-11-24 06:47:05.098 [INFO][4240] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f992e09139cbd5cdd07024a9240a03dbec28f092664ec4787e15d80a0830c0cb" host="localhost" Nov 24 06:47:05.140301 containerd[1580]: 2025-11-24 06:47:05.102 [INFO][4240] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 24 06:47:05.140301 containerd[1580]: 2025-11-24 06:47:05.105 [INFO][4240] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 24 06:47:05.140301 containerd[1580]: 2025-11-24 06:47:05.107 [INFO][4240] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 24 06:47:05.140301 containerd[1580]: 2025-11-24 06:47:05.109 [INFO][4240] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 24 06:47:05.140301 containerd[1580]: 2025-11-24 06:47:05.109 [INFO][4240] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f992e09139cbd5cdd07024a9240a03dbec28f092664ec4787e15d80a0830c0cb" host="localhost" Nov 24 06:47:05.140627 containerd[1580]: 2025-11-24 06:47:05.110 [INFO][4240] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f992e09139cbd5cdd07024a9240a03dbec28f092664ec4787e15d80a0830c0cb Nov 24 06:47:05.140627 containerd[1580]: 2025-11-24 06:47:05.114 [INFO][4240] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f992e09139cbd5cdd07024a9240a03dbec28f092664ec4787e15d80a0830c0cb" host="localhost" Nov 24 06:47:05.140627 containerd[1580]: 2025-11-24 06:47:05.119 [INFO][4240] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.f992e09139cbd5cdd07024a9240a03dbec28f092664ec4787e15d80a0830c0cb" host="localhost" Nov 24 06:47:05.140627 containerd[1580]: 2025-11-24 06:47:05.119 [INFO][4240] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.f992e09139cbd5cdd07024a9240a03dbec28f092664ec4787e15d80a0830c0cb" host="localhost" Nov 24 06:47:05.140627 containerd[1580]: 2025-11-24 06:47:05.119 [INFO][4240] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 24 06:47:05.140627 containerd[1580]: 2025-11-24 06:47:05.119 [INFO][4240] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="f992e09139cbd5cdd07024a9240a03dbec28f092664ec4787e15d80a0830c0cb" HandleID="k8s-pod-network.f992e09139cbd5cdd07024a9240a03dbec28f092664ec4787e15d80a0830c0cb" Workload="localhost-k8s-calico--apiserver--67b7ddc5fb--jmhdl-eth0" Nov 24 06:47:05.140741 containerd[1580]: 2025-11-24 06:47:05.122 [INFO][4224] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f992e09139cbd5cdd07024a9240a03dbec28f092664ec4787e15d80a0830c0cb" Namespace="calico-apiserver" Pod="calico-apiserver-67b7ddc5fb-jmhdl" WorkloadEndpoint="localhost-k8s-calico--apiserver--67b7ddc5fb--jmhdl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67b7ddc5fb--jmhdl-eth0", GenerateName:"calico-apiserver-67b7ddc5fb-", Namespace:"calico-apiserver", SelfLink:"", UID:"7a2d00df-2419-4869-83bf-460d83fbab1e", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 6, 46, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67b7ddc5fb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-67b7ddc5fb-jmhdl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali40d46c604ab", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 06:47:05.140796 containerd[1580]: 2025-11-24 06:47:05.123 [INFO][4224] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="f992e09139cbd5cdd07024a9240a03dbec28f092664ec4787e15d80a0830c0cb" Namespace="calico-apiserver" Pod="calico-apiserver-67b7ddc5fb-jmhdl" WorkloadEndpoint="localhost-k8s-calico--apiserver--67b7ddc5fb--jmhdl-eth0" Nov 24 06:47:05.140796 containerd[1580]: 2025-11-24 06:47:05.123 [INFO][4224] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali40d46c604ab ContainerID="f992e09139cbd5cdd07024a9240a03dbec28f092664ec4787e15d80a0830c0cb" Namespace="calico-apiserver" Pod="calico-apiserver-67b7ddc5fb-jmhdl" WorkloadEndpoint="localhost-k8s-calico--apiserver--67b7ddc5fb--jmhdl-eth0" Nov 24 06:47:05.140796 containerd[1580]: 2025-11-24 06:47:05.125 [INFO][4224] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f992e09139cbd5cdd07024a9240a03dbec28f092664ec4787e15d80a0830c0cb" Namespace="calico-apiserver" Pod="calico-apiserver-67b7ddc5fb-jmhdl" WorkloadEndpoint="localhost-k8s-calico--apiserver--67b7ddc5fb--jmhdl-eth0" Nov 24 06:47:05.140865 containerd[1580]: 2025-11-24 06:47:05.125 [INFO][4224] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="f992e09139cbd5cdd07024a9240a03dbec28f092664ec4787e15d80a0830c0cb" Namespace="calico-apiserver" Pod="calico-apiserver-67b7ddc5fb-jmhdl" WorkloadEndpoint="localhost-k8s-calico--apiserver--67b7ddc5fb--jmhdl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67b7ddc5fb--jmhdl-eth0", GenerateName:"calico-apiserver-67b7ddc5fb-", Namespace:"calico-apiserver", SelfLink:"", UID:"7a2d00df-2419-4869-83bf-460d83fbab1e", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 6, 46, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67b7ddc5fb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f992e09139cbd5cdd07024a9240a03dbec28f092664ec4787e15d80a0830c0cb", Pod:"calico-apiserver-67b7ddc5fb-jmhdl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali40d46c604ab", MAC:"3a:71:71:ef:c8:65", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 06:47:05.140947 containerd[1580]: 2025-11-24 06:47:05.136 [INFO][4224] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f992e09139cbd5cdd07024a9240a03dbec28f092664ec4787e15d80a0830c0cb" Namespace="calico-apiserver" Pod="calico-apiserver-67b7ddc5fb-jmhdl" WorkloadEndpoint="localhost-k8s-calico--apiserver--67b7ddc5fb--jmhdl-eth0" Nov 24 06:47:05.281909 containerd[1580]: time="2025-11-24T06:47:05.281853942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rmvgx,Uid:b86734dd-286f-4394-a7c7-7b3dc56956f1,Namespace:kube-system,Attempt:0,}" Nov 24 06:47:05.282159 containerd[1580]: time="2025-11-24T06:47:05.281858130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67b7ddc5fb-4vc4t,Uid:1302b06d-b9df-431a-8827-afe67da7f5a6,Namespace:calico-apiserver,Attempt:0,}" Nov 24 06:47:05.419953 kubelet[2745]: E1124 06:47:05.419907 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-cppmt" podUID="c75bf025-c8e1-47b4-a88c-b817a4677d22" Nov 24 06:47:06.041061 containerd[1580]: time="2025-11-24T06:47:06.041006302Z" level=info msg="connecting to shim f992e09139cbd5cdd07024a9240a03dbec28f092664ec4787e15d80a0830c0cb" address="unix:///run/containerd/s/6d98bd9ecd49761260bce8819e2f3bc4d1913cb51cf389b75cee159f409d22cd" namespace=k8s.io protocol=ttrpc version=3 Nov 24 06:47:06.080566 systemd[1]: Started cri-containerd-f992e09139cbd5cdd07024a9240a03dbec28f092664ec4787e15d80a0830c0cb.scope - libcontainer container f992e09139cbd5cdd07024a9240a03dbec28f092664ec4787e15d80a0830c0cb. Nov 24 06:47:06.114262 systemd-resolved[1393]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 24 06:47:06.141311 systemd-networkd[1478]: cali14ca8ddc397: Link UP Nov 24 06:47:06.147998 systemd-networkd[1478]: cali14ca8ddc397: Gained carrier Nov 24 06:47:06.185617 containerd[1580]: 2025-11-24 06:47:06.030 [INFO][4275] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 24 06:47:06.185617 containerd[1580]: 2025-11-24 06:47:06.048 [INFO][4275] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--rmvgx-eth0 coredns-674b8bbfcf- kube-system b86734dd-286f-4394-a7c7-7b3dc56956f1 799 0 2025-11-24 06:46:32 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-rmvgx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali14ca8ddc397 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="3c8d8a2b54e2e2b6bee5f4108d70564b0f77724d7c18a20920c83a316b6e694e" Namespace="kube-system" Pod="coredns-674b8bbfcf-rmvgx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--rmvgx-" Nov 24 06:47:06.185617 containerd[1580]: 2025-11-24 06:47:06.048 [INFO][4275] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3c8d8a2b54e2e2b6bee5f4108d70564b0f77724d7c18a20920c83a316b6e694e" Namespace="kube-system" Pod="coredns-674b8bbfcf-rmvgx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--rmvgx-eth0" Nov 24 06:47:06.185617 containerd[1580]: 2025-11-24 06:47:06.089 [INFO][4339] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3c8d8a2b54e2e2b6bee5f4108d70564b0f77724d7c18a20920c83a316b6e694e" HandleID="k8s-pod-network.3c8d8a2b54e2e2b6bee5f4108d70564b0f77724d7c18a20920c83a316b6e694e" Workload="localhost-k8s-coredns--674b8bbfcf--rmvgx-eth0" Nov 24 06:47:06.185855 containerd[1580]: 2025-11-24 06:47:06.089 [INFO][4339] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3c8d8a2b54e2e2b6bee5f4108d70564b0f77724d7c18a20920c83a316b6e694e" HandleID="k8s-pod-network.3c8d8a2b54e2e2b6bee5f4108d70564b0f77724d7c18a20920c83a316b6e694e" Workload="localhost-k8s-coredns--674b8bbfcf--rmvgx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e7590), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-rmvgx", "timestamp":"2025-11-24 06:47:06.089561181 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 06:47:06.185855 containerd[1580]: 2025-11-24 06:47:06.090 [INFO][4339] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 06:47:06.185855 containerd[1580]: 2025-11-24 06:47:06.090 [INFO][4339] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 24 06:47:06.185855 containerd[1580]: 2025-11-24 06:47:06.090 [INFO][4339] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 24 06:47:06.185855 containerd[1580]: 2025-11-24 06:47:06.096 [INFO][4339] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3c8d8a2b54e2e2b6bee5f4108d70564b0f77724d7c18a20920c83a316b6e694e" host="localhost" Nov 24 06:47:06.185855 containerd[1580]: 2025-11-24 06:47:06.103 [INFO][4339] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 24 06:47:06.185855 containerd[1580]: 2025-11-24 06:47:06.106 [INFO][4339] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 24 06:47:06.185855 containerd[1580]: 2025-11-24 06:47:06.108 [INFO][4339] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 24 06:47:06.185855 containerd[1580]: 2025-11-24 06:47:06.110 [INFO][4339] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 24 06:47:06.185855 containerd[1580]: 2025-11-24 06:47:06.110 [INFO][4339] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3c8d8a2b54e2e2b6bee5f4108d70564b0f77724d7c18a20920c83a316b6e694e" host="localhost" Nov 24 06:47:06.186107 containerd[1580]: 2025-11-24 06:47:06.113 [INFO][4339] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3c8d8a2b54e2e2b6bee5f4108d70564b0f77724d7c18a20920c83a316b6e694e Nov 24 06:47:06.186107 containerd[1580]: 2025-11-24 06:47:06.117 [INFO][4339] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3c8d8a2b54e2e2b6bee5f4108d70564b0f77724d7c18a20920c83a316b6e694e" host="localhost" Nov 24 06:47:06.186107 containerd[1580]: 2025-11-24 06:47:06.123 [INFO][4339] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.3c8d8a2b54e2e2b6bee5f4108d70564b0f77724d7c18a20920c83a316b6e694e" host="localhost" Nov 24 06:47:06.186107 containerd[1580]: 2025-11-24 06:47:06.123 [INFO][4339] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.3c8d8a2b54e2e2b6bee5f4108d70564b0f77724d7c18a20920c83a316b6e694e" host="localhost" Nov 24 06:47:06.186107 containerd[1580]: 2025-11-24 06:47:06.123 [INFO][4339] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 24 06:47:06.186107 containerd[1580]: 2025-11-24 06:47:06.123 [INFO][4339] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="3c8d8a2b54e2e2b6bee5f4108d70564b0f77724d7c18a20920c83a316b6e694e" HandleID="k8s-pod-network.3c8d8a2b54e2e2b6bee5f4108d70564b0f77724d7c18a20920c83a316b6e694e" Workload="localhost-k8s-coredns--674b8bbfcf--rmvgx-eth0" Nov 24 06:47:06.186221 containerd[1580]: 2025-11-24 06:47:06.127 [INFO][4275] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3c8d8a2b54e2e2b6bee5f4108d70564b0f77724d7c18a20920c83a316b6e694e" Namespace="kube-system" Pod="coredns-674b8bbfcf-rmvgx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--rmvgx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--rmvgx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b86734dd-286f-4394-a7c7-7b3dc56956f1", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 6, 46, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-rmvgx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali14ca8ddc397", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 06:47:06.186282 containerd[1580]: 2025-11-24 06:47:06.127 [INFO][4275] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="3c8d8a2b54e2e2b6bee5f4108d70564b0f77724d7c18a20920c83a316b6e694e" Namespace="kube-system" Pod="coredns-674b8bbfcf-rmvgx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--rmvgx-eth0" Nov 24 06:47:06.186282 containerd[1580]: 2025-11-24 06:47:06.127 [INFO][4275] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali14ca8ddc397 ContainerID="3c8d8a2b54e2e2b6bee5f4108d70564b0f77724d7c18a20920c83a316b6e694e" Namespace="kube-system" Pod="coredns-674b8bbfcf-rmvgx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--rmvgx-eth0" Nov 24 06:47:06.186282 containerd[1580]: 2025-11-24 06:47:06.151 [INFO][4275] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3c8d8a2b54e2e2b6bee5f4108d70564b0f77724d7c18a20920c83a316b6e694e" Namespace="kube-system" Pod="coredns-674b8bbfcf-rmvgx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--rmvgx-eth0" Nov 24 06:47:06.186354 
containerd[1580]: 2025-11-24 06:47:06.154 [INFO][4275] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3c8d8a2b54e2e2b6bee5f4108d70564b0f77724d7c18a20920c83a316b6e694e" Namespace="kube-system" Pod="coredns-674b8bbfcf-rmvgx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--rmvgx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--rmvgx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b86734dd-286f-4394-a7c7-7b3dc56956f1", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 6, 46, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3c8d8a2b54e2e2b6bee5f4108d70564b0f77724d7c18a20920c83a316b6e694e", Pod:"coredns-674b8bbfcf-rmvgx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali14ca8ddc397", MAC:"96:7e:33:16:9c:b2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 06:47:06.186354 containerd[1580]: 2025-11-24 06:47:06.175 [INFO][4275] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3c8d8a2b54e2e2b6bee5f4108d70564b0f77724d7c18a20920c83a316b6e694e" Namespace="kube-system" Pod="coredns-674b8bbfcf-rmvgx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--rmvgx-eth0" Nov 24 06:47:06.189828 containerd[1580]: time="2025-11-24T06:47:06.189794160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67b7ddc5fb-jmhdl,Uid:7a2d00df-2419-4869-83bf-460d83fbab1e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"f992e09139cbd5cdd07024a9240a03dbec28f092664ec4787e15d80a0830c0cb\"" Nov 24 06:47:06.192022 containerd[1580]: time="2025-11-24T06:47:06.192004636Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 06:47:06.216934 containerd[1580]: time="2025-11-24T06:47:06.216855914Z" level=info msg="connecting to shim 3c8d8a2b54e2e2b6bee5f4108d70564b0f77724d7c18a20920c83a316b6e694e" address="unix:///run/containerd/s/6d3f33cb096bcd4fbf7e28a580d5c7577641f49308e129a5242b6b9e41b2650c" namespace=k8s.io protocol=ttrpc version=3 Nov 24 06:47:06.230961 systemd-networkd[1478]: cali81be7e7d580: Link UP Nov 24 06:47:06.231556 systemd-networkd[1478]: cali81be7e7d580: Gained carrier Nov 24 06:47:06.248216 containerd[1580]: 2025-11-24 06:47:06.025 
[INFO][4282] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 24 06:47:06.248216 containerd[1580]: 2025-11-24 06:47:06.041 [INFO][4282] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--67b7ddc5fb--4vc4t-eth0 calico-apiserver-67b7ddc5fb- calico-apiserver 1302b06d-b9df-431a-8827-afe67da7f5a6 802 0 2025-11-24 06:46:39 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:67b7ddc5fb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-67b7ddc5fb-4vc4t eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali81be7e7d580 [] [] }} ContainerID="b083f7e596eeb1ed448190263f52479242922c84f2fdffa9bed135cddfd201ae" Namespace="calico-apiserver" Pod="calico-apiserver-67b7ddc5fb-4vc4t" WorkloadEndpoint="localhost-k8s-calico--apiserver--67b7ddc5fb--4vc4t-" Nov 24 06:47:06.248216 containerd[1580]: 2025-11-24 06:47:06.042 [INFO][4282] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b083f7e596eeb1ed448190263f52479242922c84f2fdffa9bed135cddfd201ae" Namespace="calico-apiserver" Pod="calico-apiserver-67b7ddc5fb-4vc4t" WorkloadEndpoint="localhost-k8s-calico--apiserver--67b7ddc5fb--4vc4t-eth0" Nov 24 06:47:06.248216 containerd[1580]: 2025-11-24 06:47:06.092 [INFO][4328] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b083f7e596eeb1ed448190263f52479242922c84f2fdffa9bed135cddfd201ae" HandleID="k8s-pod-network.b083f7e596eeb1ed448190263f52479242922c84f2fdffa9bed135cddfd201ae" Workload="localhost-k8s-calico--apiserver--67b7ddc5fb--4vc4t-eth0" Nov 24 06:47:06.248216 containerd[1580]: 2025-11-24 06:47:06.092 [INFO][4328] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b083f7e596eeb1ed448190263f52479242922c84f2fdffa9bed135cddfd201ae" HandleID="k8s-pod-network.b083f7e596eeb1ed448190263f52479242922c84f2fdffa9bed135cddfd201ae" Workload="localhost-k8s-calico--apiserver--67b7ddc5fb--4vc4t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fbc0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-67b7ddc5fb-4vc4t", "timestamp":"2025-11-24 06:47:06.092537346 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 06:47:06.248216 containerd[1580]: 2025-11-24 06:47:06.093 [INFO][4328] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 06:47:06.248216 containerd[1580]: 2025-11-24 06:47:06.123 [INFO][4328] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 24 06:47:06.248216 containerd[1580]: 2025-11-24 06:47:06.124 [INFO][4328] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 24 06:47:06.248216 containerd[1580]: 2025-11-24 06:47:06.196 [INFO][4328] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b083f7e596eeb1ed448190263f52479242922c84f2fdffa9bed135cddfd201ae" host="localhost" Nov 24 06:47:06.248216 containerd[1580]: 2025-11-24 06:47:06.205 [INFO][4328] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 24 06:47:06.248216 containerd[1580]: 2025-11-24 06:47:06.209 [INFO][4328] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 24 06:47:06.248216 containerd[1580]: 2025-11-24 06:47:06.211 [INFO][4328] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 24 06:47:06.248216 containerd[1580]: 2025-11-24 06:47:06.212 [INFO][4328] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 24 06:47:06.248216 containerd[1580]: 2025-11-24 06:47:06.212 [INFO][4328] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b083f7e596eeb1ed448190263f52479242922c84f2fdffa9bed135cddfd201ae" host="localhost" Nov 24 06:47:06.248216 containerd[1580]: 2025-11-24 06:47:06.214 [INFO][4328] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b083f7e596eeb1ed448190263f52479242922c84f2fdffa9bed135cddfd201ae Nov 24 06:47:06.248216 containerd[1580]: 2025-11-24 06:47:06.218 [INFO][4328] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b083f7e596eeb1ed448190263f52479242922c84f2fdffa9bed135cddfd201ae" host="localhost" Nov 24 06:47:06.248216 containerd[1580]: 2025-11-24 06:47:06.223 [INFO][4328] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.b083f7e596eeb1ed448190263f52479242922c84f2fdffa9bed135cddfd201ae" host="localhost" Nov 24 06:47:06.248216 containerd[1580]: 2025-11-24 06:47:06.224 [INFO][4328] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.b083f7e596eeb1ed448190263f52479242922c84f2fdffa9bed135cddfd201ae" host="localhost" Nov 24 06:47:06.248216 containerd[1580]: 2025-11-24 06:47:06.224 [INFO][4328] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 24 06:47:06.248216 containerd[1580]: 2025-11-24 06:47:06.224 [INFO][4328] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="b083f7e596eeb1ed448190263f52479242922c84f2fdffa9bed135cddfd201ae" HandleID="k8s-pod-network.b083f7e596eeb1ed448190263f52479242922c84f2fdffa9bed135cddfd201ae" Workload="localhost-k8s-calico--apiserver--67b7ddc5fb--4vc4t-eth0" Nov 24 06:47:06.248743 containerd[1580]: 2025-11-24 06:47:06.227 [INFO][4282] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b083f7e596eeb1ed448190263f52479242922c84f2fdffa9bed135cddfd201ae" Namespace="calico-apiserver" Pod="calico-apiserver-67b7ddc5fb-4vc4t" WorkloadEndpoint="localhost-k8s-calico--apiserver--67b7ddc5fb--4vc4t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67b7ddc5fb--4vc4t-eth0", GenerateName:"calico-apiserver-67b7ddc5fb-", Namespace:"calico-apiserver", SelfLink:"", UID:"1302b06d-b9df-431a-8827-afe67da7f5a6", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 6, 46, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67b7ddc5fb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-67b7ddc5fb-4vc4t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali81be7e7d580", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 06:47:06.248743 containerd[1580]: 2025-11-24 06:47:06.227 [INFO][4282] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="b083f7e596eeb1ed448190263f52479242922c84f2fdffa9bed135cddfd201ae" Namespace="calico-apiserver" Pod="calico-apiserver-67b7ddc5fb-4vc4t" WorkloadEndpoint="localhost-k8s-calico--apiserver--67b7ddc5fb--4vc4t-eth0" Nov 24 06:47:06.248743 containerd[1580]: 2025-11-24 06:47:06.227 [INFO][4282] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali81be7e7d580 ContainerID="b083f7e596eeb1ed448190263f52479242922c84f2fdffa9bed135cddfd201ae" Namespace="calico-apiserver" Pod="calico-apiserver-67b7ddc5fb-4vc4t" WorkloadEndpoint="localhost-k8s-calico--apiserver--67b7ddc5fb--4vc4t-eth0" Nov 24 06:47:06.248743 containerd[1580]: 2025-11-24 06:47:06.231 [INFO][4282] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b083f7e596eeb1ed448190263f52479242922c84f2fdffa9bed135cddfd201ae" Namespace="calico-apiserver" Pod="calico-apiserver-67b7ddc5fb-4vc4t" WorkloadEndpoint="localhost-k8s-calico--apiserver--67b7ddc5fb--4vc4t-eth0" Nov 24 06:47:06.248743 containerd[1580]: 2025-11-24 06:47:06.232 [INFO][4282] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="b083f7e596eeb1ed448190263f52479242922c84f2fdffa9bed135cddfd201ae" Namespace="calico-apiserver" Pod="calico-apiserver-67b7ddc5fb-4vc4t" WorkloadEndpoint="localhost-k8s-calico--apiserver--67b7ddc5fb--4vc4t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67b7ddc5fb--4vc4t-eth0", GenerateName:"calico-apiserver-67b7ddc5fb-", Namespace:"calico-apiserver", SelfLink:"", UID:"1302b06d-b9df-431a-8827-afe67da7f5a6", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 6, 46, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67b7ddc5fb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b083f7e596eeb1ed448190263f52479242922c84f2fdffa9bed135cddfd201ae", Pod:"calico-apiserver-67b7ddc5fb-4vc4t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali81be7e7d580", MAC:"7a:54:ac:82:42:1b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 06:47:06.248743 containerd[1580]: 2025-11-24 06:47:06.241 [INFO][4282] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b083f7e596eeb1ed448190263f52479242922c84f2fdffa9bed135cddfd201ae" Namespace="calico-apiserver" Pod="calico-apiserver-67b7ddc5fb-4vc4t" WorkloadEndpoint="localhost-k8s-calico--apiserver--67b7ddc5fb--4vc4t-eth0" Nov 24 06:47:06.254002 systemd[1]: Started cri-containerd-3c8d8a2b54e2e2b6bee5f4108d70564b0f77724d7c18a20920c83a316b6e694e.scope - libcontainer container 3c8d8a2b54e2e2b6bee5f4108d70564b0f77724d7c18a20920c83a316b6e694e. Nov 24 06:47:06.267927 systemd-resolved[1393]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 24 06:47:06.272889 containerd[1580]: time="2025-11-24T06:47:06.272839387Z" level=info msg="connecting to shim b083f7e596eeb1ed448190263f52479242922c84f2fdffa9bed135cddfd201ae" address="unix:///run/containerd/s/753b8b46ecdde8803a1ec40218aea020d40a6af6b2f68471a13f5788da3b78f4" namespace=k8s.io protocol=ttrpc version=3 Nov 24 06:47:06.283350 containerd[1580]: time="2025-11-24T06:47:06.283301205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b4bdcc677-xn7rm,Uid:177b7bf1-bf8e-4661-9261-5e6527071df2,Namespace:calico-system,Attempt:0,}" Nov 24 06:47:06.301123 systemd[1]: Started cri-containerd-b083f7e596eeb1ed448190263f52479242922c84f2fdffa9bed135cddfd201ae.scope - libcontainer container b083f7e596eeb1ed448190263f52479242922c84f2fdffa9bed135cddfd201ae. 
Nov 24 06:47:06.315068 containerd[1580]: time="2025-11-24T06:47:06.315024591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rmvgx,Uid:b86734dd-286f-4394-a7c7-7b3dc56956f1,Namespace:kube-system,Attempt:0,} returns sandbox id \"3c8d8a2b54e2e2b6bee5f4108d70564b0f77724d7c18a20920c83a316b6e694e\"" Nov 24 06:47:06.320513 systemd-resolved[1393]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 24 06:47:06.320953 containerd[1580]: time="2025-11-24T06:47:06.320921807Z" level=info msg="CreateContainer within sandbox \"3c8d8a2b54e2e2b6bee5f4108d70564b0f77724d7c18a20920c83a316b6e694e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 24 06:47:06.334405 containerd[1580]: time="2025-11-24T06:47:06.334352476Z" level=info msg="Container 2a812de2dc29b1d29257eb833d22795b409e8e5ce16b07ebdfc7cd8df8bbb8dc: CDI devices from CRI Config.CDIDevices: []" Nov 24 06:47:06.341063 containerd[1580]: time="2025-11-24T06:47:06.341034708Z" level=info msg="CreateContainer within sandbox \"3c8d8a2b54e2e2b6bee5f4108d70564b0f77724d7c18a20920c83a316b6e694e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2a812de2dc29b1d29257eb833d22795b409e8e5ce16b07ebdfc7cd8df8bbb8dc\"" Nov 24 06:47:06.341645 containerd[1580]: time="2025-11-24T06:47:06.341624086Z" level=info msg="StartContainer for \"2a812de2dc29b1d29257eb833d22795b409e8e5ce16b07ebdfc7cd8df8bbb8dc\"" Nov 24 06:47:06.342364 containerd[1580]: time="2025-11-24T06:47:06.342324683Z" level=info msg="connecting to shim 2a812de2dc29b1d29257eb833d22795b409e8e5ce16b07ebdfc7cd8df8bbb8dc" address="unix:///run/containerd/s/6d3f33cb096bcd4fbf7e28a580d5c7577641f49308e129a5242b6b9e41b2650c" protocol=ttrpc version=3 Nov 24 06:47:06.361521 containerd[1580]: time="2025-11-24T06:47:06.361470035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67b7ddc5fb-4vc4t,Uid:1302b06d-b9df-431a-8827-afe67da7f5a6,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"b083f7e596eeb1ed448190263f52479242922c84f2fdffa9bed135cddfd201ae\"" Nov 24 06:47:06.365243 systemd[1]: Started cri-containerd-2a812de2dc29b1d29257eb833d22795b409e8e5ce16b07ebdfc7cd8df8bbb8dc.scope - libcontainer container 2a812de2dc29b1d29257eb833d22795b409e8e5ce16b07ebdfc7cd8df8bbb8dc. 
Nov 24 06:47:06.398060 systemd-networkd[1478]: cali18765fd29b9: Link UP Nov 24 06:47:06.398253 systemd-networkd[1478]: cali18765fd29b9: Gained carrier Nov 24 06:47:06.412125 containerd[1580]: time="2025-11-24T06:47:06.411986430Z" level=info msg="StartContainer for \"2a812de2dc29b1d29257eb833d22795b409e8e5ce16b07ebdfc7cd8df8bbb8dc\" returns successfully" Nov 24 06:47:06.416458 containerd[1580]: 2025-11-24 06:47:06.317 [INFO][4458] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 24 06:47:06.416458 containerd[1580]: 2025-11-24 06:47:06.331 [INFO][4458] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7b4bdcc677--xn7rm-eth0 calico-kube-controllers-7b4bdcc677- calico-system 177b7bf1-bf8e-4661-9261-5e6527071df2 801 0 2025-11-24 06:46:44 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7b4bdcc677 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7b4bdcc677-xn7rm eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali18765fd29b9 [] [] }} ContainerID="606b659b55be29128b034e4cb6885a10d6161fe25249c6cbeb3a8da8e938ac02" Namespace="calico-system" Pod="calico-kube-controllers-7b4bdcc677-xn7rm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b4bdcc677--xn7rm-" Nov 24 06:47:06.416458 containerd[1580]: 2025-11-24 06:47:06.331 [INFO][4458] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="606b659b55be29128b034e4cb6885a10d6161fe25249c6cbeb3a8da8e938ac02" Namespace="calico-system" Pod="calico-kube-controllers-7b4bdcc677-xn7rm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b4bdcc677--xn7rm-eth0" Nov 24 06:47:06.416458 containerd[1580]: 2025-11-24 06:47:06.360 [INFO][4486] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="606b659b55be29128b034e4cb6885a10d6161fe25249c6cbeb3a8da8e938ac02" HandleID="k8s-pod-network.606b659b55be29128b034e4cb6885a10d6161fe25249c6cbeb3a8da8e938ac02" Workload="localhost-k8s-calico--kube--controllers--7b4bdcc677--xn7rm-eth0" Nov 24 06:47:06.416458 containerd[1580]: 2025-11-24 06:47:06.361 [INFO][4486] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="606b659b55be29128b034e4cb6885a10d6161fe25249c6cbeb3a8da8e938ac02" HandleID="k8s-pod-network.606b659b55be29128b034e4cb6885a10d6161fe25249c6cbeb3a8da8e938ac02" Workload="localhost-k8s-calico--kube--controllers--7b4bdcc677--xn7rm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00013b480), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7b4bdcc677-xn7rm", "timestamp":"2025-11-24 06:47:06.360232037 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 06:47:06.416458 containerd[1580]: 2025-11-24 06:47:06.362 [INFO][4486] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 06:47:06.416458 containerd[1580]: 2025-11-24 06:47:06.362 [INFO][4486] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 24 06:47:06.416458 containerd[1580]: 2025-11-24 06:47:06.362 [INFO][4486] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 24 06:47:06.416458 containerd[1580]: 2025-11-24 06:47:06.369 [INFO][4486] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.606b659b55be29128b034e4cb6885a10d6161fe25249c6cbeb3a8da8e938ac02" host="localhost" Nov 24 06:47:06.416458 containerd[1580]: 2025-11-24 06:47:06.372 [INFO][4486] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 24 06:47:06.416458 containerd[1580]: 2025-11-24 06:47:06.376 [INFO][4486] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 24 06:47:06.416458 containerd[1580]: 2025-11-24 06:47:06.378 [INFO][4486] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 24 06:47:06.416458 containerd[1580]: 2025-11-24 06:47:06.380 [INFO][4486] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 24 06:47:06.416458 containerd[1580]: 2025-11-24 06:47:06.380 [INFO][4486] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.606b659b55be29128b034e4cb6885a10d6161fe25249c6cbeb3a8da8e938ac02" host="localhost" Nov 24 06:47:06.416458 containerd[1580]: 2025-11-24 06:47:06.381 [INFO][4486] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.606b659b55be29128b034e4cb6885a10d6161fe25249c6cbeb3a8da8e938ac02 Nov 24 06:47:06.416458 containerd[1580]: 2025-11-24 06:47:06.384 [INFO][4486] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.606b659b55be29128b034e4cb6885a10d6161fe25249c6cbeb3a8da8e938ac02" host="localhost" Nov 24 06:47:06.416458 containerd[1580]: 2025-11-24 06:47:06.389 [INFO][4486] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.606b659b55be29128b034e4cb6885a10d6161fe25249c6cbeb3a8da8e938ac02" host="localhost" Nov 24 06:47:06.416458 containerd[1580]: 2025-11-24 06:47:06.389 [INFO][4486] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.606b659b55be29128b034e4cb6885a10d6161fe25249c6cbeb3a8da8e938ac02" host="localhost" Nov 24 06:47:06.416458 containerd[1580]: 2025-11-24 06:47:06.389 [INFO][4486] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 24 06:47:06.416458 containerd[1580]: 2025-11-24 06:47:06.389 [INFO][4486] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="606b659b55be29128b034e4cb6885a10d6161fe25249c6cbeb3a8da8e938ac02" HandleID="k8s-pod-network.606b659b55be29128b034e4cb6885a10d6161fe25249c6cbeb3a8da8e938ac02" Workload="localhost-k8s-calico--kube--controllers--7b4bdcc677--xn7rm-eth0" Nov 24 06:47:06.417009 containerd[1580]: 2025-11-24 06:47:06.395 [INFO][4458] cni-plugin/k8s.go 418: Populated endpoint ContainerID="606b659b55be29128b034e4cb6885a10d6161fe25249c6cbeb3a8da8e938ac02" Namespace="calico-system" Pod="calico-kube-controllers-7b4bdcc677-xn7rm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b4bdcc677--xn7rm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7b4bdcc677--xn7rm-eth0", GenerateName:"calico-kube-controllers-7b4bdcc677-", Namespace:"calico-system", SelfLink:"", UID:"177b7bf1-bf8e-4661-9261-5e6527071df2", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 6, 46, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b4bdcc677", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7b4bdcc677-xn7rm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali18765fd29b9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 06:47:06.417009 containerd[1580]: 2025-11-24 06:47:06.395 [INFO][4458] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="606b659b55be29128b034e4cb6885a10d6161fe25249c6cbeb3a8da8e938ac02" Namespace="calico-system" Pod="calico-kube-controllers-7b4bdcc677-xn7rm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b4bdcc677--xn7rm-eth0" Nov 24 06:47:06.417009 containerd[1580]: 2025-11-24 06:47:06.395 [INFO][4458] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali18765fd29b9 ContainerID="606b659b55be29128b034e4cb6885a10d6161fe25249c6cbeb3a8da8e938ac02" Namespace="calico-system" Pod="calico-kube-controllers-7b4bdcc677-xn7rm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b4bdcc677--xn7rm-eth0" Nov 24 06:47:06.417009 containerd[1580]: 2025-11-24 06:47:06.398 [INFO][4458] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="606b659b55be29128b034e4cb6885a10d6161fe25249c6cbeb3a8da8e938ac02" Namespace="calico-system" Pod="calico-kube-controllers-7b4bdcc677-xn7rm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b4bdcc677--xn7rm-eth0" Nov 24 06:47:06.417009 containerd[1580]: 2025-11-24 06:47:06.399 [INFO][4458] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="606b659b55be29128b034e4cb6885a10d6161fe25249c6cbeb3a8da8e938ac02" Namespace="calico-system" Pod="calico-kube-controllers-7b4bdcc677-xn7rm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b4bdcc677--xn7rm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7b4bdcc677--xn7rm-eth0", GenerateName:"calico-kube-controllers-7b4bdcc677-", Namespace:"calico-system", SelfLink:"", UID:"177b7bf1-bf8e-4661-9261-5e6527071df2", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 6, 46, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b4bdcc677", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"606b659b55be29128b034e4cb6885a10d6161fe25249c6cbeb3a8da8e938ac02", Pod:"calico-kube-controllers-7b4bdcc677-xn7rm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali18765fd29b9", MAC:"b2:24:d0:91:80:38", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 06:47:06.417009 containerd[1580]: 2025-11-24 06:47:06.413 [INFO][4458] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="606b659b55be29128b034e4cb6885a10d6161fe25249c6cbeb3a8da8e938ac02" Namespace="calico-system" Pod="calico-kube-controllers-7b4bdcc677-xn7rm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b4bdcc677--xn7rm-eth0" Nov 24 06:47:06.433676 kubelet[2745]: I1124 06:47:06.433623 2745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-rmvgx" podStartSLOduration=34.433610672 podStartE2EDuration="34.433610672s" podCreationTimestamp="2025-11-24 06:46:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 06:47:06.432920635 +0000 UTC m=+39.245559423" watchObservedRunningTime="2025-11-24 06:47:06.433610672 +0000 UTC m=+39.246249470" Nov 24 06:47:06.438411 containerd[1580]: time="2025-11-24T06:47:06.438211050Z" level=info msg="connecting to shim 606b659b55be29128b034e4cb6885a10d6161fe25249c6cbeb3a8da8e938ac02" address="unix:///run/containerd/s/eeafa0089fc3145cd7f9db59da691ee78792e31dbf605c1b5f9dd3c0390ec013" namespace=k8s.io protocol=ttrpc version=3 Nov 24 06:47:06.467125 systemd[1]: Started cri-containerd-606b659b55be29128b034e4cb6885a10d6161fe25249c6cbeb3a8da8e938ac02.scope - libcontainer container 606b659b55be29128b034e4cb6885a10d6161fe25249c6cbeb3a8da8e938ac02. 
Nov 24 06:47:06.490100 systemd-resolved[1393]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 24 06:47:06.525583 containerd[1580]: time="2025-11-24T06:47:06.525536854Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 06:47:06.589928 containerd[1580]: time="2025-11-24T06:47:06.589789924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b4bdcc677-xn7rm,Uid:177b7bf1-bf8e-4661-9261-5e6527071df2,Namespace:calico-system,Attempt:0,} returns sandbox id \"606b659b55be29128b034e4cb6885a10d6161fe25249c6cbeb3a8da8e938ac02\"" Nov 24 06:47:06.590041 containerd[1580]: time="2025-11-24T06:47:06.589990241Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 06:47:06.590080 containerd[1580]: time="2025-11-24T06:47:06.590048581Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 06:47:06.590272 kubelet[2745]: E1124 06:47:06.590202 2745 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 06:47:06.590272 kubelet[2745]: E1124 06:47:06.590245 2745 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 06:47:06.590768 containerd[1580]: time="2025-11-24T06:47:06.590478780Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 06:47:06.592774 kubelet[2745]: E1124 06:47:06.592715 2745 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7nbx8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-67b7ddc5fb-jmhdl_calico-apiserver(7a2d00df-2419-4869-83bf-460d83fbab1e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 06:47:06.594180 kubelet[2745]: E1124 06:47:06.593925 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67b7ddc5fb-jmhdl" podUID="7a2d00df-2419-4869-83bf-460d83fbab1e" Nov 24 06:47:06.608665 systemd[1]: Started sshd@7-10.0.0.28:22-10.0.0.1:60228.service - OpenSSH per-connection server daemon (10.0.0.1:60228). Nov 24 06:47:06.684044 sshd[4594]: Accepted publickey for core from 10.0.0.1 port 60228 ssh2: RSA SHA256:Sf0YHjxtsdVO/uubGACjTK34hLK2zLZsCrSD2NZWg/o Nov 24 06:47:06.685567 sshd-session[4594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 06:47:06.689421 systemd-logind[1560]: New session 8 of user core. Nov 24 06:47:06.700000 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 24 06:47:06.829313 sshd[4597]: Connection closed by 10.0.0.1 port 60228 Nov 24 06:47:06.829630 sshd-session[4594]: pam_unix(sshd:session): session closed for user core Nov 24 06:47:06.833927 systemd[1]: sshd@7-10.0.0.28:22-10.0.0.1:60228.service: Deactivated successfully. Nov 24 06:47:06.835731 systemd[1]: session-8.scope: Deactivated successfully. Nov 24 06:47:06.836402 systemd-logind[1560]: Session 8 logged out. Waiting for processes to exit. Nov 24 06:47:06.837771 systemd-logind[1560]: Removed session 8. 
Nov 24 06:47:06.930402 containerd[1580]: time="2025-11-24T06:47:06.930295613Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 06:47:06.932212 containerd[1580]: time="2025-11-24T06:47:06.932174215Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 06:47:06.932270 containerd[1580]: time="2025-11-24T06:47:06.932244558Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 06:47:06.932428 kubelet[2745]: E1124 06:47:06.932396 2745 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 06:47:06.932519 kubelet[2745]: E1124 06:47:06.932440 2745 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 06:47:06.932725 containerd[1580]: time="2025-11-24T06:47:06.932692119Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 24 06:47:06.933035 kubelet[2745]: E1124 06:47:06.932996 2745 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-498d7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-67b7ddc5fb-4vc4t_calico-apiserver(1302b06d-b9df-431a-8827-afe67da7f5a6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 06:47:06.934268 kubelet[2745]: E1124 06:47:06.934217 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67b7ddc5fb-4vc4t" podUID="1302b06d-b9df-431a-8827-afe67da7f5a6" Nov 24 06:47:06.982046 systemd-networkd[1478]: cali40d46c604ab: Gained IPv6LL Nov 24 06:47:07.246650 containerd[1580]: time="2025-11-24T06:47:07.246534397Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 06:47:07.248023 containerd[1580]: time="2025-11-24T06:47:07.247985084Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 24 06:47:07.248090 containerd[1580]: time="2025-11-24T06:47:07.248030810Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 24 06:47:07.248194 kubelet[2745]: E1124 06:47:07.248162 2745 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 06:47:07.248252 kubelet[2745]: E1124 06:47:07.248203 2745 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 06:47:07.248364 kubelet[2745]: 
E1124 06:47:07.248314 2745 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-92bkf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7b4bdcc677-xn7rm_calico-system(177b7bf1-bf8e-4661-9261-5e6527071df2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 24 06:47:07.249537 kubelet[2745]: E1124 06:47:07.249483 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b4bdcc677-xn7rm" podUID="177b7bf1-bf8e-4661-9261-5e6527071df2" Nov 24 
06:47:07.283019 containerd[1580]: time="2025-11-24T06:47:07.282957418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-g9dlx,Uid:fd1e74d6-669b-452f-99a0-8be45fb721f8,Namespace:kube-system,Attempt:0,}" Nov 24 06:47:07.283139 containerd[1580]: time="2025-11-24T06:47:07.283105076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-mn54z,Uid:21b695ab-2e0f-4bcd-851e-75e047fb3c73,Namespace:calico-system,Attempt:0,}" Nov 24 06:47:07.391697 systemd-networkd[1478]: cali619f2cf9373: Link UP Nov 24 06:47:07.392920 systemd-networkd[1478]: cali619f2cf9373: Gained carrier Nov 24 06:47:07.405043 containerd[1580]: 2025-11-24 06:47:07.324 [INFO][4648] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 24 06:47:07.405043 containerd[1580]: 2025-11-24 06:47:07.334 [INFO][4648] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--mn54z-eth0 goldmane-666569f655- calico-system 21b695ab-2e0f-4bcd-851e-75e047fb3c73 805 0 2025-11-24 06:46:41 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-mn54z eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali619f2cf9373 [] [] }} ContainerID="facfb7cf75ec934c9894b89b841c8d151c4fb3d969ccaf178b6088568ba56ed2" Namespace="calico-system" Pod="goldmane-666569f655-mn54z" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--mn54z-" Nov 24 06:47:07.405043 containerd[1580]: 2025-11-24 06:47:07.335 [INFO][4648] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="facfb7cf75ec934c9894b89b841c8d151c4fb3d969ccaf178b6088568ba56ed2" Namespace="calico-system" Pod="goldmane-666569f655-mn54z" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--mn54z-eth0" Nov 24 06:47:07.405043 containerd[1580]: 2025-11-24 06:47:07.359 [INFO][4668] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="facfb7cf75ec934c9894b89b841c8d151c4fb3d969ccaf178b6088568ba56ed2" HandleID="k8s-pod-network.facfb7cf75ec934c9894b89b841c8d151c4fb3d969ccaf178b6088568ba56ed2" Workload="localhost-k8s-goldmane--666569f655--mn54z-eth0" Nov 24 06:47:07.405043 containerd[1580]: 2025-11-24 06:47:07.359 [INFO][4668] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="facfb7cf75ec934c9894b89b841c8d151c4fb3d969ccaf178b6088568ba56ed2" HandleID="k8s-pod-network.facfb7cf75ec934c9894b89b841c8d151c4fb3d969ccaf178b6088568ba56ed2" Workload="localhost-k8s-goldmane--666569f655--mn54z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00043a0b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-mn54z", "timestamp":"2025-11-24 06:47:07.359675618 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 06:47:07.405043 containerd[1580]: 2025-11-24 06:47:07.360 [INFO][4668] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 06:47:07.405043 containerd[1580]: 2025-11-24 06:47:07.360 [INFO][4668] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 24 06:47:07.405043 containerd[1580]: 2025-11-24 06:47:07.360 [INFO][4668] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 24 06:47:07.405043 containerd[1580]: 2025-11-24 06:47:07.366 [INFO][4668] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.facfb7cf75ec934c9894b89b841c8d151c4fb3d969ccaf178b6088568ba56ed2" host="localhost" Nov 24 06:47:07.405043 containerd[1580]: 2025-11-24 06:47:07.369 [INFO][4668] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 24 06:47:07.405043 containerd[1580]: 2025-11-24 06:47:07.373 [INFO][4668] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 24 06:47:07.405043 containerd[1580]: 2025-11-24 06:47:07.375 [INFO][4668] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 24 06:47:07.405043 containerd[1580]: 2025-11-24 06:47:07.377 [INFO][4668] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 24 06:47:07.405043 containerd[1580]: 2025-11-24 06:47:07.377 [INFO][4668] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.facfb7cf75ec934c9894b89b841c8d151c4fb3d969ccaf178b6088568ba56ed2" host="localhost" Nov 24 06:47:07.405043 containerd[1580]: 2025-11-24 06:47:07.378 [INFO][4668] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.facfb7cf75ec934c9894b89b841c8d151c4fb3d969ccaf178b6088568ba56ed2 Nov 24 06:47:07.405043 containerd[1580]: 2025-11-24 06:47:07.381 [INFO][4668] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.facfb7cf75ec934c9894b89b841c8d151c4fb3d969ccaf178b6088568ba56ed2" host="localhost" Nov 24 06:47:07.405043 containerd[1580]: 2025-11-24 06:47:07.386 [INFO][4668] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.facfb7cf75ec934c9894b89b841c8d151c4fb3d969ccaf178b6088568ba56ed2" host="localhost" Nov 24 06:47:07.405043 containerd[1580]: 2025-11-24 06:47:07.386 [INFO][4668] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.facfb7cf75ec934c9894b89b841c8d151c4fb3d969ccaf178b6088568ba56ed2" host="localhost" Nov 24 06:47:07.405043 containerd[1580]: 2025-11-24 06:47:07.386 [INFO][4668] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 24 06:47:07.405043 containerd[1580]: 2025-11-24 06:47:07.386 [INFO][4668] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="facfb7cf75ec934c9894b89b841c8d151c4fb3d969ccaf178b6088568ba56ed2" HandleID="k8s-pod-network.facfb7cf75ec934c9894b89b841c8d151c4fb3d969ccaf178b6088568ba56ed2" Workload="localhost-k8s-goldmane--666569f655--mn54z-eth0" Nov 24 06:47:07.405672 containerd[1580]: 2025-11-24 06:47:07.389 [INFO][4648] cni-plugin/k8s.go 418: Populated endpoint ContainerID="facfb7cf75ec934c9894b89b841c8d151c4fb3d969ccaf178b6088568ba56ed2" Namespace="calico-system" Pod="goldmane-666569f655-mn54z" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--mn54z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--mn54z-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"21b695ab-2e0f-4bcd-851e-75e047fb3c73", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 6, 46, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-mn54z", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali619f2cf9373", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 06:47:07.405672 containerd[1580]: 2025-11-24 06:47:07.389 [INFO][4648] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="facfb7cf75ec934c9894b89b841c8d151c4fb3d969ccaf178b6088568ba56ed2" Namespace="calico-system" Pod="goldmane-666569f655-mn54z" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--mn54z-eth0" Nov 24 06:47:07.405672 containerd[1580]: 2025-11-24 06:47:07.389 [INFO][4648] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali619f2cf9373 ContainerID="facfb7cf75ec934c9894b89b841c8d151c4fb3d969ccaf178b6088568ba56ed2" Namespace="calico-system" Pod="goldmane-666569f655-mn54z" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--mn54z-eth0" Nov 24 06:47:07.405672 containerd[1580]: 2025-11-24 06:47:07.391 [INFO][4648] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="facfb7cf75ec934c9894b89b841c8d151c4fb3d969ccaf178b6088568ba56ed2" Namespace="calico-system" Pod="goldmane-666569f655-mn54z" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--mn54z-eth0" Nov 24 06:47:07.405672 containerd[1580]: 2025-11-24 06:47:07.392 [INFO][4648] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="facfb7cf75ec934c9894b89b841c8d151c4fb3d969ccaf178b6088568ba56ed2" Namespace="calico-system" Pod="goldmane-666569f655-mn54z" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--mn54z-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--mn54z-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"21b695ab-2e0f-4bcd-851e-75e047fb3c73", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 6, 46, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"facfb7cf75ec934c9894b89b841c8d151c4fb3d969ccaf178b6088568ba56ed2", Pod:"goldmane-666569f655-mn54z", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali619f2cf9373", MAC:"aa:27:99:5a:6e:93", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 06:47:07.405672 containerd[1580]: 2025-11-24 06:47:07.401 [INFO][4648] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="facfb7cf75ec934c9894b89b841c8d151c4fb3d969ccaf178b6088568ba56ed2" Namespace="calico-system" Pod="goldmane-666569f655-mn54z" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--mn54z-eth0" Nov 24 06:47:07.431598 kubelet[2745]: E1124 06:47:07.431543 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b4bdcc677-xn7rm" podUID="177b7bf1-bf8e-4661-9261-5e6527071df2" Nov 24 06:47:07.432424 kubelet[2745]: E1124 06:47:07.432192 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67b7ddc5fb-4vc4t" podUID="1302b06d-b9df-431a-8827-afe67da7f5a6" Nov 24 06:47:07.432859 kubelet[2745]: E1124 06:47:07.432824 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to 
resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67b7ddc5fb-jmhdl" podUID="7a2d00df-2419-4869-83bf-460d83fbab1e" Nov 24 06:47:07.440953 containerd[1580]: time="2025-11-24T06:47:07.440915566Z" level=info msg="connecting to shim facfb7cf75ec934c9894b89b841c8d151c4fb3d969ccaf178b6088568ba56ed2" address="unix:///run/containerd/s/81592bbc88e8f43a2645c04f4c005a5682ed7f5bb70ba698091c56615f1b6827" namespace=k8s.io protocol=ttrpc version=3 Nov 24 06:47:07.469305 systemd[1]: Started cri-containerd-facfb7cf75ec934c9894b89b841c8d151c4fb3d969ccaf178b6088568ba56ed2.scope - libcontainer container facfb7cf75ec934c9894b89b841c8d151c4fb3d969ccaf178b6088568ba56ed2. Nov 24 06:47:07.496829 systemd-resolved[1393]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 24 06:47:07.516687 systemd-networkd[1478]: cali087fb0de0e4: Link UP Nov 24 06:47:07.519855 systemd-networkd[1478]: cali087fb0de0e4: Gained carrier Nov 24 06:47:07.538504 containerd[1580]: 2025-11-24 06:47:07.324 [INFO][4638] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 24 06:47:07.538504 containerd[1580]: 2025-11-24 06:47:07.333 [INFO][4638] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--g9dlx-eth0 coredns-674b8bbfcf- kube-system fd1e74d6-669b-452f-99a0-8be45fb721f8 800 0 2025-11-24 06:46:32 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-g9dlx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali087fb0de0e4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="40967a0fdc8f884a3bdb06706ba90ba0d06912079b3ef25193415e4cfd64482e" Namespace="kube-system" Pod="coredns-674b8bbfcf-g9dlx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--g9dlx-" Nov 24 06:47:07.538504 containerd[1580]: 2025-11-24 06:47:07.333 [INFO][4638] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="40967a0fdc8f884a3bdb06706ba90ba0d06912079b3ef25193415e4cfd64482e" Namespace="kube-system" Pod="coredns-674b8bbfcf-g9dlx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--g9dlx-eth0" Nov 24 06:47:07.538504 containerd[1580]: 2025-11-24 06:47:07.362 [INFO][4666] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="40967a0fdc8f884a3bdb06706ba90ba0d06912079b3ef25193415e4cfd64482e" HandleID="k8s-pod-network.40967a0fdc8f884a3bdb06706ba90ba0d06912079b3ef25193415e4cfd64482e" Workload="localhost-k8s-coredns--674b8bbfcf--g9dlx-eth0" Nov 24 06:47:07.538504 containerd[1580]: 2025-11-24 06:47:07.362 [INFO][4666] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="40967a0fdc8f884a3bdb06706ba90ba0d06912079b3ef25193415e4cfd64482e" HandleID="k8s-pod-network.40967a0fdc8f884a3bdb06706ba90ba0d06912079b3ef25193415e4cfd64482e" Workload="localhost-k8s-coredns--674b8bbfcf--g9dlx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00033a2c0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-g9dlx", "timestamp":"2025-11-24 06:47:07.362712516 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 06:47:07.538504 containerd[1580]: 2025-11-24 06:47:07.363 [INFO][4666] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 06:47:07.538504 containerd[1580]: 2025-11-24 06:47:07.386 [INFO][4666] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 24 06:47:07.538504 containerd[1580]: 2025-11-24 06:47:07.386 [INFO][4666] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 24 06:47:07.538504 containerd[1580]: 2025-11-24 06:47:07.468 [INFO][4666] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.40967a0fdc8f884a3bdb06706ba90ba0d06912079b3ef25193415e4cfd64482e" host="localhost" Nov 24 06:47:07.538504 containerd[1580]: 2025-11-24 06:47:07.483 [INFO][4666] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 24 06:47:07.538504 containerd[1580]: 2025-11-24 06:47:07.489 [INFO][4666] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 24 06:47:07.538504 containerd[1580]: 2025-11-24 06:47:07.496 [INFO][4666] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 24 06:47:07.538504 containerd[1580]: 2025-11-24 06:47:07.500 [INFO][4666] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 24 06:47:07.538504 containerd[1580]: 2025-11-24 06:47:07.500 [INFO][4666] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.40967a0fdc8f884a3bdb06706ba90ba0d06912079b3ef25193415e4cfd64482e" host="localhost" Nov 24 06:47:07.538504 containerd[1580]: 2025-11-24 06:47:07.502 [INFO][4666] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.40967a0fdc8f884a3bdb06706ba90ba0d06912079b3ef25193415e4cfd64482e Nov 24 06:47:07.538504 containerd[1580]: 2025-11-24 06:47:07.505 [INFO][4666] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.40967a0fdc8f884a3bdb06706ba90ba0d06912079b3ef25193415e4cfd64482e" host="localhost" Nov 24 06:47:07.538504 containerd[1580]: 2025-11-24 06:47:07.511 [INFO][4666] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.40967a0fdc8f884a3bdb06706ba90ba0d06912079b3ef25193415e4cfd64482e" host="localhost" Nov 24 06:47:07.538504 containerd[1580]: 2025-11-24 06:47:07.511 [INFO][4666] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.40967a0fdc8f884a3bdb06706ba90ba0d06912079b3ef25193415e4cfd64482e" host="localhost" Nov 24 06:47:07.538504 containerd[1580]: 2025-11-24 06:47:07.511 [INFO][4666] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 24 06:47:07.538504 containerd[1580]: 2025-11-24 06:47:07.511 [INFO][4666] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="40967a0fdc8f884a3bdb06706ba90ba0d06912079b3ef25193415e4cfd64482e" HandleID="k8s-pod-network.40967a0fdc8f884a3bdb06706ba90ba0d06912079b3ef25193415e4cfd64482e" Workload="localhost-k8s-coredns--674b8bbfcf--g9dlx-eth0" Nov 24 06:47:07.539101 containerd[1580]: 2025-11-24 06:47:07.514 [INFO][4638] cni-plugin/k8s.go 418: Populated endpoint ContainerID="40967a0fdc8f884a3bdb06706ba90ba0d06912079b3ef25193415e4cfd64482e" Namespace="kube-system" Pod="coredns-674b8bbfcf-g9dlx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--g9dlx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--g9dlx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"fd1e74d6-669b-452f-99a0-8be45fb721f8", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 6, 46, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-g9dlx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali087fb0de0e4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 06:47:07.539101 containerd[1580]: 2025-11-24 06:47:07.514 [INFO][4638] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="40967a0fdc8f884a3bdb06706ba90ba0d06912079b3ef25193415e4cfd64482e" Namespace="kube-system" Pod="coredns-674b8bbfcf-g9dlx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--g9dlx-eth0" Nov 24 06:47:07.539101 containerd[1580]: 2025-11-24 06:47:07.514 [INFO][4638] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali087fb0de0e4 ContainerID="40967a0fdc8f884a3bdb06706ba90ba0d06912079b3ef25193415e4cfd64482e" Namespace="kube-system" Pod="coredns-674b8bbfcf-g9dlx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--g9dlx-eth0" Nov 24 06:47:07.539101 containerd[1580]: 2025-11-24 06:47:07.521 [INFO][4638] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="40967a0fdc8f884a3bdb06706ba90ba0d06912079b3ef25193415e4cfd64482e" Namespace="kube-system" Pod="coredns-674b8bbfcf-g9dlx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--g9dlx-eth0" Nov 24 06:47:07.539101 
containerd[1580]: 2025-11-24 06:47:07.522 [INFO][4638] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="40967a0fdc8f884a3bdb06706ba90ba0d06912079b3ef25193415e4cfd64482e" Namespace="kube-system" Pod="coredns-674b8bbfcf-g9dlx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--g9dlx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--g9dlx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"fd1e74d6-669b-452f-99a0-8be45fb721f8", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 6, 46, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"40967a0fdc8f884a3bdb06706ba90ba0d06912079b3ef25193415e4cfd64482e", Pod:"coredns-674b8bbfcf-g9dlx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali087fb0de0e4", MAC:"da:ef:9a:32:cf:59", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 06:47:07.539101 containerd[1580]: 2025-11-24 06:47:07.533 [INFO][4638] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="40967a0fdc8f884a3bdb06706ba90ba0d06912079b3ef25193415e4cfd64482e" Namespace="kube-system" Pod="coredns-674b8bbfcf-g9dlx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--g9dlx-eth0" Nov 24 06:47:07.542084 containerd[1580]: time="2025-11-24T06:47:07.542056100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-mn54z,Uid:21b695ab-2e0f-4bcd-851e-75e047fb3c73,Namespace:calico-system,Attempt:0,} returns sandbox id \"facfb7cf75ec934c9894b89b841c8d151c4fb3d969ccaf178b6088568ba56ed2\"" Nov 24 06:47:07.543582 containerd[1580]: time="2025-11-24T06:47:07.543517326Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 24 06:47:07.563739 containerd[1580]: time="2025-11-24T06:47:07.563697116Z" level=info msg="connecting to shim 40967a0fdc8f884a3bdb06706ba90ba0d06912079b3ef25193415e4cfd64482e" address="unix:///run/containerd/s/06bdbcb83f712fd1465da633206f4da54378d56639198e265a21b202496b38ae" namespace=k8s.io protocol=ttrpc version=3 Nov 24 06:47:07.595011 systemd[1]: Started cri-containerd-40967a0fdc8f884a3bdb06706ba90ba0d06912079b3ef25193415e4cfd64482e.scope - libcontainer container 40967a0fdc8f884a3bdb06706ba90ba0d06912079b3ef25193415e4cfd64482e. 
Nov 24 06:47:07.646349 systemd-resolved[1393]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 24 06:47:07.686904 containerd[1580]: time="2025-11-24T06:47:07.686836279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-g9dlx,Uid:fd1e74d6-669b-452f-99a0-8be45fb721f8,Namespace:kube-system,Attempt:0,} returns sandbox id \"40967a0fdc8f884a3bdb06706ba90ba0d06912079b3ef25193415e4cfd64482e\"" Nov 24 06:47:07.693107 containerd[1580]: time="2025-11-24T06:47:07.693004653Z" level=info msg="CreateContainer within sandbox \"40967a0fdc8f884a3bdb06706ba90ba0d06912079b3ef25193415e4cfd64482e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 24 06:47:07.703780 containerd[1580]: time="2025-11-24T06:47:07.703152287Z" level=info msg="Container 2910c91274d4ce3da64da8ab1d540e527a7c81d44207986c344e890d8654de15: CDI devices from CRI Config.CDIDevices: []" Nov 24 06:47:07.714258 containerd[1580]: time="2025-11-24T06:47:07.714209119Z" level=info msg="CreateContainer within sandbox \"40967a0fdc8f884a3bdb06706ba90ba0d06912079b3ef25193415e4cfd64482e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2910c91274d4ce3da64da8ab1d540e527a7c81d44207986c344e890d8654de15\"" Nov 24 06:47:07.716515 containerd[1580]: time="2025-11-24T06:47:07.716464027Z" level=info msg="StartContainer for \"2910c91274d4ce3da64da8ab1d540e527a7c81d44207986c344e890d8654de15\"" Nov 24 06:47:07.718385 containerd[1580]: time="2025-11-24T06:47:07.718353980Z" level=info msg="connecting to shim 2910c91274d4ce3da64da8ab1d540e527a7c81d44207986c344e890d8654de15" address="unix:///run/containerd/s/06bdbcb83f712fd1465da633206f4da54378d56639198e265a21b202496b38ae" protocol=ttrpc version=3 Nov 24 06:47:07.738002 systemd[1]: Started cri-containerd-2910c91274d4ce3da64da8ab1d540e527a7c81d44207986c344e890d8654de15.scope - libcontainer container 2910c91274d4ce3da64da8ab1d540e527a7c81d44207986c344e890d8654de15. 
Nov 24 06:47:07.750047 systemd-networkd[1478]: cali14ca8ddc397: Gained IPv6LL Nov 24 06:47:07.777715 containerd[1580]: time="2025-11-24T06:47:07.777683335Z" level=info msg="StartContainer for \"2910c91274d4ce3da64da8ab1d540e527a7c81d44207986c344e890d8654de15\" returns successfully" Nov 24 06:47:07.878055 systemd-networkd[1478]: cali18765fd29b9: Gained IPv6LL Nov 24 06:47:07.887142 containerd[1580]: time="2025-11-24T06:47:07.887101427Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 06:47:07.888399 containerd[1580]: time="2025-11-24T06:47:07.888341578Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 24 06:47:07.888560 containerd[1580]: time="2025-11-24T06:47:07.888415648Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 24 06:47:07.888603 kubelet[2745]: E1124 06:47:07.888563 2745 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 06:47:07.888956 kubelet[2745]: E1124 06:47:07.888614 2745 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 06:47:07.888956 kubelet[2745]: E1124 06:47:07.888749 2745 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rpt78,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-mn54z_calico-system(21b695ab-2e0f-4bcd-851e-75e047fb3c73): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 24 06:47:07.889981 kubelet[2745]: E1124 06:47:07.889935 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mn54z" podUID="21b695ab-2e0f-4bcd-851e-75e047fb3c73" Nov 24 06:47:08.070101 systemd-networkd[1478]: cali81be7e7d580: Gained IPv6LL Nov 24 06:47:08.434813 kubelet[2745]: E1124 06:47:08.434688 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b4bdcc677-xn7rm" podUID="177b7bf1-bf8e-4661-9261-5e6527071df2" Nov 24 06:47:08.435336 kubelet[2745]: E1124 06:47:08.435271 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mn54z" podUID="21b695ab-2e0f-4bcd-851e-75e047fb3c73" Nov 24 06:47:08.445155 kubelet[2745]: I1124 06:47:08.443972 2745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-g9dlx" 
podStartSLOduration=36.443952016 podStartE2EDuration="36.443952016s" podCreationTimestamp="2025-11-24 06:46:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 06:47:08.443919745 +0000 UTC m=+41.256558543" watchObservedRunningTime="2025-11-24 06:47:08.443952016 +0000 UTC m=+41.256590834" Nov 24 06:47:08.646055 systemd-networkd[1478]: cali619f2cf9373: Gained IPv6LL Nov 24 06:47:08.646432 systemd-networkd[1478]: cali087fb0de0e4: Gained IPv6LL Nov 24 06:47:09.435574 kubelet[2745]: E1124 06:47:09.435531 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mn54z" podUID="21b695ab-2e0f-4bcd-851e-75e047fb3c73" Nov 24 06:47:11.237607 kubelet[2745]: I1124 06:47:11.237555 2745 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 06:47:11.853010 systemd[1]: Started sshd@8-10.0.0.28:22-10.0.0.1:52596.service - OpenSSH per-connection server daemon (10.0.0.1:52596). Nov 24 06:47:11.939074 sshd[4935]: Accepted publickey for core from 10.0.0.1 port 52596 ssh2: RSA SHA256:Sf0YHjxtsdVO/uubGACjTK34hLK2zLZsCrSD2NZWg/o Nov 24 06:47:11.940851 sshd-session[4935]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 06:47:11.947368 systemd-logind[1560]: New session 9 of user core. Nov 24 06:47:11.955173 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 24 06:47:12.187826 sshd[4966]: Connection closed by 10.0.0.1 port 52596 Nov 24 06:47:12.187905 sshd-session[4935]: pam_unix(sshd:session): session closed for user core Nov 24 06:47:12.192274 systemd[1]: sshd@8-10.0.0.28:22-10.0.0.1:52596.service: Deactivated successfully. Nov 24 06:47:12.194399 systemd[1]: session-9.scope: Deactivated successfully. Nov 24 06:47:12.195534 systemd-logind[1560]: Session 9 logged out. Waiting for processes to exit. Nov 24 06:47:12.198113 systemd-logind[1560]: Removed session 9. 
Nov 24 06:47:12.479069 systemd-networkd[1478]: vxlan.calico: Link UP Nov 24 06:47:12.481227 systemd-networkd[1478]: vxlan.calico: Gained carrier Nov 24 06:47:14.342021 systemd-networkd[1478]: vxlan.calico: Gained IPv6LL Nov 24 06:47:16.283248 containerd[1580]: time="2025-11-24T06:47:16.283204255Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 24 06:47:16.753271 containerd[1580]: time="2025-11-24T06:47:16.753231455Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 06:47:16.841620 containerd[1580]: time="2025-11-24T06:47:16.841552454Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 24 06:47:16.841782 containerd[1580]: time="2025-11-24T06:47:16.841622795Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 24 06:47:16.841940 kubelet[2745]: E1124 06:47:16.841854 2745 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 06:47:16.842276 kubelet[2745]: E1124 06:47:16.841942 2745 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 06:47:16.842276 kubelet[2745]: E1124 06:47:16.842075 2745 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xgvg5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-cppmt_calico-system(c75bf025-c8e1-47b4-a88c-b817a4677d22): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 24 06:47:16.844021 containerd[1580]: time="2025-11-24T06:47:16.843994980Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 24 06:47:17.202768 systemd[1]: Started sshd@9-10.0.0.28:22-10.0.0.1:52608.service - OpenSSH per-connection server daemon (10.0.0.1:52608). 
Nov 24 06:47:17.204421 containerd[1580]: time="2025-11-24T06:47:17.204387180Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 06:47:17.205703 containerd[1580]: time="2025-11-24T06:47:17.205661983Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 24 06:47:17.205839 containerd[1580]: time="2025-11-24T06:47:17.205704934Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 24 06:47:17.205958 kubelet[2745]: E1124 06:47:17.205908 2745 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 06:47:17.206024 kubelet[2745]: E1124 06:47:17.205972 2745 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 06:47:17.206194 kubelet[2745]: E1124 06:47:17.206126 2745 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xgvg5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-cppmt_calico-system(c75bf025-c8e1-47b4-a88c-b817a4677d22): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 24 06:47:17.207469 kubelet[2745]: E1124 06:47:17.207432 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-cppmt" podUID="c75bf025-c8e1-47b4-a88c-b817a4677d22" Nov 24 06:47:17.263690 sshd[5075]: Accepted publickey for core from 10.0.0.1 port 52608 ssh2: RSA SHA256:Sf0YHjxtsdVO/uubGACjTK34hLK2zLZsCrSD2NZWg/o Nov 24 06:47:17.265114 sshd-session[5075]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 06:47:17.269313 systemd-logind[1560]: New session 10 of user core. Nov 24 06:47:17.278010 systemd[1]: Started session-10.scope - Session 10 of User core. 
Nov 24 06:47:17.283871 containerd[1580]: time="2025-11-24T06:47:17.283738375Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 24 06:47:17.405967 sshd[5078]: Connection closed by 10.0.0.1 port 52608 Nov 24 06:47:17.406294 sshd-session[5075]: pam_unix(sshd:session): session closed for user core Nov 24 06:47:17.410839 systemd[1]: sshd@9-10.0.0.28:22-10.0.0.1:52608.service: Deactivated successfully. Nov 24 06:47:17.412933 systemd[1]: session-10.scope: Deactivated successfully. Nov 24 06:47:17.413648 systemd-logind[1560]: Session 10 logged out. Waiting for processes to exit. Nov 24 06:47:17.415021 systemd-logind[1560]: Removed session 10. Nov 24 06:47:17.626713 containerd[1580]: time="2025-11-24T06:47:17.626666802Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 06:47:17.628034 containerd[1580]: time="2025-11-24T06:47:17.627980358Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 24 06:47:17.628173 containerd[1580]: time="2025-11-24T06:47:17.628051222Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 24 06:47:17.628382 kubelet[2745]: E1124 06:47:17.628329 2745 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 06:47:17.628382 kubelet[2745]: E1124 06:47:17.628384 2745 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 06:47:17.628569 kubelet[2745]: E1124 06:47:17.628531 2745 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:99d2583b01ef4bfea6ba45fb6725d95b,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dfhpc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7d94bc78c9-4zg8l_calico-system(a5bd3df9-185f-4936-8d04-99b968d43986): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 24 06:47:17.630568 containerd[1580]: time="2025-11-24T06:47:17.630542148Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 24 06:47:17.930179 containerd[1580]: time="2025-11-24T06:47:17.930057882Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 06:47:17.938007 containerd[1580]: time="2025-11-24T06:47:17.937943297Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 24 06:47:17.938094 containerd[1580]: time="2025-11-24T06:47:17.938005744Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 24 06:47:17.938238 kubelet[2745]: E1124 06:47:17.938187 2745 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 06:47:17.938550 kubelet[2745]: E1124 06:47:17.938244 2745 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 06:47:17.938550 kubelet[2745]: E1124 06:47:17.938365 2745 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dfhpc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7d94bc78c9-4zg8l_calico-system(a5bd3df9-185f-4936-8d04-99b968d43986): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 24 06:47:17.939567 kubelet[2745]: E1124 06:47:17.939528 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7d94bc78c9-4zg8l" podUID="a5bd3df9-185f-4936-8d04-99b968d43986" Nov 24 06:47:18.282732 containerd[1580]: time="2025-11-24T06:47:18.282696828Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 06:47:18.699099 containerd[1580]: time="2025-11-24T06:47:18.698969121Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 
24 06:47:18.846709 containerd[1580]: time="2025-11-24T06:47:18.846638855Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 06:47:18.846709 containerd[1580]: time="2025-11-24T06:47:18.846681565Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 06:47:18.846981 kubelet[2745]: E1124 06:47:18.846868 2745 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 06:47:18.847062 kubelet[2745]: E1124 06:47:18.846993 2745 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 06:47:18.847336 containerd[1580]: time="2025-11-24T06:47:18.847273116Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 06:47:18.847573 kubelet[2745]: E1124 06:47:18.847262 2745 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-498d7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-67b7ddc5fb-4vc4t_calico-apiserver(1302b06d-b9df-431a-8827-afe67da7f5a6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 06:47:18.848738 kubelet[2745]: E1124 06:47:18.848697 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67b7ddc5fb-4vc4t" podUID="1302b06d-b9df-431a-8827-afe67da7f5a6" Nov 24 06:47:19.161588 containerd[1580]: time="2025-11-24T06:47:19.161525948Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 06:47:19.162832 containerd[1580]: time="2025-11-24T06:47:19.162785232Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 06:47:19.163042 containerd[1580]: time="2025-11-24T06:47:19.162867547Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 06:47:19.163080 kubelet[2745]: E1124 06:47:19.163014 2745 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 06:47:19.163080 kubelet[2745]: E1124 06:47:19.163072 2745 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 06:47:19.163471 kubelet[2745]: E1124 06:47:19.163200 2745 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7nbx8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-67b7ddc5fb-jmhdl_calico-apiserver(7a2d00df-2419-4869-83bf-460d83fbab1e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 06:47:19.164391 kubelet[2745]: E1124 06:47:19.164346 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67b7ddc5fb-jmhdl" podUID="7a2d00df-2419-4869-83bf-460d83fbab1e" Nov 24 06:47:20.282649 containerd[1580]: time="2025-11-24T06:47:20.282380199Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 24 06:47:20.640837 containerd[1580]: time="2025-11-24T06:47:20.640712123Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 06:47:20.641893 containerd[1580]: time="2025-11-24T06:47:20.641840300Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 24 06:47:20.641958 containerd[1580]: time="2025-11-24T06:47:20.641894903Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 24 06:47:20.642057 kubelet[2745]: E1124 06:47:20.642019 2745 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 06:47:20.642350 kubelet[2745]: E1124 06:47:20.642061 2745 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 06:47:20.642350 kubelet[2745]: E1124 06:47:20.642180 2745 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-92bkf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7b4bdcc677-xn7rm_calico-system(177b7bf1-bf8e-4661-9261-5e6527071df2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 24 06:47:20.643378 kubelet[2745]: E1124 06:47:20.643345 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b4bdcc677-xn7rm" podUID="177b7bf1-bf8e-4661-9261-5e6527071df2" Nov 24 06:47:22.282843 containerd[1580]: time="2025-11-24T06:47:22.282805430Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 24 06:47:22.417219 systemd[1]: Started sshd@10-10.0.0.28:22-10.0.0.1:47926.service - OpenSSH per-connection server daemon (10.0.0.1:47926). Nov 24 06:47:22.460828 sshd[5101]: Accepted publickey for core from 10.0.0.1 port 47926 ssh2: RSA SHA256:Sf0YHjxtsdVO/uubGACjTK34hLK2zLZsCrSD2NZWg/o Nov 24 06:47:22.462520 sshd-session[5101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 06:47:22.466640 systemd-logind[1560]: New session 11 of user core. Nov 24 06:47:22.476021 systemd[1]: Started session-11.scope - Session 11 of User core. 
Nov 24 06:47:22.599859 containerd[1580]: time="2025-11-24T06:47:22.599607417Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 06:47:22.601468 containerd[1580]: time="2025-11-24T06:47:22.601414108Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 24 06:47:22.601591 containerd[1580]: time="2025-11-24T06:47:22.601490431Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 24 06:47:22.601682 kubelet[2745]: E1124 06:47:22.601635 2745 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 06:47:22.602228 kubelet[2745]: E1124 06:47:22.601695 2745 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 06:47:22.602228 kubelet[2745]: E1124 06:47:22.601824 2745 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rpt78,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-mn54z_calico-system(21b695ab-2e0f-4bcd-851e-75e047fb3c73): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 24 06:47:22.603999 kubelet[2745]: E1124 06:47:22.603960 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mn54z" podUID="21b695ab-2e0f-4bcd-851e-75e047fb3c73" Nov 24 06:47:22.606467 sshd[5104]: Connection closed by 10.0.0.1 port 47926 Nov 24 06:47:22.606869 sshd-session[5101]: pam_unix(sshd:session): session closed for user core Nov 24 06:47:22.616821 systemd[1]: sshd@10-10.0.0.28:22-10.0.0.1:47926.service: Deactivated successfully. Nov 24 06:47:22.619307 systemd[1]: session-11.scope: Deactivated successfully. Nov 24 06:47:22.620393 systemd-logind[1560]: Session 11 logged out. Waiting for processes to exit. Nov 24 06:47:22.624837 systemd[1]: Started sshd@11-10.0.0.28:22-10.0.0.1:47934.service - OpenSSH per-connection server daemon (10.0.0.1:47934). Nov 24 06:47:22.627629 systemd-logind[1560]: Removed session 11. Nov 24 06:47:22.674381 sshd[5118]: Accepted publickey for core from 10.0.0.1 port 47934 ssh2: RSA SHA256:Sf0YHjxtsdVO/uubGACjTK34hLK2zLZsCrSD2NZWg/o Nov 24 06:47:22.676308 sshd-session[5118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 06:47:22.682491 systemd-logind[1560]: New session 12 of user core. Nov 24 06:47:22.688063 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 24 06:47:22.836209 sshd[5122]: Connection closed by 10.0.0.1 port 47934 Nov 24 06:47:22.836721 sshd-session[5118]: pam_unix(sshd:session): session closed for user core Nov 24 06:47:22.849504 systemd[1]: sshd@11-10.0.0.28:22-10.0.0.1:47934.service: Deactivated successfully. Nov 24 06:47:22.853952 systemd[1]: session-12.scope: Deactivated successfully. Nov 24 06:47:22.855656 systemd-logind[1560]: Session 12 logged out. Waiting for processes to exit. Nov 24 06:47:22.858320 systemd[1]: Started sshd@12-10.0.0.28:22-10.0.0.1:47950.service - OpenSSH per-connection server daemon (10.0.0.1:47950). Nov 24 06:47:22.859080 systemd-logind[1560]: Removed session 12. 
Nov 24 06:47:23.100533 sshd[5134]: Accepted publickey for core from 10.0.0.1 port 47950 ssh2: RSA SHA256:Sf0YHjxtsdVO/uubGACjTK34hLK2zLZsCrSD2NZWg/o Nov 24 06:47:23.102190 sshd-session[5134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 06:47:23.106777 systemd-logind[1560]: New session 13 of user core. Nov 24 06:47:23.124023 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 24 06:47:23.248668 sshd[5145]: Connection closed by 10.0.0.1 port 47950 Nov 24 06:47:23.249013 sshd-session[5134]: pam_unix(sshd:session): session closed for user core Nov 24 06:47:23.253287 systemd[1]: sshd@12-10.0.0.28:22-10.0.0.1:47950.service: Deactivated successfully. Nov 24 06:47:23.255301 systemd[1]: session-13.scope: Deactivated successfully. Nov 24 06:47:23.256228 systemd-logind[1560]: Session 13 logged out. Waiting for processes to exit. Nov 24 06:47:23.257298 systemd-logind[1560]: Removed session 13. Nov 24 06:47:28.263897 systemd[1]: Started sshd@13-10.0.0.28:22-10.0.0.1:47952.service - OpenSSH per-connection server daemon (10.0.0.1:47952). Nov 24 06:47:28.283375 kubelet[2745]: E1124 06:47:28.283280 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-cppmt" podUID="c75bf025-c8e1-47b4-a88c-b817a4677d22" Nov 24 06:47:28.308753 sshd[5165]: Accepted publickey for core from 10.0.0.1 port 47952 ssh2: RSA SHA256:Sf0YHjxtsdVO/uubGACjTK34hLK2zLZsCrSD2NZWg/o Nov 24 06:47:28.310580 sshd-session[5165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 06:47:28.315047 systemd-logind[1560]: New session 14 of user core. Nov 24 06:47:28.326013 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 24 06:47:28.442076 sshd[5168]: Connection closed by 10.0.0.1 port 47952 Nov 24 06:47:28.442468 sshd-session[5165]: pam_unix(sshd:session): session closed for user core Nov 24 06:47:28.447609 systemd[1]: sshd@13-10.0.0.28:22-10.0.0.1:47952.service: Deactivated successfully. Nov 24 06:47:28.449887 systemd[1]: session-14.scope: Deactivated successfully. Nov 24 06:47:28.450762 systemd-logind[1560]: Session 14 logged out. Waiting for processes to exit. Nov 24 06:47:28.452170 systemd-logind[1560]: Removed session 14. 
Nov 24 06:47:30.282824 kubelet[2745]: E1124 06:47:30.282773 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67b7ddc5fb-jmhdl" podUID="7a2d00df-2419-4869-83bf-460d83fbab1e" Nov 24 06:47:31.282729 kubelet[2745]: E1124 06:47:31.282652 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67b7ddc5fb-4vc4t" podUID="1302b06d-b9df-431a-8827-afe67da7f5a6" Nov 24 06:47:31.283163 kubelet[2745]: E1124 06:47:31.283120 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7d94bc78c9-4zg8l" podUID="a5bd3df9-185f-4936-8d04-99b968d43986" Nov 24 06:47:33.462903 systemd[1]: Started sshd@14-10.0.0.28:22-10.0.0.1:52552.service - OpenSSH per-connection server daemon (10.0.0.1:52552). Nov 24 06:47:33.530554 sshd[5242]: Accepted publickey for core from 10.0.0.1 port 52552 ssh2: RSA SHA256:Sf0YHjxtsdVO/uubGACjTK34hLK2zLZsCrSD2NZWg/o Nov 24 06:47:33.532067 sshd-session[5242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 06:47:33.536407 systemd-logind[1560]: New session 15 of user core. Nov 24 06:47:33.545998 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 24 06:47:33.674364 sshd[5245]: Connection closed by 10.0.0.1 port 52552 Nov 24 06:47:33.674694 sshd-session[5242]: pam_unix(sshd:session): session closed for user core Nov 24 06:47:33.679700 systemd[1]: sshd@14-10.0.0.28:22-10.0.0.1:52552.service: Deactivated successfully. Nov 24 06:47:33.682094 systemd[1]: session-15.scope: Deactivated successfully. Nov 24 06:47:33.682949 systemd-logind[1560]: Session 15 logged out. Waiting for processes to exit. Nov 24 06:47:33.684571 systemd-logind[1560]: Removed session 15. 
Nov 24 06:47:34.282188 kubelet[2745]: E1124 06:47:34.282131 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b4bdcc677-xn7rm" podUID="177b7bf1-bf8e-4661-9261-5e6527071df2" Nov 24 06:47:35.282075 kubelet[2745]: E1124 06:47:35.282021 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mn54z" podUID="21b695ab-2e0f-4bcd-851e-75e047fb3c73" Nov 24 06:47:38.694634 systemd[1]: Started sshd@15-10.0.0.28:22-10.0.0.1:52554.service - OpenSSH per-connection server daemon (10.0.0.1:52554). Nov 24 06:47:38.748324 sshd[5258]: Accepted publickey for core from 10.0.0.1 port 52554 ssh2: RSA SHA256:Sf0YHjxtsdVO/uubGACjTK34hLK2zLZsCrSD2NZWg/o Nov 24 06:47:38.749964 sshd-session[5258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 06:47:38.754659 systemd-logind[1560]: New session 16 of user core. Nov 24 06:47:38.768021 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 24 06:47:38.883195 sshd[5261]: Connection closed by 10.0.0.1 port 52554 Nov 24 06:47:38.883514 sshd-session[5258]: pam_unix(sshd:session): session closed for user core Nov 24 06:47:38.888019 systemd[1]: sshd@15-10.0.0.28:22-10.0.0.1:52554.service: Deactivated successfully. Nov 24 06:47:38.890072 systemd[1]: session-16.scope: Deactivated successfully. Nov 24 06:47:38.890771 systemd-logind[1560]: Session 16 logged out. Waiting for processes to exit. Nov 24 06:47:38.891972 systemd-logind[1560]: Removed session 16. 
Nov 24 06:47:42.282847 containerd[1580]: time="2025-11-24T06:47:42.282785744Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 24 06:47:42.591145 containerd[1580]: time="2025-11-24T06:47:42.591005647Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 06:47:42.592583 containerd[1580]: time="2025-11-24T06:47:42.592505196Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 24 06:47:42.592583 containerd[1580]: time="2025-11-24T06:47:42.592555744Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 24 06:47:42.592831 kubelet[2745]: E1124 06:47:42.592765 2745 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 06:47:42.592831 kubelet[2745]: E1124 06:47:42.592826 2745 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 06:47:42.593306 kubelet[2745]: E1124 06:47:42.593146 2745 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:99d2583b01ef4bfea6ba45fb6725d95b,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dfhpc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7d94bc78c9-4zg8l_calico-system(a5bd3df9-185f-4936-8d04-99b968d43986): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 24 06:47:42.593562 containerd[1580]: time="2025-11-24T06:47:42.593513989Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 24 06:47:42.926866 containerd[1580]: time="2025-11-24T06:47:42.926706456Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 06:47:42.928236 containerd[1580]: time="2025-11-24T06:47:42.928185836Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 24 06:47:42.928387 containerd[1580]: time="2025-11-24T06:47:42.928268374Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 24 06:47:42.928488 kubelet[2745]: E1124 06:47:42.928439 2745 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 06:47:42.928545 kubelet[2745]: E1124 06:47:42.928500 2745 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 06:47:42.928842 containerd[1580]: time="2025-11-24T06:47:42.928792905Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 24 06:47:42.928936 kubelet[2745]: E1124 06:47:42.928794 2745 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xgvg5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-cppmt_calico-system(c75bf025-c8e1-47b4-a88c-b817a4677d22): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 24 06:47:43.300364 containerd[1580]: time="2025-11-24T06:47:43.300323268Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 06:47:43.301603 containerd[1580]: time="2025-11-24T06:47:43.301561362Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 24 06:47:43.301603 containerd[1580]: time="2025-11-24T06:47:43.301593043Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 24 06:47:43.301807 kubelet[2745]: E1124 06:47:43.301722 2745 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 06:47:43.301807 kubelet[2745]: E1124 06:47:43.301763 2745 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 06:47:43.302143 kubelet[2745]: E1124 06:47:43.301993 2745 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dfhpc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7d94bc78c9-4zg8l_calico-system(a5bd3df9-185f-4936-8d04-99b968d43986): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 24 06:47:43.302433 containerd[1580]: time="2025-11-24T06:47:43.302396810Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 24 06:47:43.303290 kubelet[2745]: E1124 06:47:43.303243 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7d94bc78c9-4zg8l" podUID="a5bd3df9-185f-4936-8d04-99b968d43986" Nov 24 06:47:43.647047 
containerd[1580]: time="2025-11-24T06:47:43.646914739Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 06:47:43.648473 containerd[1580]: time="2025-11-24T06:47:43.648409787Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 24 06:47:43.648518 containerd[1580]: time="2025-11-24T06:47:43.648487046Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 24 06:47:43.648684 kubelet[2745]: E1124 06:47:43.648641 2745 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 06:47:43.649014 kubelet[2745]: E1124 06:47:43.648691 2745 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 06:47:43.649014 kubelet[2745]: E1124 06:47:43.648807 2745 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xgvg5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-cppmt_calico-system(c75bf025-c8e1-47b4-a88c-b817a4677d22): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 24 06:47:43.650020 kubelet[2745]: E1124 06:47:43.649980 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-cppmt" podUID="c75bf025-c8e1-47b4-a88c-b817a4677d22" Nov 24 06:47:43.895482 systemd[1]: Started sshd@16-10.0.0.28:22-10.0.0.1:39658.service - OpenSSH per-connection server daemon (10.0.0.1:39658). Nov 24 06:47:43.941749 sshd[5274]: Accepted publickey for core from 10.0.0.1 port 39658 ssh2: RSA SHA256:Sf0YHjxtsdVO/uubGACjTK34hLK2zLZsCrSD2NZWg/o Nov 24 06:47:43.942953 sshd-session[5274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 06:47:43.946976 systemd-logind[1560]: New session 17 of user core. 
Nov 24 06:47:43.958005 systemd[1]: Started session-17.scope - Session 17 of User core.
Nov 24 06:47:44.066481 sshd[5277]: Connection closed by 10.0.0.1 port 39658
Nov 24 06:47:44.066787 sshd-session[5274]: pam_unix(sshd:session): session closed for user core
Nov 24 06:47:44.075543 systemd[1]: sshd@16-10.0.0.28:22-10.0.0.1:39658.service: Deactivated successfully.
Nov 24 06:47:44.077691 systemd[1]: session-17.scope: Deactivated successfully.
Nov 24 06:47:44.078652 systemd-logind[1560]: Session 17 logged out. Waiting for processes to exit.
Nov 24 06:47:44.081576 systemd[1]: Started sshd@17-10.0.0.28:22-10.0.0.1:39668.service - OpenSSH per-connection server daemon (10.0.0.1:39668).
Nov 24 06:47:44.082374 systemd-logind[1560]: Removed session 17.
Nov 24 06:47:44.129020 sshd[5291]: Accepted publickey for core from 10.0.0.1 port 39668 ssh2: RSA SHA256:Sf0YHjxtsdVO/uubGACjTK34hLK2zLZsCrSD2NZWg/o
Nov 24 06:47:44.130559 sshd-session[5291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 24 06:47:44.134985 systemd-logind[1560]: New session 18 of user core.
Nov 24 06:47:44.141126 systemd[1]: Started session-18.scope - Session 18 of User core.
Nov 24 06:47:44.283072 containerd[1580]: time="2025-11-24T06:47:44.283032192Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 24 06:47:44.356904 sshd[5295]: Connection closed by 10.0.0.1 port 39668
Nov 24 06:47:44.357590 sshd-session[5291]: pam_unix(sshd:session): session closed for user core
Nov 24 06:47:44.366425 systemd[1]: sshd@17-10.0.0.28:22-10.0.0.1:39668.service: Deactivated successfully.
Nov 24 06:47:44.369733 systemd[1]: session-18.scope: Deactivated successfully.
Nov 24 06:47:44.371411 systemd-logind[1560]: Session 18 logged out. Waiting for processes to exit.
Nov 24 06:47:44.376185 systemd[1]: Started sshd@18-10.0.0.28:22-10.0.0.1:39682.service - OpenSSH per-connection server daemon (10.0.0.1:39682).
Nov 24 06:47:44.378410 systemd-logind[1560]: Removed session 18.
Nov 24 06:47:44.445493 sshd[5308]: Accepted publickey for core from 10.0.0.1 port 39682 ssh2: RSA SHA256:Sf0YHjxtsdVO/uubGACjTK34hLK2zLZsCrSD2NZWg/o
Nov 24 06:47:44.446095 sshd-session[5308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 24 06:47:44.456272 systemd-logind[1560]: New session 19 of user core.
Nov 24 06:47:44.461031 systemd[1]: Started session-19.scope - Session 19 of User core.
Nov 24 06:47:44.612838 containerd[1580]: time="2025-11-24T06:47:44.612462161Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 06:47:44.641280 containerd[1580]: time="2025-11-24T06:47:44.641219226Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 06:47:44.641280 containerd[1580]: time="2025-11-24T06:47:44.641270143Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 06:47:44.641707 kubelet[2745]: E1124 06:47:44.641456 2745 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 06:47:44.641707 kubelet[2745]: E1124 06:47:44.641510 2745 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 06:47:44.641707 kubelet[2745]: E1124 06:47:44.641655 2745 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7nbx8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-67b7ddc5fb-jmhdl_calico-apiserver(7a2d00df-2419-4869-83bf-460d83fbab1e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 06:47:44.642870 kubelet[2745]: E1124 06:47:44.642822 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67b7ddc5fb-jmhdl" podUID="7a2d00df-2419-4869-83bf-460d83fbab1e" Nov 24 06:47:45.109732 sshd[5311]: Connection closed by 10.0.0.1 port 39682 Nov 24 06:47:45.110082 sshd-session[5308]: pam_unix(sshd:session): session closed for user core Nov 24 06:47:45.121540 systemd[1]: sshd@18-10.0.0.28:22-10.0.0.1:39682.service: Deactivated successfully. Nov 24 06:47:45.124074 systemd[1]: session-19.scope: Deactivated successfully. Nov 24 06:47:45.127141 systemd-logind[1560]: Session 19 logged out. Waiting for processes to exit. Nov 24 06:47:45.131151 systemd[1]: Started sshd@19-10.0.0.28:22-10.0.0.1:39690.service - OpenSSH per-connection server daemon (10.0.0.1:39690). Nov 24 06:47:45.134126 systemd-logind[1560]: Removed session 19. Nov 24 06:47:45.176466 sshd[5331]: Accepted publickey for core from 10.0.0.1 port 39690 ssh2: RSA SHA256:Sf0YHjxtsdVO/uubGACjTK34hLK2zLZsCrSD2NZWg/o Nov 24 06:47:45.178105 sshd-session[5331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 06:47:45.184172 systemd-logind[1560]: New session 20 of user core. Nov 24 06:47:45.188025 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 24 06:47:45.395423 sshd[5334]: Connection closed by 10.0.0.1 port 39690 Nov 24 06:47:45.396008 sshd-session[5331]: pam_unix(sshd:session): session closed for user core Nov 24 06:47:45.404975 systemd[1]: sshd@19-10.0.0.28:22-10.0.0.1:39690.service: Deactivated successfully. Nov 24 06:47:45.407120 systemd[1]: session-20.scope: Deactivated successfully. Nov 24 06:47:45.408090 systemd-logind[1560]: Session 20 logged out. Waiting for processes to exit. Nov 24 06:47:45.410907 systemd[1]: Started sshd@20-10.0.0.28:22-10.0.0.1:39694.service - OpenSSH per-connection server daemon (10.0.0.1:39694). Nov 24 06:47:45.411626 systemd-logind[1560]: Removed session 20. 
Nov 24 06:47:45.465371 sshd[5346]: Accepted publickey for core from 10.0.0.1 port 39694 ssh2: RSA SHA256:Sf0YHjxtsdVO/uubGACjTK34hLK2zLZsCrSD2NZWg/o Nov 24 06:47:45.466769 sshd-session[5346]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 06:47:45.471591 systemd-logind[1560]: New session 21 of user core. Nov 24 06:47:45.479008 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 24 06:47:45.599175 sshd[5350]: Connection closed by 10.0.0.1 port 39694 Nov 24 06:47:45.599468 sshd-session[5346]: pam_unix(sshd:session): session closed for user core Nov 24 06:47:45.603904 systemd[1]: sshd@20-10.0.0.28:22-10.0.0.1:39694.service: Deactivated successfully. Nov 24 06:47:45.605762 systemd[1]: session-21.scope: Deactivated successfully. Nov 24 06:47:45.606470 systemd-logind[1560]: Session 21 logged out. Waiting for processes to exit. Nov 24 06:47:45.607518 systemd-logind[1560]: Removed session 21. Nov 24 06:47:46.283182 containerd[1580]: time="2025-11-24T06:47:46.283137564Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 06:47:46.577521 containerd[1580]: time="2025-11-24T06:47:46.577405792Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 06:47:46.620670 containerd[1580]: time="2025-11-24T06:47:46.620619936Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 06:47:46.620822 containerd[1580]: time="2025-11-24T06:47:46.620663801Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 06:47:46.620896 kubelet[2745]: E1124 06:47:46.620838 2745 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 06:47:46.621216 kubelet[2745]: E1124 06:47:46.620901 2745 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 06:47:46.621216 kubelet[2745]: E1124 06:47:46.621049 2745 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-498d7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-67b7ddc5fb-4vc4t_calico-apiserver(1302b06d-b9df-431a-8827-afe67da7f5a6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 06:47:46.622321 kubelet[2745]: E1124 06:47:46.622284 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67b7ddc5fb-4vc4t" podUID="1302b06d-b9df-431a-8827-afe67da7f5a6" Nov 24 06:47:48.282227 containerd[1580]: time="2025-11-24T06:47:48.282186589Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 24 06:47:48.629632 containerd[1580]: time="2025-11-24T06:47:48.629496323Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 06:47:48.630834 containerd[1580]: time="2025-11-24T06:47:48.630775186Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 24 06:47:48.630917 containerd[1580]: time="2025-11-24T06:47:48.630846373Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 24 06:47:48.631034 kubelet[2745]: E1124 06:47:48.630974 2745 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 06:47:48.631378 kubelet[2745]: E1124 06:47:48.631036 2745 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 06:47:48.631378 kubelet[2745]: E1124 06:47:48.631198 2745 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rpt78,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-mn54z_calico-system(21b695ab-2e0f-4bcd-851e-75e047fb3c73): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 24 06:47:48.632406 kubelet[2745]: E1124 06:47:48.632374 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mn54z" podUID="21b695ab-2e0f-4bcd-851e-75e047fb3c73" Nov 24 06:47:49.283267 containerd[1580]: time="2025-11-24T06:47:49.282995175Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 24 06:47:49.637008 containerd[1580]: time="2025-11-24T06:47:49.636864743Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 06:47:49.638835 containerd[1580]: time="2025-11-24T06:47:49.638746212Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 24 06:47:49.638928 containerd[1580]: time="2025-11-24T06:47:49.638761911Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 24 06:47:49.639100 kubelet[2745]: E1124 06:47:49.639051 2745 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 06:47:49.639452 kubelet[2745]: E1124 06:47:49.639109 2745 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 06:47:49.639452 
kubelet[2745]: E1124 06:47:49.639222 2745 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-92bkf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7b4bdcc677-xn7rm_calico-system(177b7bf1-bf8e-4661-9261-5e6527071df2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 24 06:47:49.640375 kubelet[2745]: E1124 06:47:49.640351 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b4bdcc677-xn7rm" podUID="177b7bf1-bf8e-4661-9261-5e6527071df2" Nov 
24 06:47:50.615245 systemd[1]: Started sshd@21-10.0.0.28:22-10.0.0.1:33670.service - OpenSSH per-connection server daemon (10.0.0.1:33670).
Nov 24 06:47:50.666151 sshd[5367]: Accepted publickey for core from 10.0.0.1 port 33670 ssh2: RSA SHA256:Sf0YHjxtsdVO/uubGACjTK34hLK2zLZsCrSD2NZWg/o
Nov 24 06:47:50.667332 sshd-session[5367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 24 06:47:50.671065 systemd-logind[1560]: New session 22 of user core.
Nov 24 06:47:50.679002 systemd[1]: Started session-22.scope - Session 22 of User core.
Nov 24 06:47:50.783686 sshd[5370]: Connection closed by 10.0.0.1 port 33670
Nov 24 06:47:50.784014 sshd-session[5367]: pam_unix(sshd:session): session closed for user core
Nov 24 06:47:50.787609 systemd[1]: sshd@21-10.0.0.28:22-10.0.0.1:33670.service: Deactivated successfully.
Nov 24 06:47:50.789492 systemd[1]: session-22.scope: Deactivated successfully.
Nov 24 06:47:50.790312 systemd-logind[1560]: Session 22 logged out. Waiting for processes to exit.
Nov 24 06:47:50.791631 systemd-logind[1560]: Removed session 22.
Nov 24 06:47:55.797550 systemd[1]: Started sshd@22-10.0.0.28:22-10.0.0.1:33686.service - OpenSSH per-connection server daemon (10.0.0.1:33686).
Nov 24 06:47:55.846146 sshd[5391]: Accepted publickey for core from 10.0.0.1 port 33686 ssh2: RSA SHA256:Sf0YHjxtsdVO/uubGACjTK34hLK2zLZsCrSD2NZWg/o
Nov 24 06:47:55.847453 sshd-session[5391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 24 06:47:55.851302 systemd-logind[1560]: New session 23 of user core.
Nov 24 06:47:55.862000 systemd[1]: Started session-23.scope - Session 23 of User core.
Nov 24 06:47:55.961985 sshd[5394]: Connection closed by 10.0.0.1 port 33686
Nov 24 06:47:55.962317 sshd-session[5391]: pam_unix(sshd:session): session closed for user core
Nov 24 06:47:55.966242 systemd[1]: sshd@22-10.0.0.28:22-10.0.0.1:33686.service: Deactivated successfully.
Nov 24 06:47:55.968191 systemd[1]: session-23.scope: Deactivated successfully.
Nov 24 06:47:55.968969 systemd-logind[1560]: Session 23 logged out. Waiting for processes to exit.
Nov 24 06:47:55.970266 systemd-logind[1560]: Removed session 23.
Nov 24 06:47:56.283332 kubelet[2745]: E1124 06:47:56.283273 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7d94bc78c9-4zg8l" podUID="a5bd3df9-185f-4936-8d04-99b968d43986"
Nov 24 06:47:56.283990 kubelet[2745]: E1124 06:47:56.283730 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-cppmt" podUID="c75bf025-c8e1-47b4-a88c-b817a4677d22"
Nov 24 06:47:58.282513 kubelet[2745]: E1124 06:47:58.282435 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67b7ddc5fb-jmhdl" podUID="7a2d00df-2419-4869-83bf-460d83fbab1e"
Nov 24 06:47:59.282574 kubelet[2745]: E1124 06:47:59.281997 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mn54z" podUID="21b695ab-2e0f-4bcd-851e-75e047fb3c73"
Nov 24 06:48:00.976722 systemd[1]: Started sshd@23-10.0.0.28:22-10.0.0.1:59628.service - OpenSSH per-connection server daemon (10.0.0.1:59628).
Nov 24 06:48:01.037508 sshd[5434]: Accepted publickey for core from 10.0.0.1 port 59628 ssh2: RSA SHA256:Sf0YHjxtsdVO/uubGACjTK34hLK2zLZsCrSD2NZWg/o
Nov 24 06:48:01.038907 sshd-session[5434]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 24 06:48:01.043156 systemd-logind[1560]: New session 24 of user core.
Nov 24 06:48:01.050007 systemd[1]: Started session-24.scope - Session 24 of User core.
Nov 24 06:48:01.163279 sshd[5437]: Connection closed by 10.0.0.1 port 59628
Nov 24 06:48:01.163608 sshd-session[5434]: pam_unix(sshd:session): session closed for user core
Nov 24 06:48:01.168056 systemd[1]: sshd@23-10.0.0.28:22-10.0.0.1:59628.service: Deactivated successfully.
Nov 24 06:48:01.170143 systemd[1]: session-24.scope: Deactivated successfully.
Nov 24 06:48:01.171047 systemd-logind[1560]: Session 24 logged out. Waiting for processes to exit.
Nov 24 06:48:01.172514 systemd-logind[1560]: Removed session 24.
Nov 24 06:48:01.300898 kubelet[2745]: E1124 06:48:01.299750 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67b7ddc5fb-4vc4t" podUID="1302b06d-b9df-431a-8827-afe67da7f5a6"
Nov 24 06:48:02.283104 kubelet[2745]: E1124 06:48:02.283023 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b4bdcc677-xn7rm" podUID="177b7bf1-bf8e-4661-9261-5e6527071df2"