Oct 27 08:27:50.716045 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Mon Oct 27 06:24:35 -00 2025
Oct 27 08:27:50.716080 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e6ac205aca0358d0b739fe2cba6f8244850dbdc9027fd8e7442161fce065515e
Oct 27 08:27:50.716093 kernel: BIOS-provided physical RAM map:
Oct 27 08:27:50.716102 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000002ffff] usable
Oct 27 08:27:50.716110 kernel: BIOS-e820: [mem 0x0000000000030000-0x000000000004ffff] reserved
Oct 27 08:27:50.716122 kernel: BIOS-e820: [mem 0x0000000000050000-0x000000000009efff] usable
Oct 27 08:27:50.716133 kernel: BIOS-e820: [mem 0x000000000009f000-0x000000000009ffff] reserved
Oct 27 08:27:50.716143 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009b8ecfff] usable
Oct 27 08:27:50.716157 kernel: BIOS-e820: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
Oct 27 08:27:50.716166 kernel: BIOS-e820: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
Oct 27 08:27:50.716175 kernel: BIOS-e820: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
Oct 27 08:27:50.716184 kernel: BIOS-e820: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
Oct 27 08:27:50.716194 kernel: BIOS-e820: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
Oct 27 08:27:50.716207 kernel: BIOS-e820: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
Oct 27 08:27:50.716218 kernel: BIOS-e820: [mem 0x000000009bfb7000-0x000000009bffffff] usable
Oct 27 08:27:50.716228 kernel: BIOS-e820: [mem 0x000000009c000000-0x000000009cffffff] reserved
Oct 27 08:27:50.716241 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Oct 27 08:27:50.716254 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 27 08:27:50.716264 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Oct 27 08:27:50.716274 kernel: NX (Execute Disable) protection: active
Oct 27 08:27:50.716284 kernel: APIC: Static calls initialized
Oct 27 08:27:50.716294 kernel: e820: update [mem 0x9a13d018-0x9a146c57] usable ==> usable
Oct 27 08:27:50.716304 kernel: e820: update [mem 0x9a100018-0x9a13ce57] usable ==> usable
Oct 27 08:27:50.716314 kernel: extended physical RAM map:
Oct 27 08:27:50.716324 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000002ffff] usable
Oct 27 08:27:50.716345 kernel: reserve setup_data: [mem 0x0000000000030000-0x000000000004ffff] reserved
Oct 27 08:27:50.716355 kernel: reserve setup_data: [mem 0x0000000000050000-0x000000000009efff] usable
Oct 27 08:27:50.716365 kernel: reserve setup_data: [mem 0x000000000009f000-0x000000000009ffff] reserved
Oct 27 08:27:50.716379 kernel: reserve setup_data: [mem 0x0000000000100000-0x000000009a100017] usable
Oct 27 08:27:50.716388 kernel: reserve setup_data: [mem 0x000000009a100018-0x000000009a13ce57] usable
Oct 27 08:27:50.716398 kernel: reserve setup_data: [mem 0x000000009a13ce58-0x000000009a13d017] usable
Oct 27 08:27:50.716408 kernel: reserve setup_data: [mem 0x000000009a13d018-0x000000009a146c57] usable
Oct 27 08:27:50.716417 kernel: reserve setup_data: [mem 0x000000009a146c58-0x000000009b8ecfff] usable
Oct 27 08:27:50.716427 kernel: reserve setup_data: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
Oct 27 08:27:50.716436 kernel: reserve setup_data: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
Oct 27 08:27:50.716446 kernel: reserve setup_data: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
Oct 27 08:27:50.716456 kernel: reserve setup_data: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
Oct 27 08:27:50.716465 kernel: reserve setup_data: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
Oct 27 08:27:50.716478 kernel: reserve setup_data: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
Oct 27 08:27:50.716488 kernel: reserve setup_data: [mem 0x000000009bfb7000-0x000000009bffffff] usable
Oct 27 08:27:50.716512 kernel: reserve setup_data: [mem 0x000000009c000000-0x000000009cffffff] reserved
Oct 27 08:27:50.716537 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Oct 27 08:27:50.716550 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 27 08:27:50.716582 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Oct 27 08:27:50.716612 kernel: efi: EFI v2.7 by EDK II
Oct 27 08:27:50.716649 kernel: efi: SMBIOS=0x9b9d5000 ACPI=0x9bb7e000 ACPI 2.0=0x9bb7e014 MEMATTR=0x9a1af018 RNG=0x9bb73018
Oct 27 08:27:50.716671 kernel: random: crng init done
Oct 27 08:27:50.716682 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
Oct 27 08:27:50.716692 kernel: secureboot: Secure boot enabled
Oct 27 08:27:50.716711 kernel: SMBIOS 2.8 present.
Oct 27 08:27:50.716740 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Oct 27 08:27:50.716753 kernel: DMI: Memory slots populated: 1/1
Oct 27 08:27:50.716768 kernel: Hypervisor detected: KVM
Oct 27 08:27:50.716779 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000
Oct 27 08:27:50.716789 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 27 08:27:50.716800 kernel: kvm-clock: using sched offset of 5817546351 cycles
Oct 27 08:27:50.716811 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 27 08:27:50.716822 kernel: tsc: Detected 2794.750 MHz processor
Oct 27 08:27:50.716833 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 27 08:27:50.716845 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 27 08:27:50.716871 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000
Oct 27 08:27:50.716895 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Oct 27 08:27:50.716910 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Oct 27 08:27:50.716923 kernel: Using GB pages for direct mapping
Oct 27 08:27:50.716950 kernel: ACPI: Early table checksum verification disabled
Oct 27 08:27:50.716963 kernel: ACPI: RSDP 0x000000009BB7E014 000024 (v02 BOCHS )
Oct 27 08:27:50.716974 kernel: ACPI: XSDT 0x000000009BB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Oct 27 08:27:50.716986 kernel: ACPI: FACP 0x000000009BB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Oct 27 08:27:50.717001 kernel: ACPI: DSDT 0x000000009BB7A000 002237 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 27 08:27:50.717012 kernel: ACPI: FACS 0x000000009BBDD000 000040
Oct 27 08:27:50.717037 kernel: ACPI: APIC 0x000000009BB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 27 08:27:50.717065 kernel: ACPI: HPET 0x000000009BB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 27 08:27:50.717077 kernel: ACPI: MCFG 0x000000009BB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 27 08:27:50.717088 kernel: ACPI: WAET 0x000000009BB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 27 08:27:50.717100 kernel: ACPI: BGRT 0x000000009BB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Oct 27 08:27:50.717115 kernel: ACPI: Reserving FACP table memory at [mem 0x9bb79000-0x9bb790f3]
Oct 27 08:27:50.717126 kernel: ACPI: Reserving DSDT table memory at [mem 0x9bb7a000-0x9bb7c236]
Oct 27 08:27:50.717137 kernel: ACPI: Reserving FACS table memory at [mem 0x9bbdd000-0x9bbdd03f]
Oct 27 08:27:50.717148 kernel: ACPI: Reserving APIC table memory at [mem 0x9bb78000-0x9bb7808f]
Oct 27 08:27:50.717158 kernel: ACPI: Reserving HPET table memory at [mem 0x9bb77000-0x9bb77037]
Oct 27 08:27:50.717169 kernel: ACPI: Reserving MCFG table memory at [mem 0x9bb76000-0x9bb7603b]
Oct 27 08:27:50.717180 kernel: ACPI: Reserving WAET table memory at [mem 0x9bb75000-0x9bb75027]
Oct 27 08:27:50.717190 kernel: ACPI: Reserving BGRT table memory at [mem 0x9bb74000-0x9bb74037]
Oct 27 08:27:50.717204 kernel: No NUMA configuration found
Oct 27 08:27:50.717215 kernel: Faking a node at [mem 0x0000000000000000-0x000000009bffffff]
Oct 27 08:27:50.717225 kernel: NODE_DATA(0) allocated [mem 0x9bf57dc0-0x9bf5efff]
Oct 27 08:27:50.717236 kernel: Zone ranges:
Oct 27 08:27:50.717247 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Oct 27 08:27:50.717258 kernel: DMA32 [mem 0x0000000001000000-0x000000009bffffff]
Oct 27 08:27:50.717268 kernel: Normal empty
Oct 27 08:27:50.717282 kernel: Device empty
Oct 27 08:27:50.717293 kernel: Movable zone start for each node
Oct 27 08:27:50.717318 kernel: Early memory node ranges
Oct 27 08:27:50.717333 kernel: node 0: [mem 0x0000000000001000-0x000000000002ffff]
Oct 27 08:27:50.717349 kernel: node 0: [mem 0x0000000000050000-0x000000000009efff]
Oct 27 08:27:50.717360 kernel: node 0: [mem 0x0000000000100000-0x000000009b8ecfff]
Oct 27 08:27:50.717371 kernel: node 0: [mem 0x000000009bbff000-0x000000009bfb0fff]
Oct 27 08:27:50.717381 kernel: node 0: [mem 0x000000009bfb7000-0x000000009bffffff]
Oct 27 08:27:50.717396 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009bffffff]
Oct 27 08:27:50.717407 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 27 08:27:50.717417 kernel: On node 0, zone DMA: 32 pages in unavailable ranges
Oct 27 08:27:50.717428 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct 27 08:27:50.717439 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Oct 27 08:27:50.717450 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Oct 27 08:27:50.717460 kernel: On node 0, zone DMA32: 16384 pages in unavailable ranges
Oct 27 08:27:50.717474 kernel: ACPI: PM-Timer IO Port: 0x608
Oct 27 08:27:50.717486 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 27 08:27:50.717497 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 27 08:27:50.717520 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 27 08:27:50.717556 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 27 08:27:50.717568 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 27 08:27:50.717578 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 27 08:27:50.717593 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 27 08:27:50.717605 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 27 08:27:50.717615 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Oct 27 08:27:50.717626 kernel: TSC deadline timer available
Oct 27 08:27:50.717637 kernel: CPU topo: Max. logical packages: 1
Oct 27 08:27:50.717648 kernel: CPU topo: Max. logical dies: 1
Oct 27 08:27:50.717669 kernel: CPU topo: Max. dies per package: 1
Oct 27 08:27:50.717680 kernel: CPU topo: Max. threads per core: 1
Oct 27 08:27:50.717704 kernel: CPU topo: Num. cores per package: 4
Oct 27 08:27:50.717715 kernel: CPU topo: Num. threads per package: 4
Oct 27 08:27:50.717731 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Oct 27 08:27:50.717743 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct 27 08:27:50.717754 kernel: kvm-guest: KVM setup pv remote TLB flush
Oct 27 08:27:50.717768 kernel: kvm-guest: setup PV sched yield
Oct 27 08:27:50.717782 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Oct 27 08:27:50.717793 kernel: Booting paravirtualized kernel on KVM
Oct 27 08:27:50.717804 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 27 08:27:50.717815 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Oct 27 08:27:50.717827 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Oct 27 08:27:50.717838 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Oct 27 08:27:50.717849 kernel: pcpu-alloc: [0] 0 1 2 3
Oct 27 08:27:50.717863 kernel: kvm-guest: PV spinlocks enabled
Oct 27 08:27:50.717874 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Oct 27 08:27:50.717887 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e6ac205aca0358d0b739fe2cba6f8244850dbdc9027fd8e7442161fce065515e
Oct 27 08:27:50.717898 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 27 08:27:50.717909 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 27 08:27:50.717920 kernel: Fallback order for Node 0: 0
Oct 27 08:27:50.717954 kernel: Built 1 zonelists, mobility grouping on. Total pages: 638054
Oct 27 08:27:50.717966 kernel: Policy zone: DMA32
Oct 27 08:27:50.717977 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 27 08:27:50.717988 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Oct 27 08:27:50.717999 kernel: ftrace: allocating 40092 entries in 157 pages
Oct 27 08:27:50.718010 kernel: ftrace: allocated 157 pages with 5 groups
Oct 27 08:27:50.718021 kernel: Dynamic Preempt: voluntary
Oct 27 08:27:50.718031 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 27 08:27:50.718047 kernel: rcu: RCU event tracing is enabled.
Oct 27 08:27:50.718058 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Oct 27 08:27:50.718072 kernel: Trampoline variant of Tasks RCU enabled.
Oct 27 08:27:50.718083 kernel: Rude variant of Tasks RCU enabled.
Oct 27 08:27:50.718094 kernel: Tracing variant of Tasks RCU enabled.
Oct 27 08:27:50.718105 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 27 08:27:50.718116 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Oct 27 08:27:50.718130 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 27 08:27:50.718141 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 27 08:27:50.718156 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 27 08:27:50.718167 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Oct 27 08:27:50.718178 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 27 08:27:50.718189 kernel: Console: colour dummy device 80x25
Oct 27 08:27:50.718200 kernel: printk: legacy console [ttyS0] enabled
Oct 27 08:27:50.718218 kernel: ACPI: Core revision 20240827
Oct 27 08:27:50.718230 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Oct 27 08:27:50.718241 kernel: APIC: Switch to symmetric I/O mode setup
Oct 27 08:27:50.718252 kernel: x2apic enabled
Oct 27 08:27:50.718263 kernel: APIC: Switched APIC routing to: physical x2apic
Oct 27 08:27:50.718278 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Oct 27 08:27:50.718290 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Oct 27 08:27:50.718305 kernel: kvm-guest: setup PV IPIs
Oct 27 08:27:50.718316 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Oct 27 08:27:50.718327 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Oct 27 08:27:50.718360 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Oct 27 08:27:50.718392 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct 27 08:27:50.718428 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct 27 08:27:50.718450 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct 27 08:27:50.718469 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 27 08:27:50.718493 kernel: Spectre V2 : Mitigation: Retpolines
Oct 27 08:27:50.718506 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Oct 27 08:27:50.718524 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Oct 27 08:27:50.718536 kernel: active return thunk: retbleed_return_thunk
Oct 27 08:27:50.718560 kernel: RETBleed: Mitigation: untrained return thunk
Oct 27 08:27:50.718589 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 27 08:27:50.718629 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 27 08:27:50.718651 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Oct 27 08:27:50.718671 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Oct 27 08:27:50.718683 kernel: active return thunk: srso_return_thunk
Oct 27 08:27:50.718703 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Oct 27 08:27:50.718715 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 27 08:27:50.718726 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 27 08:27:50.718752 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 27 08:27:50.718764 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Oct 27 08:27:50.718774 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Oct 27 08:27:50.718782 kernel: Freeing SMP alternatives memory: 32K
Oct 27 08:27:50.718791 kernel: pid_max: default: 32768 minimum: 301
Oct 27 08:27:50.718799 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Oct 27 08:27:50.718808 kernel: landlock: Up and running.
Oct 27 08:27:50.718819 kernel: SELinux: Initializing.
Oct 27 08:27:50.718836 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 27 08:27:50.718846 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 27 08:27:50.718854 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Oct 27 08:27:50.718863 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct 27 08:27:50.718871 kernel: ... version: 0
Oct 27 08:27:50.718883 kernel: ... bit width: 48
Oct 27 08:27:50.718902 kernel: ... generic registers: 6
Oct 27 08:27:50.718911 kernel: ... value mask: 0000ffffffffffff
Oct 27 08:27:50.718920 kernel: ... max period: 00007fffffffffff
Oct 27 08:27:50.718928 kernel: ... fixed-purpose events: 0
Oct 27 08:27:50.718952 kernel: ... event mask: 000000000000003f
Oct 27 08:27:50.718961 kernel: signal: max sigframe size: 1776
Oct 27 08:27:50.718969 kernel: rcu: Hierarchical SRCU implementation.
Oct 27 08:27:50.718982 kernel: rcu: Max phase no-delay instances is 400.
Oct 27 08:27:50.718990 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Oct 27 08:27:50.718999 kernel: smp: Bringing up secondary CPUs ...
Oct 27 08:27:50.719007 kernel: smpboot: x86: Booting SMP configuration:
Oct 27 08:27:50.719019 kernel: .... node #0, CPUs: #1 #2 #3
Oct 27 08:27:50.719027 kernel: smp: Brought up 1 node, 4 CPUs
Oct 27 08:27:50.719036 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Oct 27 08:27:50.719045 kernel: Memory: 2431744K/2552216K available (14336K kernel code, 2443K rwdata, 26064K rodata, 15964K init, 2080K bss, 114536K reserved, 0K cma-reserved)
Oct 27 08:27:50.719056 kernel: devtmpfs: initialized
Oct 27 08:27:50.719064 kernel: x86/mm: Memory block size: 128MB
Oct 27 08:27:50.719073 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bb7f000-0x9bbfefff] (524288 bytes)
Oct 27 08:27:50.719081 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bfb5000-0x9bfb6fff] (8192 bytes)
Oct 27 08:27:50.719090 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 27 08:27:50.719099 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Oct 27 08:27:50.719109 kernel: pinctrl core: initialized pinctrl subsystem
Oct 27 08:27:50.719118 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 27 08:27:50.719126 kernel: audit: initializing netlink subsys (disabled)
Oct 27 08:27:50.719135 kernel: audit: type=2000 audit(1761553667.192:1): state=initialized audit_enabled=0 res=1
Oct 27 08:27:50.719143 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 27 08:27:50.719152 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 27 08:27:50.719160 kernel: cpuidle: using governor menu
Oct 27 08:27:50.719169 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 27 08:27:50.719179 kernel: dca service started, version 1.12.1
Oct 27 08:27:50.719188 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Oct 27 08:27:50.719196 kernel: PCI: Using configuration type 1 for base access
Oct 27 08:27:50.719205 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 27 08:27:50.719214 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 27 08:27:50.719222 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct 27 08:27:50.719230 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 27 08:27:50.719241 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 27 08:27:50.719250 kernel: ACPI: Added _OSI(Module Device)
Oct 27 08:27:50.719258 kernel: ACPI: Added _OSI(Processor Device)
Oct 27 08:27:50.719266 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 27 08:27:50.719275 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 27 08:27:50.719284 kernel: ACPI: Interpreter enabled
Oct 27 08:27:50.719292 kernel: ACPI: PM: (supports S0 S5)
Oct 27 08:27:50.719303 kernel: ACPI: Using IOAPIC for interrupt routing
Oct 27 08:27:50.719311 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 27 08:27:50.719320 kernel: PCI: Using E820 reservations for host bridge windows
Oct 27 08:27:50.719328 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Oct 27 08:27:50.719337 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 27 08:27:50.719615 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 27 08:27:50.719879 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Oct 27 08:27:50.720108 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Oct 27 08:27:50.720209 kernel: PCI host bridge to bus 0000:00
Oct 27 08:27:50.720415 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Oct 27 08:27:50.720618 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Oct 27 08:27:50.720831 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 27 08:27:50.721073 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Oct 27 08:27:50.721379 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Oct 27 08:27:50.721565 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Oct 27 08:27:50.721739 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 27 08:27:50.722024 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Oct 27 08:27:50.722623 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Oct 27 08:27:50.722810 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Oct 27 08:27:50.723083 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Oct 27 08:27:50.723378 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Oct 27 08:27:50.723565 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 27 08:27:50.723805 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Oct 27 08:27:50.724141 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Oct 27 08:27:50.724397 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Oct 27 08:27:50.724585 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Oct 27 08:27:50.724888 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Oct 27 08:27:50.725185 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Oct 27 08:27:50.725610 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Oct 27 08:27:50.725825 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Oct 27 08:27:50.727914 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct 27 08:27:50.728178 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Oct 27 08:27:50.728388 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Oct 27 08:27:50.728617 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Oct 27 08:27:50.728843 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Oct 27 08:27:50.729218 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Oct 27 08:27:50.729581 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Oct 27 08:27:50.730142 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Oct 27 08:27:50.730333 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Oct 27 08:27:50.730544 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Oct 27 08:27:50.730774 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Oct 27 08:27:50.730984 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Oct 27 08:27:50.730999 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 27 08:27:50.731008 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 27 08:27:50.731017 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 27 08:27:50.731026 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 27 08:27:50.731040 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Oct 27 08:27:50.731049 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Oct 27 08:27:50.731058 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Oct 27 08:27:50.731076 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Oct 27 08:27:50.731085 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Oct 27 08:27:50.731109 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Oct 27 08:27:50.731127 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Oct 27 08:27:50.731455 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Oct 27 08:27:50.731483 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Oct 27 08:27:50.731496 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Oct 27 08:27:50.731513 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Oct 27 08:27:50.731531 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Oct 27 08:27:50.731556 kernel: iommu: Default domain type: Translated
Oct 27 08:27:50.731571 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 27 08:27:50.731595 kernel: efivars: Registered efivars operations
Oct 27 08:27:50.731604 kernel: PCI: Using ACPI for IRQ routing
Oct 27 08:27:50.731613 kernel: PCI: pci_cache_line_size set to 64 bytes
Oct 27 08:27:50.731623 kernel: e820: reserve RAM buffer [mem 0x0009f000-0x0009ffff]
Oct 27 08:27:50.731637 kernel: e820: reserve RAM buffer [mem 0x9a100018-0x9bffffff]
Oct 27 08:27:50.731646 kernel: e820: reserve RAM buffer [mem 0x9a13d018-0x9bffffff]
Oct 27 08:27:50.731665 kernel: e820: reserve RAM buffer [mem 0x9b8ed000-0x9bffffff]
Oct 27 08:27:50.731686 kernel: e820: reserve RAM buffer [mem 0x9bfb1000-0x9bffffff]
Oct 27 08:27:50.731911 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Oct 27 08:27:50.732142 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Oct 27 08:27:50.732320 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 27 08:27:50.732331 kernel: vgaarb: loaded
Oct 27 08:27:50.732340 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Oct 27 08:27:50.732349 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Oct 27 08:27:50.732367 kernel: clocksource: Switched to clocksource kvm-clock
Oct 27 08:27:50.732376 kernel: VFS: Disk quotas dquot_6.6.0
Oct 27 08:27:50.732386 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 27 08:27:50.732395 kernel: pnp: PnP ACPI init
Oct 27 08:27:50.732587 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Oct 27 08:27:50.732601 kernel: pnp: PnP ACPI: found 6 devices
Oct 27 08:27:50.732610 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 27 08:27:50.732627 kernel: NET: Registered PF_INET protocol family
Oct 27 08:27:50.732636 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 27 08:27:50.732645 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Oct 27 08:27:50.732655 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 27 08:27:50.732664 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 27 08:27:50.732673 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Oct 27 08:27:50.732682 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Oct 27 08:27:50.732704 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 27 08:27:50.732713 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 27 08:27:50.732723 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 27 08:27:50.732732 kernel: NET: Registered PF_XDP protocol family
Oct 27 08:27:50.732910 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Oct 27 08:27:50.733105 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Oct 27 08:27:50.733300 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Oct 27 08:27:50.733484 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Oct 27 08:27:50.733647 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 27 08:27:50.733821 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Oct 27 08:27:50.733997 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Oct 27 08:27:50.734197 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Oct 27 08:27:50.734211 kernel: PCI: CLS 0 bytes, default 64
Oct 27 08:27:50.734229 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Oct 27 08:27:50.734238 kernel: Initialise system trusted keyrings
Oct 27 08:27:50.734247 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Oct 27 08:27:50.734256 kernel: Key type asymmetric registered
Oct 27 08:27:50.734264 kernel: Asymmetric key parser 'x509' registered
Oct 27 08:27:50.734304 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Oct 27 08:27:50.734315 kernel: io scheduler mq-deadline registered
Oct 27 08:27:50.734326 kernel: io scheduler kyber registered
Oct 27 08:27:50.734335 kernel: io scheduler bfq registered
Oct 27 08:27:50.734345 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Oct 27 08:27:50.734355 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Oct 27 08:27:50.734364 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Oct 27 08:27:50.734373 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Oct 27 08:27:50.734382 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 27 08:27:50.734396 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 27 08:27:50.734405 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 27 08:27:50.734415 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 27 08:27:50.734424 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 27 08:27:50.734626 kernel: rtc_cmos 00:04: RTC can wake from S4
Oct 27 08:27:50.734642 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Oct 27 08:27:50.734827 kernel: rtc_cmos 00:04: registered as rtc0
Oct 27 08:27:50.735048 kernel: rtc_cmos 00:04: setting system clock to 2025-10-27T08:27:48 UTC (1761553668)
Oct 27 08:27:50.735263 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Oct 27 08:27:50.735284 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Oct 27 08:27:50.735294 kernel: efifb: probing for efifb
Oct 27 08:27:50.735304 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Oct 27 08:27:50.735313 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Oct 27 08:27:50.735326 kernel: efifb: scrolling: redraw
Oct 27 08:27:50.735336 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Oct 27 08:27:50.735345 kernel: Console: switching to colour frame buffer device 160x50
Oct 27 08:27:50.735356 kernel: fb0: EFI VGA frame buffer device
Oct 27 08:27:50.735365 kernel: pstore: Using crash dump compression: deflate
Oct 27 08:27:50.735376 kernel: pstore: Registered efi_pstore as persistent store backend
Oct 27 08:27:50.735391 kernel: NET: Registered PF_INET6 protocol family
Oct 27 08:27:50.735400 kernel: Segment Routing with IPv6
Oct 27 08:27:50.735409 kernel: In-situ OAM (IOAM) with IPv6
Oct 27 08:27:50.735418 kernel: NET: Registered PF_PACKET protocol family
Oct 27 08:27:50.735427 kernel: Key type dns_resolver registered
Oct 27 08:27:50.735439 kernel: IPI shorthand broadcast: enabled
Oct 27 08:27:50.735451 kernel: sched_clock: Marking stable (1308003345, 261009333)->(1726360495, -157347817)
Oct 27 08:27:50.735459 kernel: registered taskstats version 1
Oct 27 08:27:50.735469 kernel: Loading compiled-in X.509 certificates
Oct 27 08:27:50.735478 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: 6c7ef547b8d769f7afd2708799fb9c3145695bfb'
Oct 27 08:27:50.735487 kernel: Demotion targets for Node 0: null
Oct 27 08:27:50.735497 kernel: Key type .fscrypt registered
Oct 27 08:27:50.735505 kernel: Key type fscrypt-provisioning registered
Oct 27 08:27:50.735517 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 27 08:27:50.735526 kernel: ima: Allocated hash algorithm: sha1 Oct 27 08:27:50.735535 kernel: ima: No architecture policies found Oct 27 08:27:50.735543 kernel: clk: Disabling unused clocks Oct 27 08:27:50.735552 kernel: Freeing unused kernel image (initmem) memory: 15964K Oct 27 08:27:50.735561 kernel: Write protecting the kernel read-only data: 40960k Oct 27 08:27:50.735570 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Oct 27 08:27:50.735582 kernel: Run /init as init process Oct 27 08:27:50.735591 kernel: with arguments: Oct 27 08:27:50.735600 kernel: /init Oct 27 08:27:50.735609 kernel: with environment: Oct 27 08:27:50.735618 kernel: HOME=/ Oct 27 08:27:50.735627 kernel: TERM=linux Oct 27 08:27:50.735635 kernel: SCSI subsystem initialized Oct 27 08:27:50.735646 kernel: libata version 3.00 loaded. Oct 27 08:27:50.736036 kernel: ahci 0000:00:1f.2: version 3.0 Oct 27 08:27:50.736052 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Oct 27 08:27:50.737291 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Oct 27 08:27:50.737477 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Oct 27 08:27:50.737710 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Oct 27 08:27:50.738015 kernel: scsi host0: ahci Oct 27 08:27:50.738235 kernel: scsi host1: ahci Oct 27 08:27:50.738429 kernel: scsi host2: ahci Oct 27 08:27:50.738636 kernel: scsi host3: ahci Oct 27 08:27:50.738871 kernel: scsi host4: ahci Oct 27 08:27:50.740202 kernel: scsi host5: ahci Oct 27 08:27:50.740230 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 26 lpm-pol 1 Oct 27 08:27:50.740240 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 26 lpm-pol 1 Oct 27 08:27:50.740249 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 26 lpm-pol 1 Oct 27 08:27:50.740259 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 26 lpm-pol 1 Oct 27 08:27:50.740268 kernel: 
ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 26 lpm-pol 1 Oct 27 08:27:50.740277 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 26 lpm-pol 1 Oct 27 08:27:50.740289 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Oct 27 08:27:50.740298 kernel: ata1: SATA link down (SStatus 0 SControl 300) Oct 27 08:27:50.740307 kernel: ata6: SATA link down (SStatus 0 SControl 300) Oct 27 08:27:50.740316 kernel: ata2: SATA link down (SStatus 0 SControl 300) Oct 27 08:27:50.740325 kernel: ata5: SATA link down (SStatus 0 SControl 300) Oct 27 08:27:50.740341 kernel: ata4: SATA link down (SStatus 0 SControl 300) Oct 27 08:27:50.740350 kernel: ata3.00: LPM support broken, forcing max_power Oct 27 08:27:50.740363 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Oct 27 08:27:50.740372 kernel: ata3.00: applying bridge limits Oct 27 08:27:50.740381 kernel: ata3.00: LPM support broken, forcing max_power Oct 27 08:27:50.740397 kernel: ata3.00: configured for UDMA/100 Oct 27 08:27:50.740669 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Oct 27 08:27:50.740899 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Oct 27 08:27:50.741146 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Oct 27 08:27:50.741167 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 27 08:27:50.741177 kernel: GPT:16515071 != 27000831 Oct 27 08:27:50.741191 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 27 08:27:50.741215 kernel: GPT:16515071 != 27000831 Oct 27 08:27:50.741232 kernel: GPT: Use GNU Parted to correct GPT errors. 
Oct 27 08:27:50.741243 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 27 08:27:50.741256 kernel: Lockdown: modprobe: unsigned module loading is restricted; see man kernel_lockdown.7 Oct 27 08:27:50.742687 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Oct 27 08:27:50.742715 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Oct 27 08:27:50.742929 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Oct 27 08:27:50.742961 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 27 08:27:50.742974 kernel: device-mapper: uevent: version 1.0.3 Oct 27 08:27:50.742984 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Oct 27 08:27:50.742998 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Oct 27 08:27:50.743008 kernel: Lockdown: modprobe: unsigned module loading is restricted; see man kernel_lockdown.7 Oct 27 08:27:50.743017 kernel: Lockdown: modprobe: unsigned module loading is restricted; see man kernel_lockdown.7 Oct 27 08:27:50.743026 kernel: raid6: avx2x4 gen() 19662 MB/s Oct 27 08:27:50.743035 kernel: raid6: avx2x2 gen() 19911 MB/s Oct 27 08:27:50.743044 kernel: raid6: avx2x1 gen() 22092 MB/s Oct 27 08:27:50.743053 kernel: raid6: using algorithm avx2x1 gen() 22092 MB/s Oct 27 08:27:50.743062 kernel: raid6: .... 
xor() 15473 MB/s, rmw enabled Oct 27 08:27:50.743074 kernel: raid6: using avx2x2 recovery algorithm Oct 27 08:27:50.743084 kernel: Lockdown: modprobe: unsigned module loading is restricted; see man kernel_lockdown.7 Oct 27 08:27:50.743092 kernel: Lockdown: modprobe: unsigned module loading is restricted; see man kernel_lockdown.7 Oct 27 08:27:50.743104 kernel: Lockdown: modprobe: unsigned module loading is restricted; see man kernel_lockdown.7 Oct 27 08:27:50.743113 kernel: xor: automatically using best checksumming function avx Oct 27 08:27:50.743125 kernel: Lockdown: modprobe: unsigned module loading is restricted; see man kernel_lockdown.7 Oct 27 08:27:50.743134 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 27 08:27:50.743143 kernel: BTRFS: device fsid bf514789-bcec-4c15-ac9d-e4c3d19a42b2 devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (176) Oct 27 08:27:50.743155 kernel: BTRFS info (device dm-0): first mount of filesystem bf514789-bcec-4c15-ac9d-e4c3d19a42b2 Oct 27 08:27:50.743165 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Oct 27 08:27:50.743174 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 27 08:27:50.743183 kernel: BTRFS info (device dm-0): enabling free space tree Oct 27 08:27:50.743193 kernel: Lockdown: modprobe: unsigned module loading is restricted; see man kernel_lockdown.7 Oct 27 08:27:50.743201 kernel: loop: module loaded Oct 27 08:27:50.743210 kernel: loop0: detected capacity change from 0 to 100120 Oct 27 08:27:50.743224 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 27 08:27:50.743237 systemd[1]: Successfully made /usr/ read-only. 
Oct 27 08:27:50.743265 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 27 08:27:50.743284 systemd[1]: Detected virtualization kvm. Oct 27 08:27:50.743308 systemd[1]: Detected architecture x86-64. Oct 27 08:27:50.743334 systemd[1]: Running in initrd. Oct 27 08:27:50.743355 systemd[1]: No hostname configured, using default hostname. Oct 27 08:27:50.743377 systemd[1]: Hostname set to . Oct 27 08:27:50.743395 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Oct 27 08:27:50.743420 systemd[1]: Queued start job for default target initrd.target. Oct 27 08:27:50.743838 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Oct 27 08:27:50.743849 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 27 08:27:50.743890 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 27 08:27:50.743915 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Oct 27 08:27:50.743963 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 27 08:27:50.743992 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 27 08:27:50.744022 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Oct 27 08:27:50.744053 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 27 08:27:50.744078 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. 
Oct 27 08:27:50.744107 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Oct 27 08:27:50.744117 systemd[1]: Reached target paths.target - Path Units. Oct 27 08:27:50.744126 systemd[1]: Reached target slices.target - Slice Units. Oct 27 08:27:50.744136 systemd[1]: Reached target swap.target - Swaps. Oct 27 08:27:50.744145 systemd[1]: Reached target timers.target - Timer Units. Oct 27 08:27:50.744158 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 27 08:27:50.744168 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 27 08:27:50.744184 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 27 08:27:50.744195 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Oct 27 08:27:50.744204 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 27 08:27:50.744216 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 27 08:27:50.744228 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 27 08:27:50.744390 systemd[1]: Reached target sockets.target - Socket Units. Oct 27 08:27:50.744409 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 27 08:27:50.744419 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 27 08:27:50.744429 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 27 08:27:50.744452 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 27 08:27:50.744464 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Oct 27 08:27:50.744473 systemd[1]: Starting systemd-fsck-usr.service... 
Oct 27 08:27:50.744495 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 27 08:27:50.744507 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 27 08:27:50.744517 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 27 08:27:50.744532 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 27 08:27:50.744564 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 27 08:27:50.744574 systemd[1]: Finished systemd-fsck-usr.service. Oct 27 08:27:50.744593 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 27 08:27:50.744993 systemd-journald[311]: Collecting audit messages is disabled. Oct 27 08:27:50.745031 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 27 08:27:50.745041 kernel: Bridge firewalling registered Oct 27 08:27:50.745051 systemd-journald[311]: Journal started Oct 27 08:27:50.745075 systemd-journald[311]: Runtime Journal (/run/log/journal/01e5fa2ffa4a47749b5e57d9af4e7745) is 5.9M, max 47.9M, 41.9M free. Oct 27 08:27:50.742803 systemd-modules-load[313]: Inserted module 'br_netfilter' Oct 27 08:27:50.748365 systemd[1]: Started systemd-journald.service - Journal Service. Oct 27 08:27:50.750790 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 27 08:27:50.754405 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 27 08:27:50.759731 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 27 08:27:50.761456 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 27 08:27:50.764355 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Oct 27 08:27:50.770843 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 27 08:27:50.780364 systemd-tmpfiles[332]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Oct 27 08:27:50.780387 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 27 08:27:50.791609 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 27 08:27:50.795052 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 27 08:27:50.799074 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 27 08:27:50.802323 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 27 08:27:50.817207 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 27 08:27:50.823360 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 27 08:27:50.858276 dracut-cmdline[355]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e6ac205aca0358d0b739fe2cba6f8244850dbdc9027fd8e7442161fce065515e Oct 27 08:27:50.879979 systemd-resolved[347]: Positive Trust Anchors: Oct 27 08:27:50.880000 systemd-resolved[347]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 27 08:27:50.880004 systemd-resolved[347]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Oct 27 08:27:50.880036 systemd-resolved[347]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 27 08:27:50.914663 systemd-resolved[347]: Defaulting to hostname 'linux'. Oct 27 08:27:50.916667 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 27 08:27:50.918823 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 27 08:27:51.030996 kernel: Loading iSCSI transport class v2.0-870. Oct 27 08:27:51.045978 kernel: iscsi: registered transport (tcp) Oct 27 08:27:51.072578 kernel: iscsi: registered transport (qla4xxx) Oct 27 08:27:51.072614 kernel: QLogic iSCSI HBA Driver Oct 27 08:27:51.105806 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 27 08:27:51.145959 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 27 08:27:51.149526 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 27 08:27:51.241488 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 27 08:27:51.245376 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Oct 27 08:27:51.247323 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 27 08:27:51.316591 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Oct 27 08:27:51.321131 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 27 08:27:51.358849 systemd-udevd[594]: Using default interface naming scheme 'v257'. Oct 27 08:27:51.378733 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 27 08:27:51.382695 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 27 08:27:51.435263 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 27 08:27:51.446117 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 27 08:27:51.456362 dracut-pre-trigger[654]: rd.md=0: removing MD RAID activation Oct 27 08:27:51.498166 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Oct 27 08:27:51.504495 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 27 08:27:51.514998 systemd-networkd[698]: lo: Link UP Oct 27 08:27:51.515008 systemd-networkd[698]: lo: Gained carrier Oct 27 08:27:51.517560 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 27 08:27:51.519617 systemd[1]: Reached target network.target - Network. Oct 27 08:27:51.610665 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 27 08:27:51.614092 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 27 08:27:51.684779 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Oct 27 08:27:51.713988 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Oct 27 08:27:51.728174 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Oct 27 08:27:51.747902 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 27 08:27:51.755986 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Oct 27 08:27:51.764817 kernel: cryptd: max_cpu_qlen set to 1000 Oct 27 08:27:51.775984 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Oct 27 08:27:51.777968 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 27 08:27:51.778122 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 27 08:27:51.781107 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 27 08:27:51.790861 disk-uuid[769]: Primary Header is updated. Oct 27 08:27:51.790861 disk-uuid[769]: Secondary Entries is updated. Oct 27 08:27:51.790861 disk-uuid[769]: Secondary Header is updated. Oct 27 08:27:51.788606 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 27 08:27:51.794023 systemd-networkd[698]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 27 08:27:51.794039 systemd-networkd[698]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 27 08:27:51.798038 systemd-networkd[698]: eth0: Link UP Oct 27 08:27:51.799520 systemd-networkd[698]: eth0: Gained carrier Oct 27 08:27:51.799541 systemd-networkd[698]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 27 08:27:51.826182 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 27 08:27:51.826331 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 27 08:27:51.838769 kernel: AES CTR mode by8 optimization enabled Oct 27 08:27:51.840023 systemd-networkd[698]: eth0: DHCPv4 address 10.0.0.103/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 27 08:27:51.856916 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 27 08:27:51.934672 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Oct 27 08:27:51.946537 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 27 08:27:51.949455 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 27 08:27:51.951969 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 27 08:27:51.956096 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 27 08:27:51.959012 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 27 08:27:52.001803 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 27 08:27:52.884518 disk-uuid[771]: Warning: The kernel is still using the old partition table. Oct 27 08:27:52.884518 disk-uuid[771]: The new table will be used at the next reboot or after you Oct 27 08:27:52.884518 disk-uuid[771]: run partprobe(8) or kpartx(8) Oct 27 08:27:52.884518 disk-uuid[771]: The operation has completed successfully. Oct 27 08:27:52.896588 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 27 08:27:52.896767 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 27 08:27:52.900248 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 27 08:27:52.947259 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (858) Oct 27 08:27:52.947336 kernel: BTRFS info (device vda6): first mount of filesystem 3c7e1d30-69bc-4811-963d-029e55854883 Oct 27 08:27:52.947353 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 27 08:27:52.952544 kernel: BTRFS info (device vda6): turning on async discard Oct 27 08:27:52.952591 kernel: BTRFS info (device vda6): enabling free space tree Oct 27 08:27:52.960972 kernel: BTRFS info (device vda6): last unmount of filesystem 3c7e1d30-69bc-4811-963d-029e55854883 Oct 27 08:27:52.961521 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Oct 27 08:27:52.964534 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Oct 27 08:27:53.068609 ignition[877]: Ignition 2.22.0 Oct 27 08:27:53.068633 ignition[877]: Stage: fetch-offline Oct 27 08:27:53.068673 ignition[877]: no configs at "/usr/lib/ignition/base.d" Oct 27 08:27:53.068686 ignition[877]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 27 08:27:53.068776 ignition[877]: parsed url from cmdline: "" Oct 27 08:27:53.068784 ignition[877]: no config URL provided Oct 27 08:27:53.068789 ignition[877]: reading system config file "/usr/lib/ignition/user.ign" Oct 27 08:27:53.068801 ignition[877]: no config at "/usr/lib/ignition/user.ign" Oct 27 08:27:53.068850 ignition[877]: op(1): [started] loading QEMU firmware config module Oct 27 08:27:53.068856 ignition[877]: op(1): executing: "modprobe" "qemu_fw_cfg" Oct 27 08:27:53.077522 ignition[877]: op(1): [finished] loading QEMU firmware config module Oct 27 08:27:53.077542 ignition[877]: QEMU firmware config was not found. Ignoring... Oct 27 08:27:53.159322 ignition[877]: parsing config with SHA512: a87dcf22cd1239b0c0c2354dd87040d21065d953384b71201703d9401cf5175e98349241b3a86a3103f726fc9f523352380280fc1c12c6d93e9b3fad3bdb77b1 Oct 27 08:27:53.164525 unknown[877]: fetched base config from "system" Oct 27 08:27:53.164541 unknown[877]: fetched user config from "qemu" Oct 27 08:27:53.164908 ignition[877]: fetch-offline: fetch-offline passed Oct 27 08:27:53.164987 ignition[877]: Ignition finished successfully Oct 27 08:27:53.169045 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 27 08:27:53.172188 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Oct 27 08:27:53.173182 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Oct 27 08:27:53.215425 ignition[887]: Ignition 2.22.0 Oct 27 08:27:53.215440 ignition[887]: Stage: kargs Oct 27 08:27:53.215593 ignition[887]: no configs at "/usr/lib/ignition/base.d" Oct 27 08:27:53.215604 ignition[887]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 27 08:27:53.216324 ignition[887]: kargs: kargs passed Oct 27 08:27:53.221582 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 27 08:27:53.216371 ignition[887]: Ignition finished successfully Oct 27 08:27:53.224653 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Oct 27 08:27:53.260296 ignition[895]: Ignition 2.22.0 Oct 27 08:27:53.260310 ignition[895]: Stage: disks Oct 27 08:27:53.260454 ignition[895]: no configs at "/usr/lib/ignition/base.d" Oct 27 08:27:53.260465 ignition[895]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 27 08:27:53.264035 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 27 08:27:53.261161 ignition[895]: disks: disks passed Oct 27 08:27:53.264882 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 27 08:27:53.261208 ignition[895]: Ignition finished successfully Oct 27 08:27:53.265244 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 27 08:27:53.265802 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 27 08:27:53.273979 systemd[1]: Reached target sysinit.target - System Initialization. Oct 27 08:27:53.277502 systemd[1]: Reached target basic.target - Basic System. Oct 27 08:27:53.281308 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Oct 27 08:27:53.335134 systemd-fsck[905]: ROOT: clean, 15/456736 files, 38230/456704 blocks Oct 27 08:27:53.343159 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 27 08:27:53.348548 systemd[1]: Mounting sysroot.mount - /sysroot... 
Oct 27 08:27:53.465988 kernel: EXT4-fs (vda9): mounted filesystem e90e2fe3-e1db-4bff-abac-c8d1d032f674 r/w with ordered data mode. Quota mode: none. Oct 27 08:27:53.466562 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 27 08:27:53.467924 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 27 08:27:53.472904 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 27 08:27:53.474542 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 27 08:27:53.476550 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Oct 27 08:27:53.476590 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 27 08:27:53.476628 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 27 08:27:53.494142 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 27 08:27:53.497112 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Oct 27 08:27:53.505881 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (914) Oct 27 08:27:53.505913 kernel: BTRFS info (device vda6): first mount of filesystem 3c7e1d30-69bc-4811-963d-029e55854883 Oct 27 08:27:53.506005 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 27 08:27:53.509132 kernel: BTRFS info (device vda6): turning on async discard Oct 27 08:27:53.509155 kernel: BTRFS info (device vda6): enabling free space tree Oct 27 08:27:53.510469 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Oct 27 08:27:53.560765 initrd-setup-root[938]: cut: /sysroot/etc/passwd: No such file or directory Oct 27 08:27:53.566981 initrd-setup-root[945]: cut: /sysroot/etc/group: No such file or directory Oct 27 08:27:53.572887 initrd-setup-root[952]: cut: /sysroot/etc/shadow: No such file or directory Oct 27 08:27:53.578960 initrd-setup-root[959]: cut: /sysroot/etc/gshadow: No such file or directory Oct 27 08:27:53.680677 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 27 08:27:53.684342 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 27 08:27:53.686054 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 27 08:27:53.711009 kernel: BTRFS info (device vda6): last unmount of filesystem 3c7e1d30-69bc-4811-963d-029e55854883 Oct 27 08:27:53.731194 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Oct 27 08:27:53.744067 systemd-networkd[698]: eth0: Gained IPv6LL Oct 27 08:27:53.752524 ignition[1028]: INFO : Ignition 2.22.0 Oct 27 08:27:53.752524 ignition[1028]: INFO : Stage: mount Oct 27 08:27:53.755208 ignition[1028]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 27 08:27:53.755208 ignition[1028]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 27 08:27:53.755208 ignition[1028]: INFO : mount: mount passed Oct 27 08:27:53.755208 ignition[1028]: INFO : Ignition finished successfully Oct 27 08:27:53.763677 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 27 08:27:53.768417 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 27 08:27:53.936715 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 27 08:27:53.938664 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Oct 27 08:27:53.961966 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1040)
Oct 27 08:27:53.962021 kernel: BTRFS info (device vda6): first mount of filesystem 3c7e1d30-69bc-4811-963d-029e55854883
Oct 27 08:27:53.962035 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 27 08:27:53.967172 kernel: BTRFS info (device vda6): turning on async discard
Oct 27 08:27:53.967241 kernel: BTRFS info (device vda6): enabling free space tree
Oct 27 08:27:53.968965 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 27 08:27:54.011555 ignition[1057]: INFO : Ignition 2.22.0
Oct 27 08:27:54.011555 ignition[1057]: INFO : Stage: files
Oct 27 08:27:54.014084 ignition[1057]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 27 08:27:54.014084 ignition[1057]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 27 08:27:54.018088 ignition[1057]: DEBUG : files: compiled without relabeling support, skipping
Oct 27 08:27:54.020437 ignition[1057]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 27 08:27:54.020437 ignition[1057]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 27 08:27:54.028290 ignition[1057]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 27 08:27:54.030606 ignition[1057]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 27 08:27:54.033225 unknown[1057]: wrote ssh authorized keys file for user: core
Oct 27 08:27:54.034978 ignition[1057]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 27 08:27:54.034978 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Oct 27 08:27:54.034978 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Oct 27 08:27:54.069345 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Oct 27 08:27:54.140824 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Oct 27 08:27:54.144072 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Oct 27 08:27:54.144072 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Oct 27 08:27:54.144072 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 27 08:27:54.144072 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 27 08:27:54.144072 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 27 08:27:54.144072 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 27 08:27:54.144072 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 27 08:27:54.144072 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 27 08:27:54.167141 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 27 08:27:54.167141 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 27 08:27:54.167141 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Oct 27 08:27:54.167141 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Oct 27 08:27:54.167141 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Oct 27 08:27:54.167141 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Oct 27 08:27:54.447064 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Oct 27 08:27:54.845172 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Oct 27 08:27:54.845172 ignition[1057]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Oct 27 08:27:54.850893 ignition[1057]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 27 08:27:54.858040 ignition[1057]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 27 08:27:54.858040 ignition[1057]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Oct 27 08:27:54.858040 ignition[1057]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Oct 27 08:27:54.865352 ignition[1057]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 27 08:27:54.865352 ignition[1057]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 27 08:27:54.865352 ignition[1057]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Oct 27 08:27:54.865352 ignition[1057]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Oct 27 08:27:54.894864 ignition[1057]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Oct 27 08:27:54.900230 ignition[1057]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Oct 27 08:27:54.902807 ignition[1057]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Oct 27 08:27:54.902807 ignition[1057]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Oct 27 08:27:54.902807 ignition[1057]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Oct 27 08:27:54.902807 ignition[1057]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 27 08:27:54.902807 ignition[1057]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 27 08:27:54.902807 ignition[1057]: INFO : files: files passed
Oct 27 08:27:54.902807 ignition[1057]: INFO : Ignition finished successfully
Oct 27 08:27:54.909097 systemd[1]: Finished ignition-files.service - Ignition (files).
Oct 27 08:27:54.912006 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Oct 27 08:27:54.920552 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Oct 27 08:27:54.941831 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 27 08:27:54.942049 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 27 08:27:54.950724 initrd-setup-root-after-ignition[1088]: grep: /sysroot/oem/oem-release: No such file or directory
Oct 27 08:27:54.953868 initrd-setup-root-after-ignition[1090]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 27 08:27:54.953868 initrd-setup-root-after-ignition[1090]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 27 08:27:54.959814 initrd-setup-root-after-ignition[1094]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 27 08:27:54.964315 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 27 08:27:54.965456 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 27 08:27:54.966716 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 27 08:27:55.034572 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 27 08:27:55.034712 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 27 08:27:55.035795 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 27 08:27:55.036433 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 27 08:27:55.044199 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 27 08:27:55.045210 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 27 08:27:55.084314 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 27 08:27:55.086690 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 27 08:27:55.117357 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Oct 27 08:27:55.117502 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 27 08:27:55.121262 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 27 08:27:55.122508 systemd[1]: Stopped target timers.target - Timer Units.
Oct 27 08:27:55.127652 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 27 08:27:55.127790 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 27 08:27:55.132864 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 27 08:27:55.133748 systemd[1]: Stopped target basic.target - Basic System.
Oct 27 08:27:55.138602 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 27 08:27:55.141344 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 27 08:27:55.144426 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Oct 27 08:27:55.148042 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Oct 27 08:27:55.151434 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Oct 27 08:27:55.154853 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 27 08:27:55.159319 systemd[1]: Stopped target sysinit.target - System Initialization.
Oct 27 08:27:55.160481 systemd[1]: Stopped target local-fs.target - Local File Systems.
Oct 27 08:27:55.164892 systemd[1]: Stopped target swap.target - Swaps.
Oct 27 08:27:55.167921 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 27 08:27:55.168061 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Oct 27 08:27:55.172926 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Oct 27 08:27:55.173779 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 27 08:27:55.179541 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Oct 27 08:27:55.181370 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 27 08:27:55.184821 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 27 08:27:55.184958 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 27 08:27:55.186241 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 27 08:27:55.186357 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 27 08:27:55.192564 systemd[1]: Stopped target paths.target - Path Units.
Oct 27 08:27:55.195613 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 27 08:27:55.198257 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 27 08:27:55.201626 systemd[1]: Stopped target slices.target - Slice Units.
Oct 27 08:27:55.204675 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 27 08:27:55.208418 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 27 08:27:55.208526 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 27 08:27:55.209598 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 27 08:27:55.209684 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 27 08:27:55.213515 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 27 08:27:55.213653 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 27 08:27:55.216096 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 27 08:27:55.216209 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 27 08:27:55.221771 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 27 08:27:55.229897 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 27 08:27:55.232659 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 27 08:27:55.232787 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 27 08:27:55.236194 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 27 08:27:55.236305 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 27 08:27:55.237659 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 27 08:27:55.237764 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 27 08:27:55.253424 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 27 08:27:55.253565 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 27 08:27:55.267534 ignition[1114]: INFO : Ignition 2.22.0
Oct 27 08:27:55.267534 ignition[1114]: INFO : Stage: umount
Oct 27 08:27:55.270119 ignition[1114]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 27 08:27:55.270119 ignition[1114]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 27 08:27:55.270119 ignition[1114]: INFO : umount: umount passed
Oct 27 08:27:55.270119 ignition[1114]: INFO : Ignition finished successfully
Oct 27 08:27:55.271165 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 27 08:27:55.271362 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 27 08:27:55.272853 systemd[1]: Stopped target network.target - Network.
Oct 27 08:27:55.273495 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 27 08:27:55.273570 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 27 08:27:55.274371 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 27 08:27:55.274436 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 27 08:27:55.282536 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 27 08:27:55.282611 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 27 08:27:55.285694 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 27 08:27:55.285755 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 27 08:27:55.286618 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 27 08:27:55.293596 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 27 08:27:55.297662 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 27 08:27:55.301851 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 27 08:27:55.302035 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 27 08:27:55.307754 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 27 08:27:55.307899 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 27 08:27:55.312778 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Oct 27 08:27:55.313762 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 27 08:27:55.313837 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 27 08:27:55.317992 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 27 08:27:55.320690 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 27 08:27:55.320754 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 27 08:27:55.324672 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 27 08:27:55.324731 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 27 08:27:55.325776 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 27 08:27:55.325827 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 27 08:27:55.334842 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 27 08:27:55.353475 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 27 08:27:55.353611 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 27 08:27:55.356258 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 27 08:27:55.356372 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 27 08:27:55.369867 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 27 08:27:55.376433 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 27 08:27:55.377792 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 27 08:27:55.377842 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 27 08:27:55.383805 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 27 08:27:55.383845 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 27 08:27:55.384678 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 27 08:27:55.384732 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 27 08:27:55.391861 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 27 08:27:55.391914 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 27 08:27:55.396469 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 27 08:27:55.396522 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 27 08:27:55.402262 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 27 08:27:55.402656 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Oct 27 08:27:55.402730 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Oct 27 08:27:55.403536 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 27 08:27:55.403597 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 27 08:27:55.411019 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Oct 27 08:27:55.411079 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 27 08:27:55.411544 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 27 08:27:55.411599 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 27 08:27:55.418696 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 27 08:27:55.418750 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 27 08:27:55.422869 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 27 08:27:55.427386 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 27 08:27:55.431693 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 27 08:27:55.431812 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 27 08:27:55.436933 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 27 08:27:55.441718 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 27 08:27:55.467504 systemd[1]: Switching root.
Oct 27 08:27:55.503279 systemd-journald[311]: Journal stopped
Oct 27 08:27:57.060143 systemd-journald[311]: Received SIGTERM from PID 1 (systemd).
Oct 27 08:27:57.060216 kernel: SELinux: policy capability network_peer_controls=1
Oct 27 08:27:57.060231 kernel: SELinux: policy capability open_perms=1
Oct 27 08:27:57.060244 kernel: SELinux: policy capability extended_socket_class=1
Oct 27 08:27:57.060285 kernel: SELinux: policy capability always_check_network=0
Oct 27 08:27:57.060299 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 27 08:27:57.060316 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 27 08:27:57.060329 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 27 08:27:57.060341 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 27 08:27:57.060356 kernel: SELinux: policy capability userspace_initial_context=0
Oct 27 08:27:57.060374 kernel: audit: type=1403 audit(1761553676.185:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 27 08:27:57.060397 systemd[1]: Successfully loaded SELinux policy in 70.149ms.
Oct 27 08:27:57.060418 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.352ms.
Oct 27 08:27:57.060432 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Oct 27 08:27:57.060445 systemd[1]: Detected virtualization kvm.
Oct 27 08:27:57.060458 systemd[1]: Detected architecture x86-64.
Oct 27 08:27:57.060471 systemd[1]: Detected first boot.
Oct 27 08:27:57.060487 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Oct 27 08:27:57.060516 zram_generator::config[1158]: No configuration found.
Oct 27 08:27:57.060530 kernel: Guest personality initialized and is inactive
Oct 27 08:27:57.060543 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Oct 27 08:27:57.060555 kernel: Initialized host personality
Oct 27 08:27:57.060567 kernel: NET: Registered PF_VSOCK protocol family
Oct 27 08:27:57.060580 systemd[1]: Populated /etc with preset unit settings.
Oct 27 08:27:57.060593 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 27 08:27:57.060614 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Oct 27 08:27:57.060628 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 27 08:27:57.060643 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 27 08:27:57.060656 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 27 08:27:57.060669 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 27 08:27:57.060686 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 27 08:27:57.060708 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 27 08:27:57.060726 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 27 08:27:57.060739 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 27 08:27:57.060757 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 27 08:27:57.060770 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 27 08:27:57.060784 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 27 08:27:57.060797 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 27 08:27:57.060818 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 27 08:27:57.060832 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 27 08:27:57.060845 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 27 08:27:57.060859 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Oct 27 08:27:57.060872 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 27 08:27:57.060885 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 27 08:27:57.060906 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Oct 27 08:27:57.060920 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Oct 27 08:27:57.060933 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Oct 27 08:27:57.060966 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 27 08:27:57.060979 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 27 08:27:57.060992 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 27 08:27:57.061005 systemd[1]: Reached target slices.target - Slice Units.
Oct 27 08:27:57.061017 systemd[1]: Reached target swap.target - Swaps.
Oct 27 08:27:57.061040 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 27 08:27:57.061054 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 27 08:27:57.061066 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Oct 27 08:27:57.061079 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 27 08:27:57.061092 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 27 08:27:57.061107 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 27 08:27:57.061120 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 27 08:27:57.061142 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 27 08:27:57.061155 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 27 08:27:57.061167 systemd[1]: Mounting media.mount - External Media Directory...
Oct 27 08:27:57.061181 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 27 08:27:57.061193 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 27 08:27:57.061206 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 27 08:27:57.061219 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 27 08:27:57.061241 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 27 08:27:57.061254 systemd[1]: Reached target machines.target - Containers.
Oct 27 08:27:57.061267 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 27 08:27:57.061280 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 27 08:27:57.061293 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 27 08:27:57.061307 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 27 08:27:57.061328 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 27 08:27:57.061341 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 27 08:27:57.061354 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 27 08:27:57.061367 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 27 08:27:57.061380 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 27 08:27:57.061394 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 27 08:27:57.061407 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 27 08:27:57.061428 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Oct 27 08:27:57.061441 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Oct 27 08:27:57.061454 systemd[1]: Stopped systemd-fsck-usr.service.
Oct 27 08:27:57.061467 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 27 08:27:57.061480 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 27 08:27:57.061507 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 27 08:27:57.061521 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 27 08:27:57.061543 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 27 08:27:57.061556 kernel: ACPI: bus type drm_connector registered
Oct 27 08:27:57.061568 kernel: fuse: init (API version 7.41)
Oct 27 08:27:57.061581 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Oct 27 08:27:57.061594 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 27 08:27:57.061616 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 27 08:27:57.061629 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 27 08:27:57.061643 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 27 08:27:57.061656 systemd[1]: Mounted media.mount - External Media Directory.
Oct 27 08:27:57.061688 systemd-journald[1240]: Collecting audit messages is disabled.
Oct 27 08:27:57.061721 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 27 08:27:57.061734 systemd-journald[1240]: Journal started
Oct 27 08:27:57.061758 systemd-journald[1240]: Runtime Journal (/run/log/journal/01e5fa2ffa4a47749b5e57d9af4e7745) is 5.9M, max 47.9M, 41.9M free.
Oct 27 08:27:56.748379 systemd[1]: Queued start job for default target multi-user.target.
Oct 27 08:27:56.768092 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Oct 27 08:27:56.768648 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 27 08:27:57.065962 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 27 08:27:57.068636 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 27 08:27:57.070719 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 27 08:27:57.072685 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 27 08:27:57.075163 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 27 08:27:57.077774 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 27 08:27:57.078013 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 27 08:27:57.080280 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 27 08:27:57.080509 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 27 08:27:57.082666 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 27 08:27:57.082884 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 27 08:27:57.084918 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 27 08:27:57.085247 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 27 08:27:57.087549 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 27 08:27:57.087771 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 27 08:27:57.089846 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 27 08:27:57.090180 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 27 08:27:57.092463 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 27 08:27:57.094728 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 27 08:27:57.097823 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 27 08:27:57.100334 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Oct 27 08:27:57.117307 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 27 08:27:57.119930 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Oct 27 08:27:57.123307 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Oct 27 08:27:57.126212 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Oct 27 08:27:57.128165 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 27 08:27:57.128261 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 27 08:27:57.130917 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Oct 27 08:27:57.133424 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 27 08:27:57.142101 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 27 08:27:57.145072 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 27 08:27:57.147260 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 27 08:27:57.148531 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Oct 27 08:27:57.150508 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 27 08:27:57.152704 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 27 08:27:57.156587 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Oct 27 08:27:57.160647 systemd-journald[1240]: Time spent on flushing to /var/log/journal/01e5fa2ffa4a47749b5e57d9af4e7745 is 19.827ms for 1034 entries. Oct 27 08:27:57.160647 systemd-journald[1240]: System Journal (/var/log/journal/01e5fa2ffa4a47749b5e57d9af4e7745) is 8M, max 163.5M, 155.5M free. Oct 27 08:27:57.194171 systemd-journald[1240]: Received client request to flush runtime journal. Oct 27 08:27:57.194230 kernel: loop1: detected capacity change from 0 to 229808 Oct 27 08:27:57.160809 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 27 08:27:57.165448 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 27 08:27:57.170384 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Oct 27 08:27:57.173991 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Oct 27 08:27:57.179437 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Oct 27 08:27:57.182308 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Oct 27 08:27:57.190204 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Oct 27 08:27:57.192793 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Oct 27 08:27:57.196478 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Oct 27 08:27:57.206175 systemd-tmpfiles[1278]: ACLs are not supported, ignoring. Oct 27 08:27:57.206193 systemd-tmpfiles[1278]: ACLs are not supported, ignoring. Oct 27 08:27:57.212678 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 27 08:27:57.216529 systemd[1]: Starting systemd-sysusers.service - Create System Users... Oct 27 08:27:57.218509 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Oct 27 08:27:57.223113 kernel: loop2: detected capacity change from 0 to 110984 Oct 27 08:27:57.247965 kernel: loop3: detected capacity change from 0 to 128048 Oct 27 08:27:57.254205 systemd[1]: Finished systemd-sysusers.service - Create System Users. Oct 27 08:27:57.258518 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 27 08:27:57.262323 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 27 08:27:57.270104 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 27 08:27:57.280988 kernel: loop4: detected capacity change from 0 to 229808 Oct 27 08:27:57.288970 kernel: loop5: detected capacity change from 0 to 110984 Oct 27 08:27:57.295075 systemd-tmpfiles[1300]: ACLs are not supported, ignoring. Oct 27 08:27:57.295371 systemd-tmpfiles[1300]: ACLs are not supported, ignoring. Oct 27 08:27:57.298971 kernel: loop6: detected capacity change from 0 to 128048 Oct 27 08:27:57.301052 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 27 08:27:57.309681 (sd-merge)[1302]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Oct 27 08:27:57.314217 (sd-merge)[1302]: Merged extensions into '/usr'. Oct 27 08:27:57.315535 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Oct 27 08:27:57.323230 systemd[1]: Reload requested from client PID 1277 ('systemd-sysext') (unit systemd-sysext.service)... Oct 27 08:27:57.323251 systemd[1]: Reloading... Oct 27 08:27:57.393997 zram_generator::config[1337]: No configuration found. Oct 27 08:27:57.409018 systemd-resolved[1299]: Positive Trust Anchors: Oct 27 08:27:57.409386 systemd-resolved[1299]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 27 08:27:57.409438 systemd-resolved[1299]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Oct 27 08:27:57.409518 systemd-resolved[1299]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 27 08:27:57.413313 systemd-resolved[1299]: Defaulting to hostname 'linux'. Oct 27 08:27:57.592736 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 27 08:27:57.592842 systemd[1]: Reloading finished in 269 ms. Oct 27 08:27:57.625752 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 27 08:27:57.628638 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 27 08:27:57.634165 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 27 08:27:57.664323 systemd[1]: Starting ensure-sysext.service... Oct 27 08:27:57.667492 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Oct 27 08:27:57.686673 systemd-tmpfiles[1374]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Oct 27 08:27:57.686722 systemd-tmpfiles[1374]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Oct 27 08:27:57.686804 systemd[1]: Reload requested from client PID 1373 ('systemctl') (unit ensure-sysext.service)... Oct 27 08:27:57.686821 systemd[1]: Reloading... Oct 27 08:27:57.687083 systemd-tmpfiles[1374]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 27 08:27:57.687356 systemd-tmpfiles[1374]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 27 08:27:57.688396 systemd-tmpfiles[1374]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 27 08:27:57.688682 systemd-tmpfiles[1374]: ACLs are not supported, ignoring. Oct 27 08:27:57.688750 systemd-tmpfiles[1374]: ACLs are not supported, ignoring. Oct 27 08:27:57.694375 systemd-tmpfiles[1374]: Detected autofs mount point /boot during canonicalization of boot. Oct 27 08:27:57.694387 systemd-tmpfiles[1374]: Skipping /boot Oct 27 08:27:57.709604 systemd-tmpfiles[1374]: Detected autofs mount point /boot during canonicalization of boot. Oct 27 08:27:57.709619 systemd-tmpfiles[1374]: Skipping /boot Oct 27 08:27:57.753120 zram_generator::config[1404]: No configuration found. Oct 27 08:27:57.927715 systemd[1]: Reloading finished in 240 ms. Oct 27 08:27:57.954784 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 27 08:27:57.984848 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 27 08:27:57.995988 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 27 08:27:57.998623 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
Oct 27 08:27:58.001785 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Oct 27 08:27:58.013985 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 27 08:27:58.021422 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 27 08:27:58.026175 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 27 08:27:58.031952 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 27 08:27:58.032127 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 27 08:27:58.033489 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 27 08:27:58.043478 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 27 08:27:58.047785 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 27 08:27:58.049734 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 27 08:27:58.049845 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 27 08:27:58.049965 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 27 08:27:58.052228 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 27 08:27:58.057350 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 27 08:27:58.063267 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Oct 27 08:27:58.063492 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 27 08:27:58.071274 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 27 08:27:58.073162 systemd-udevd[1448]: Using default interface naming scheme 'v257'. Oct 27 08:27:58.073841 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 27 08:27:58.074087 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 27 08:27:58.085425 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 27 08:27:58.085645 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 27 08:27:58.088150 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 27 08:27:58.094183 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 27 08:27:58.099391 augenrules[1478]: No rules Oct 27 08:27:58.101353 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 27 08:27:58.103361 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 27 08:27:58.103495 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 27 08:27:58.103608 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 27 08:27:58.104802 systemd[1]: audit-rules.service: Deactivated successfully. Oct 27 08:27:58.105624 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 27 08:27:58.108366 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Oct 27 08:27:58.112450 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 27 08:27:58.115091 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 27 08:27:58.117675 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 27 08:27:58.117897 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 27 08:27:58.122567 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 27 08:27:58.122819 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 27 08:27:58.127135 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 27 08:27:58.138515 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 27 08:27:58.147437 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 27 08:27:58.148766 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 27 08:27:58.150580 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 27 08:27:58.151877 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 27 08:27:58.155215 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 27 08:27:58.159903 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 27 08:27:58.173147 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 27 08:27:58.175181 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 27 08:27:58.175227 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Oct 27 08:27:58.178298 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 27 08:27:58.181116 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 27 08:27:58.181149 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 27 08:27:58.181770 systemd[1]: Finished ensure-sysext.service. Oct 27 08:27:58.193814 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Oct 27 08:27:58.196452 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 27 08:27:58.196725 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 27 08:27:58.203424 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 27 08:27:58.203679 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 27 08:27:58.206132 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 27 08:27:58.206362 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 27 08:27:58.210473 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 27 08:27:58.210698 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 27 08:27:58.224504 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 27 08:27:58.224795 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 27 08:27:58.229696 augenrules[1499]: /sbin/augenrules: No change Oct 27 08:27:58.229668 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
Oct 27 08:27:58.248484 augenrules[1542]: No rules Oct 27 08:27:58.254574 systemd[1]: audit-rules.service: Deactivated successfully. Oct 27 08:27:58.255420 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 27 08:27:58.295446 kernel: mousedev: PS/2 mouse device common for all mice Oct 27 08:27:58.321318 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Oct 27 08:27:58.323641 systemd[1]: Reached target time-set.target - System Time Set. Oct 27 08:27:58.330415 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Oct 27 08:27:58.333880 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 27 08:27:58.339264 kernel: ACPI: button: Power Button [PWRF] Oct 27 08:27:58.338169 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 27 08:27:58.351759 systemd-networkd[1513]: lo: Link UP Oct 27 08:27:58.351776 systemd-networkd[1513]: lo: Gained carrier Oct 27 08:27:58.355850 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 27 08:27:58.358134 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Oct 27 08:27:58.358930 systemd-networkd[1513]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 27 08:27:58.358954 systemd-networkd[1513]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Oct 27 08:27:58.362652 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Oct 27 08:27:58.362901 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Oct 27 08:27:58.362097 systemd-networkd[1513]: eth0: Link UP Oct 27 08:27:58.362381 systemd-networkd[1513]: eth0: Gained carrier Oct 27 08:27:58.362395 systemd-networkd[1513]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 27 08:27:58.363398 systemd[1]: Reached target network.target - Network. Oct 27 08:27:58.367170 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Oct 27 08:27:58.373191 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 27 08:27:58.375786 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 27 08:27:58.377988 systemd-networkd[1513]: eth0: DHCPv4 address 10.0.0.103/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 27 08:27:58.381702 systemd-timesyncd[1522]: Network configuration changed, trying to establish connection. Oct 27 08:27:58.384414 systemd-timesyncd[1522]: Contacted time server 10.0.0.1:123 (10.0.0.1). Oct 27 08:27:58.384486 systemd-timesyncd[1522]: Initial clock synchronization to Mon 2025-10-27 08:27:58.512118 UTC. Oct 27 08:27:58.445449 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. 
Oct 27 08:27:58.646477 kernel: kvm_amd: TSC scaling supported Oct 27 08:27:58.646584 kernel: kvm_amd: Nested Virtualization enabled Oct 27 08:27:58.646602 kernel: kvm_amd: Nested Paging enabled Oct 27 08:27:58.648913 kernel: kvm_amd: LBR virtualization supported Oct 27 08:27:58.648953 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Oct 27 08:27:58.649093 kernel: kvm_amd: Virtual GIF supported Oct 27 08:27:58.666202 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 27 08:27:58.676255 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 27 08:27:58.676553 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 27 08:27:58.681397 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 27 08:27:58.718985 kernel: EDAC MC: Ver: 3.0.0 Oct 27 08:27:58.764005 ldconfig[1445]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 27 08:27:58.771512 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 27 08:27:58.774020 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 27 08:27:58.780307 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 27 08:27:58.801531 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 27 08:27:58.803566 systemd[1]: Reached target sysinit.target - System Initialization. Oct 27 08:27:58.805379 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 27 08:27:58.807402 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 27 08:27:58.809423 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Oct 27 08:27:58.811485 systemd[1]: Started logrotate.timer - Daily rotation of log files. 
Oct 27 08:27:58.813322 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 27 08:27:58.815332 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 27 08:27:58.817348 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 27 08:27:58.817386 systemd[1]: Reached target paths.target - Path Units. Oct 27 08:27:58.818855 systemd[1]: Reached target timers.target - Timer Units. Oct 27 08:27:58.821323 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 27 08:27:58.825427 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 27 08:27:58.830039 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Oct 27 08:27:58.832293 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Oct 27 08:27:58.834330 systemd[1]: Reached target ssh-access.target - SSH Access Available. Oct 27 08:27:58.839893 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 27 08:27:58.841931 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Oct 27 08:27:58.844463 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 27 08:27:58.846907 systemd[1]: Reached target sockets.target - Socket Units. Oct 27 08:27:58.848482 systemd[1]: Reached target basic.target - Basic System. Oct 27 08:27:58.850026 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 27 08:27:58.850058 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 27 08:27:58.851213 systemd[1]: Starting containerd.service - containerd container runtime... Oct 27 08:27:58.854105 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
Oct 27 08:27:58.856627 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 27 08:27:58.859544 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 27 08:27:58.862173 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 27 08:27:58.862813 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 27 08:27:58.863919 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Oct 27 08:27:58.880210 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 27 08:27:58.885324 jq[1596]: false Oct 27 08:27:58.885041 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 27 08:27:58.887033 google_oslogin_nss_cache[1598]: oslogin_cache_refresh[1598]: Refreshing passwd entry cache Oct 27 08:27:58.886976 oslogin_cache_refresh[1598]: Refreshing passwd entry cache Oct 27 08:27:58.889055 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 27 08:27:58.894653 google_oslogin_nss_cache[1598]: oslogin_cache_refresh[1598]: Failure getting users, quitting Oct 27 08:27:58.894653 google_oslogin_nss_cache[1598]: oslogin_cache_refresh[1598]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Oct 27 08:27:58.894653 google_oslogin_nss_cache[1598]: oslogin_cache_refresh[1598]: Refreshing group entry cache Oct 27 08:27:58.894485 oslogin_cache_refresh[1598]: Failure getting users, quitting Oct 27 08:27:58.894508 oslogin_cache_refresh[1598]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Oct 27 08:27:58.894562 oslogin_cache_refresh[1598]: Refreshing group entry cache Oct 27 08:27:58.898889 extend-filesystems[1597]: Found /dev/vda6 Oct 27 08:27:58.897211 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Oct 27 08:27:58.902342 oslogin_cache_refresh[1598]: Failure getting groups, quitting Oct 27 08:27:58.904403 extend-filesystems[1597]: Found /dev/vda9 Oct 27 08:27:58.905586 google_oslogin_nss_cache[1598]: oslogin_cache_refresh[1598]: Failure getting groups, quitting Oct 27 08:27:58.905586 google_oslogin_nss_cache[1598]: oslogin_cache_refresh[1598]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Oct 27 08:27:58.902352 oslogin_cache_refresh[1598]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Oct 27 08:27:58.906218 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 27 08:27:58.907138 extend-filesystems[1597]: Checking size of /dev/vda9 Oct 27 08:27:58.907858 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 27 08:27:58.908422 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 27 08:27:58.909165 systemd[1]: Starting update-engine.service - Update Engine... Oct 27 08:27:58.912619 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 27 08:27:58.918467 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 27 08:27:58.920802 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 27 08:27:58.925875 extend-filesystems[1597]: Resized partition /dev/vda9 Oct 27 08:27:58.926792 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 27 08:27:58.927188 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Oct 27 08:27:58.927434 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. 
Oct 27 08:27:58.932663 jq[1616]: true Oct 27 08:27:58.933467 extend-filesystems[1624]: resize2fs 1.47.3 (8-Jul-2025) Oct 27 08:27:58.936458 systemd[1]: motdgen.service: Deactivated successfully. Oct 27 08:27:58.936756 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 27 08:27:58.940760 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Oct 27 08:27:58.958652 update_engine[1611]: I20251027 08:27:58.958551 1611 main.cc:92] Flatcar Update Engine starting Oct 27 08:27:58.962964 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Oct 27 08:27:58.986857 extend-filesystems[1624]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 27 08:27:58.986857 extend-filesystems[1624]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 27 08:27:58.986857 extend-filesystems[1624]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Oct 27 08:27:58.995324 extend-filesystems[1597]: Resized filesystem in /dev/vda9 Oct 27 08:27:58.999700 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 27 08:27:59.000038 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 27 08:27:59.004457 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 27 08:27:59.004729 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 27 08:27:59.019779 (ntainerd)[1635]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 27 08:27:59.026081 jq[1633]: true Oct 27 08:27:59.050973 tar[1631]: linux-amd64/LICENSE Oct 27 08:27:59.050973 tar[1631]: linux-amd64/helm Oct 27 08:27:59.063755 dbus-daemon[1594]: [system] SELinux support is enabled Oct 27 08:27:59.063975 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Oct 27 08:27:59.068788 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 27 08:27:59.068831 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 27 08:27:59.072653 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 27 08:27:59.072674 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 27 08:27:59.077316 systemd[1]: Started update-engine.service - Update Engine. Oct 27 08:27:59.077600 systemd-logind[1609]: Watching system buttons on /dev/input/event2 (Power Button) Oct 27 08:27:59.077915 systemd-logind[1609]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 27 08:27:59.079716 update_engine[1611]: I20251027 08:27:59.079315 1611 update_check_scheduler.cc:74] Next update check in 3m22s Oct 27 08:27:59.079902 systemd-logind[1609]: New seat seat0. Oct 27 08:27:59.082744 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 27 08:27:59.085226 systemd[1]: Started systemd-logind.service - User Login Management. Oct 27 08:27:59.136983 bash[1664]: Updated "/home/core/.ssh/authorized_keys" Oct 27 08:27:59.139006 locksmithd[1657]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 27 08:27:59.139978 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 27 08:27:59.143278 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Oct 27 08:27:59.174658 sshd_keygen[1621]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 27 08:27:59.203799 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
Oct 27 08:27:59.207573 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 27 08:27:59.223609 systemd[1]: issuegen.service: Deactivated successfully. Oct 27 08:27:59.223899 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 27 08:27:59.229828 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 27 08:27:59.252328 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 27 08:27:59.256992 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 27 08:27:59.260593 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Oct 27 08:27:59.263187 systemd[1]: Reached target getty.target - Login Prompts. Oct 27 08:27:59.284732 containerd[1635]: time="2025-10-27T08:27:59Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Oct 27 08:27:59.285609 containerd[1635]: time="2025-10-27T08:27:59.285555028Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Oct 27 08:27:59.294308 containerd[1635]: time="2025-10-27T08:27:59.294255303Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.331µs" Oct 27 08:27:59.294308 containerd[1635]: time="2025-10-27T08:27:59.294282984Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Oct 27 08:27:59.294308 containerd[1635]: time="2025-10-27T08:27:59.294301435Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Oct 27 08:27:59.295348 containerd[1635]: time="2025-10-27T08:27:59.294458705Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Oct 27 08:27:59.295348 containerd[1635]: time="2025-10-27T08:27:59.294480064Z" level=info msg="loading 
plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Oct 27 08:27:59.295348 containerd[1635]: time="2025-10-27T08:27:59.294503756Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 27 08:27:59.295348 containerd[1635]: time="2025-10-27T08:27:59.294579569Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 27 08:27:59.295348 containerd[1635]: time="2025-10-27T08:27:59.294592414Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 27 08:27:59.295348 containerd[1635]: time="2025-10-27T08:27:59.294859409Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 27 08:27:59.295348 containerd[1635]: time="2025-10-27T08:27:59.294872830Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 27 08:27:59.295348 containerd[1635]: time="2025-10-27T08:27:59.294883949Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 27 08:27:59.295348 containerd[1635]: time="2025-10-27T08:27:59.294892119Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Oct 27 08:27:59.295348 containerd[1635]: time="2025-10-27T08:27:59.294999924Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Oct 27 08:27:59.295348 containerd[1635]: time="2025-10-27T08:27:59.295227261Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 Oct 27 08:27:59.295676 containerd[1635]: time="2025-10-27T08:27:59.295257063Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 27 08:27:59.295676 containerd[1635]: time="2025-10-27T08:27:59.295267758Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Oct 27 08:27:59.295676 containerd[1635]: time="2025-10-27T08:27:59.295318535Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Oct 27 08:27:59.295676 containerd[1635]: time="2025-10-27T08:27:59.295574471Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Oct 27 08:27:59.295676 containerd[1635]: time="2025-10-27T08:27:59.295642870Z" level=info msg="metadata content store policy set" policy=shared Oct 27 08:27:59.302971 containerd[1635]: time="2025-10-27T08:27:59.302269843Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Oct 27 08:27:59.302971 containerd[1635]: time="2025-10-27T08:27:59.302317874Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Oct 27 08:27:59.302971 containerd[1635]: time="2025-10-27T08:27:59.302333447Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Oct 27 08:27:59.302971 containerd[1635]: time="2025-10-27T08:27:59.302345454Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Oct 27 08:27:59.302971 containerd[1635]: time="2025-10-27T08:27:59.302363955Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Oct 27 08:27:59.302971 containerd[1635]: time="2025-10-27T08:27:59.302375569Z" level=info 
msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Oct 27 08:27:59.302971 containerd[1635]: time="2025-10-27T08:27:59.302395767Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Oct 27 08:27:59.302971 containerd[1635]: time="2025-10-27T08:27:59.302425124Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Oct 27 08:27:59.302971 containerd[1635]: time="2025-10-27T08:27:59.302437950Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Oct 27 08:27:59.302971 containerd[1635]: time="2025-10-27T08:27:59.302448948Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Oct 27 08:27:59.302971 containerd[1635]: time="2025-10-27T08:27:59.302458511Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Oct 27 08:27:59.302971 containerd[1635]: time="2025-10-27T08:27:59.302470902Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Oct 27 08:27:59.302971 containerd[1635]: time="2025-10-27T08:27:59.302593452Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Oct 27 08:27:59.302971 containerd[1635]: time="2025-10-27T08:27:59.302612429Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Oct 27 08:27:59.303238 containerd[1635]: time="2025-10-27T08:27:59.302626406Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Oct 27 08:27:59.303238 containerd[1635]: time="2025-10-27T08:27:59.302636797Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Oct 27 08:27:59.303238 containerd[1635]: time="2025-10-27T08:27:59.302648209Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Oct 27 08:27:59.303238 containerd[1635]: time="2025-10-27T08:27:59.302658469Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Oct 27 08:27:59.303238 containerd[1635]: time="2025-10-27T08:27:59.302669770Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Oct 27 08:27:59.303238 containerd[1635]: time="2025-10-27T08:27:59.302680010Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Oct 27 08:27:59.303238 containerd[1635]: time="2025-10-27T08:27:59.302691018Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Oct 27 08:27:59.303238 containerd[1635]: time="2025-10-27T08:27:59.302706439Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Oct 27 08:27:59.303238 containerd[1635]: time="2025-10-27T08:27:59.302717608Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Oct 27 08:27:59.303238 containerd[1635]: time="2025-10-27T08:27:59.302786513Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Oct 27 08:27:59.303238 containerd[1635]: time="2025-10-27T08:27:59.302800642Z" level=info msg="Start snapshots syncer" Oct 27 08:27:59.303238 containerd[1635]: time="2025-10-27T08:27:59.302855974Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Oct 27 08:27:59.303658 containerd[1635]: time="2025-10-27T08:27:59.303617268Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Oct 27 08:27:59.303822 containerd[1635]: time="2025-10-27T08:27:59.303805753Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Oct 27 08:27:59.304002 containerd[1635]: time="2025-10-27T08:27:59.303984120Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Oct 27 08:27:59.304201 containerd[1635]: time="2025-10-27T08:27:59.304173101Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Oct 27 08:27:59.304263 containerd[1635]: time="2025-10-27T08:27:59.304251196Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Oct 27 08:27:59.304316 containerd[1635]: time="2025-10-27T08:27:59.304305548Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Oct 27 08:27:59.304374 containerd[1635]: time="2025-10-27T08:27:59.304360648Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Oct 27 08:27:59.304440 containerd[1635]: time="2025-10-27T08:27:59.304426037Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Oct 27 08:27:59.304490 containerd[1635]: time="2025-10-27T08:27:59.304478864Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Oct 27 08:27:59.304537 containerd[1635]: time="2025-10-27T08:27:59.304526936Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Oct 27 08:27:59.304614 containerd[1635]: time="2025-10-27T08:27:59.304600364Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Oct 27 08:27:59.304663 containerd[1635]: time="2025-10-27T08:27:59.304652525Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Oct 27 08:27:59.304709 containerd[1635]: time="2025-10-27T08:27:59.304698254Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Oct 27 08:27:59.304800 containerd[1635]: time="2025-10-27T08:27:59.304785750Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 27 08:27:59.304899 containerd[1635]: time="2025-10-27T08:27:59.304877650Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 27 08:27:59.304962 containerd[1635]: time="2025-10-27T08:27:59.304936446Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 27 08:27:59.305012 containerd[1635]: time="2025-10-27T08:27:59.304999240Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 27 08:27:59.305055 containerd[1635]: time="2025-10-27T08:27:59.305044817Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Oct 27 08:27:59.305109 containerd[1635]: time="2025-10-27T08:27:59.305095806Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Oct 27 08:27:59.305156 containerd[1635]: time="2025-10-27T08:27:59.305145755Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Oct 27 08:27:59.305223 containerd[1635]: time="2025-10-27T08:27:59.305211883Z" level=info msg="runtime interface created" Oct 27 08:27:59.305265 containerd[1635]: time="2025-10-27T08:27:59.305255995Z" level=info msg="created NRI interface" Oct 27 08:27:59.305308 containerd[1635]: time="2025-10-27T08:27:59.305297612Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Oct 27 08:27:59.305351 containerd[1635]: time="2025-10-27T08:27:59.305341694Z" level=info msg="Connect containerd service" Oct 27 08:27:59.305426 containerd[1635]: time="2025-10-27T08:27:59.305411033Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 27 08:27:59.306381 
containerd[1635]: time="2025-10-27T08:27:59.306358803Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 27 08:27:59.377348 tar[1631]: linux-amd64/README.md Oct 27 08:27:59.396385 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 27 08:27:59.427748 containerd[1635]: time="2025-10-27T08:27:59.427645483Z" level=info msg="Start subscribing containerd event" Oct 27 08:27:59.427886 containerd[1635]: time="2025-10-27T08:27:59.427756057Z" level=info msg="Start recovering state" Oct 27 08:27:59.428245 containerd[1635]: time="2025-10-27T08:27:59.428163466Z" level=info msg="Start event monitor" Oct 27 08:27:59.428245 containerd[1635]: time="2025-10-27T08:27:59.428220878Z" level=info msg="Start cni network conf syncer for default" Oct 27 08:27:59.428245 containerd[1635]: time="2025-10-27T08:27:59.428236854Z" level=info msg="Start streaming server" Oct 27 08:27:59.428245 containerd[1635]: time="2025-10-27T08:27:59.428247539Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Oct 27 08:27:59.428612 containerd[1635]: time="2025-10-27T08:27:59.428257648Z" level=info msg="runtime interface starting up..." Oct 27 08:27:59.428612 containerd[1635]: time="2025-10-27T08:27:59.428272322Z" level=info msg="starting plugins..." Oct 27 08:27:59.428612 containerd[1635]: time="2025-10-27T08:27:59.428292984Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Oct 27 08:27:59.428612 containerd[1635]: time="2025-10-27T08:27:59.428354102Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 27 08:27:59.428612 containerd[1635]: time="2025-10-27T08:27:59.428431874Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Oct 27 08:27:59.428612 containerd[1635]: time="2025-10-27T08:27:59.428492416Z" level=info msg="containerd successfully booted in 0.144267s" Oct 27 08:27:59.428638 systemd[1]: Started containerd.service - containerd container runtime. Oct 27 08:27:59.568220 systemd-networkd[1513]: eth0: Gained IPv6LL Oct 27 08:27:59.571262 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 27 08:27:59.573739 systemd[1]: Reached target network-online.target - Network is Online. Oct 27 08:27:59.576987 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Oct 27 08:27:59.580006 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 27 08:27:59.582905 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 27 08:27:59.607748 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 27 08:27:59.608092 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Oct 27 08:27:59.610700 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 27 08:27:59.613768 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 27 08:28:00.321008 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 27 08:28:00.323681 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 27 08:28:00.325740 (kubelet)[1734]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 27 08:28:00.325974 systemd[1]: Startup finished in 3.029s (kernel) + 5.862s (initrd) + 4.209s (userspace) = 13.100s. 
Oct 27 08:28:00.753475 kubelet[1734]: E1027 08:28:00.753387 1734 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 27 08:28:00.757879 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 27 08:28:00.758131 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 27 08:28:00.758538 systemd[1]: kubelet.service: Consumed 998ms CPU time, 267.7M memory peak. Oct 27 08:28:02.283159 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 27 08:28:02.284388 systemd[1]: Started sshd@0-10.0.0.103:22-10.0.0.1:34214.service - OpenSSH per-connection server daemon (10.0.0.1:34214). Oct 27 08:28:02.370513 sshd[1748]: Accepted publickey for core from 10.0.0.1 port 34214 ssh2: RSA SHA256:qPirkUcjN75oY8dUHO+4QhJKykg4rAWrvzikFQdbBAc Oct 27 08:28:02.372611 sshd-session[1748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:28:02.383034 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 27 08:28:02.384331 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 27 08:28:02.386210 systemd-logind[1609]: New session 1 of user core. Oct 27 08:28:02.417251 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 27 08:28:02.419690 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 27 08:28:02.437356 (systemd)[1753]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 27 08:28:02.439517 systemd-logind[1609]: New session c1 of user core. Oct 27 08:28:02.581739 systemd[1753]: Queued start job for default target default.target. 
Oct 27 08:28:02.596216 systemd[1753]: Created slice app.slice - User Application Slice. Oct 27 08:28:02.596243 systemd[1753]: Reached target paths.target - Paths. Oct 27 08:28:02.596286 systemd[1753]: Reached target timers.target - Timers. Oct 27 08:28:02.597784 systemd[1753]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 27 08:28:02.609134 systemd[1753]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 27 08:28:02.609259 systemd[1753]: Reached target sockets.target - Sockets. Oct 27 08:28:02.609302 systemd[1753]: Reached target basic.target - Basic System. Oct 27 08:28:02.609345 systemd[1753]: Reached target default.target - Main User Target. Oct 27 08:28:02.609378 systemd[1753]: Startup finished in 162ms. Oct 27 08:28:02.609817 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 27 08:28:02.611577 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 27 08:28:02.673668 systemd[1]: Started sshd@1-10.0.0.103:22-10.0.0.1:34218.service - OpenSSH per-connection server daemon (10.0.0.1:34218). Oct 27 08:28:02.742087 sshd[1764]: Accepted publickey for core from 10.0.0.1 port 34218 ssh2: RSA SHA256:qPirkUcjN75oY8dUHO+4QhJKykg4rAWrvzikFQdbBAc Oct 27 08:28:02.743379 sshd-session[1764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:28:02.747799 systemd-logind[1609]: New session 2 of user core. Oct 27 08:28:02.765111 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 27 08:28:02.819009 sshd[1767]: Connection closed by 10.0.0.1 port 34218 Oct 27 08:28:02.819456 sshd-session[1764]: pam_unix(sshd:session): session closed for user core Oct 27 08:28:02.833688 systemd[1]: sshd@1-10.0.0.103:22-10.0.0.1:34218.service: Deactivated successfully. Oct 27 08:28:02.835565 systemd[1]: session-2.scope: Deactivated successfully. Oct 27 08:28:02.836374 systemd-logind[1609]: Session 2 logged out. Waiting for processes to exit. 
Oct 27 08:28:02.839401 systemd[1]: Started sshd@2-10.0.0.103:22-10.0.0.1:34220.service - OpenSSH per-connection server daemon (10.0.0.1:34220). Oct 27 08:28:02.840001 systemd-logind[1609]: Removed session 2. Oct 27 08:28:02.902520 sshd[1773]: Accepted publickey for core from 10.0.0.1 port 34220 ssh2: RSA SHA256:qPirkUcjN75oY8dUHO+4QhJKykg4rAWrvzikFQdbBAc Oct 27 08:28:02.903768 sshd-session[1773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:28:02.908366 systemd-logind[1609]: New session 3 of user core. Oct 27 08:28:02.923098 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 27 08:28:02.972777 sshd[1776]: Connection closed by 10.0.0.1 port 34220 Oct 27 08:28:02.973071 sshd-session[1773]: pam_unix(sshd:session): session closed for user core Oct 27 08:28:02.993050 systemd[1]: sshd@2-10.0.0.103:22-10.0.0.1:34220.service: Deactivated successfully. Oct 27 08:28:02.995146 systemd[1]: session-3.scope: Deactivated successfully. Oct 27 08:28:02.995934 systemd-logind[1609]: Session 3 logged out. Waiting for processes to exit. Oct 27 08:28:02.998786 systemd[1]: Started sshd@3-10.0.0.103:22-10.0.0.1:45154.service - OpenSSH per-connection server daemon (10.0.0.1:45154). Oct 27 08:28:02.999649 systemd-logind[1609]: Removed session 3. Oct 27 08:28:03.065843 sshd[1782]: Accepted publickey for core from 10.0.0.1 port 45154 ssh2: RSA SHA256:qPirkUcjN75oY8dUHO+4QhJKykg4rAWrvzikFQdbBAc Oct 27 08:28:03.067312 sshd-session[1782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:28:03.072096 systemd-logind[1609]: New session 4 of user core. Oct 27 08:28:03.081078 systemd[1]: Started session-4.scope - Session 4 of User core. 
Oct 27 08:28:03.136571 sshd[1785]: Connection closed by 10.0.0.1 port 45154 Oct 27 08:28:03.136914 sshd-session[1782]: pam_unix(sshd:session): session closed for user core Oct 27 08:28:03.150373 systemd[1]: sshd@3-10.0.0.103:22-10.0.0.1:45154.service: Deactivated successfully. Oct 27 08:28:03.152465 systemd[1]: session-4.scope: Deactivated successfully. Oct 27 08:28:03.153186 systemd-logind[1609]: Session 4 logged out. Waiting for processes to exit. Oct 27 08:28:03.156267 systemd[1]: Started sshd@4-10.0.0.103:22-10.0.0.1:45168.service - OpenSSH per-connection server daemon (10.0.0.1:45168). Oct 27 08:28:03.156772 systemd-logind[1609]: Removed session 4. Oct 27 08:28:03.218648 sshd[1791]: Accepted publickey for core from 10.0.0.1 port 45168 ssh2: RSA SHA256:qPirkUcjN75oY8dUHO+4QhJKykg4rAWrvzikFQdbBAc Oct 27 08:28:03.220058 sshd-session[1791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:28:03.225014 systemd-logind[1609]: New session 5 of user core. Oct 27 08:28:03.239090 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 27 08:28:03.303435 sudo[1795]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 27 08:28:03.303756 sudo[1795]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 27 08:28:03.321187 sudo[1795]: pam_unix(sudo:session): session closed for user root Oct 27 08:28:03.323047 sshd[1794]: Connection closed by 10.0.0.1 port 45168 Oct 27 08:28:03.323380 sshd-session[1791]: pam_unix(sshd:session): session closed for user core Oct 27 08:28:03.332845 systemd[1]: sshd@4-10.0.0.103:22-10.0.0.1:45168.service: Deactivated successfully. Oct 27 08:28:03.334731 systemd[1]: session-5.scope: Deactivated successfully. Oct 27 08:28:03.335525 systemd-logind[1609]: Session 5 logged out. Waiting for processes to exit. Oct 27 08:28:03.338367 systemd[1]: Started sshd@5-10.0.0.103:22-10.0.0.1:45184.service - OpenSSH per-connection server daemon (10.0.0.1:45184). 
Oct 27 08:28:03.338975 systemd-logind[1609]: Removed session 5. Oct 27 08:28:03.388248 sshd[1801]: Accepted publickey for core from 10.0.0.1 port 45184 ssh2: RSA SHA256:qPirkUcjN75oY8dUHO+4QhJKykg4rAWrvzikFQdbBAc Oct 27 08:28:03.389413 sshd-session[1801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:28:03.393838 systemd-logind[1609]: New session 6 of user core. Oct 27 08:28:03.404080 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 27 08:28:03.459612 sudo[1807]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 27 08:28:03.460017 sudo[1807]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 27 08:28:03.585248 sudo[1807]: pam_unix(sudo:session): session closed for user root Oct 27 08:28:03.593582 sudo[1806]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Oct 27 08:28:03.593895 sudo[1806]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 27 08:28:03.605440 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 27 08:28:03.655607 augenrules[1829]: No rules Oct 27 08:28:03.657839 systemd[1]: audit-rules.service: Deactivated successfully. Oct 27 08:28:03.658142 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 27 08:28:03.659408 sudo[1806]: pam_unix(sudo:session): session closed for user root Oct 27 08:28:03.661381 sshd[1805]: Connection closed by 10.0.0.1 port 45184 Oct 27 08:28:03.661725 sshd-session[1801]: pam_unix(sshd:session): session closed for user core Oct 27 08:28:03.669893 systemd[1]: sshd@5-10.0.0.103:22-10.0.0.1:45184.service: Deactivated successfully. Oct 27 08:28:03.672915 systemd[1]: session-6.scope: Deactivated successfully. Oct 27 08:28:03.674812 systemd-logind[1609]: Session 6 logged out. Waiting for processes to exit. 
Oct 27 08:28:03.677974 systemd[1]: Started sshd@6-10.0.0.103:22-10.0.0.1:45194.service - OpenSSH per-connection server daemon (10.0.0.1:45194). Oct 27 08:28:03.679846 systemd-logind[1609]: Removed session 6. Oct 27 08:28:03.745427 sshd[1838]: Accepted publickey for core from 10.0.0.1 port 45194 ssh2: RSA SHA256:qPirkUcjN75oY8dUHO+4QhJKykg4rAWrvzikFQdbBAc Oct 27 08:28:03.747282 sshd-session[1838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:28:03.753142 systemd-logind[1609]: New session 7 of user core. Oct 27 08:28:03.764513 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 27 08:28:03.829843 sudo[1842]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 27 08:28:03.830317 sudo[1842]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 27 08:28:04.573312 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 27 08:28:04.604296 (dockerd)[1862]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 27 08:28:05.274521 dockerd[1862]: time="2025-10-27T08:28:05.274416639Z" level=info msg="Starting up" Oct 27 08:28:05.275517 dockerd[1862]: time="2025-10-27T08:28:05.275488375Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Oct 27 08:28:05.298812 dockerd[1862]: time="2025-10-27T08:28:05.298741274Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Oct 27 08:28:06.083730 dockerd[1862]: time="2025-10-27T08:28:06.083660322Z" level=info msg="Loading containers: start." Oct 27 08:28:06.096994 kernel: Initializing XFRM netlink socket Oct 27 08:28:06.411004 systemd-networkd[1513]: docker0: Link UP Oct 27 08:28:06.417993 dockerd[1862]: time="2025-10-27T08:28:06.417947507Z" level=info msg="Loading containers: done." 
Oct 27 08:28:06.434841 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2753257210-merged.mount: Deactivated successfully. Oct 27 08:28:06.435704 dockerd[1862]: time="2025-10-27T08:28:06.435658887Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 27 08:28:06.435766 dockerd[1862]: time="2025-10-27T08:28:06.435747733Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Oct 27 08:28:06.435894 dockerd[1862]: time="2025-10-27T08:28:06.435862458Z" level=info msg="Initializing buildkit" Oct 27 08:28:06.468856 dockerd[1862]: time="2025-10-27T08:28:06.468776889Z" level=info msg="Completed buildkit initialization" Oct 27 08:28:06.475399 dockerd[1862]: time="2025-10-27T08:28:06.475364565Z" level=info msg="Daemon has completed initialization" Oct 27 08:28:06.475505 dockerd[1862]: time="2025-10-27T08:28:06.475433804Z" level=info msg="API listen on /run/docker.sock" Oct 27 08:28:06.475646 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 27 08:28:07.377961 containerd[1635]: time="2025-10-27T08:28:07.377885117Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Oct 27 08:28:08.080037 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2954490931.mount: Deactivated successfully. 
Oct 27 08:28:09.499009 containerd[1635]: time="2025-10-27T08:28:09.498911351Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:28:09.499780 containerd[1635]: time="2025-10-27T08:28:09.499733107Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114893" Oct 27 08:28:09.500824 containerd[1635]: time="2025-10-27T08:28:09.500784541Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:28:09.503576 containerd[1635]: time="2025-10-27T08:28:09.503516711Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:28:09.504727 containerd[1635]: time="2025-10-27T08:28:09.504684378Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 2.126750526s" Oct 27 08:28:09.504761 containerd[1635]: time="2025-10-27T08:28:09.504743807Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\"" Oct 27 08:28:09.505840 containerd[1635]: time="2025-10-27T08:28:09.505804929Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Oct 27 08:28:11.053504 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Oct 27 08:28:11.056010 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 27 08:28:11.280099 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 27 08:28:11.298253 (kubelet)[2151]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 27 08:28:11.798972 containerd[1635]: time="2025-10-27T08:28:11.798890969Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 27 08:28:11.801033 containerd[1635]: time="2025-10-27T08:28:11.800686969Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020844"
Oct 27 08:28:11.802373 containerd[1635]: time="2025-10-27T08:28:11.802336196Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 27 08:28:11.805891 containerd[1635]: time="2025-10-27T08:28:11.805849772Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 27 08:28:11.807591 containerd[1635]: time="2025-10-27T08:28:11.807507697Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 2.301665939s"
Oct 27 08:28:11.807591 containerd[1635]: time="2025-10-27T08:28:11.807561053Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\""
Oct 27 08:28:11.811518 containerd[1635]: time="2025-10-27T08:28:11.811478606Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\""
Oct 27 08:28:11.913821 kubelet[2151]: E1027 08:28:11.913735 2151 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 27 08:28:11.921560 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 27 08:28:11.921768 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 27 08:28:11.922232 systemd[1]: kubelet.service: Consumed 435ms CPU time, 111.3M memory peak.
Oct 27 08:28:13.607824 containerd[1635]: time="2025-10-27T08:28:13.607761616Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 27 08:28:13.608640 containerd[1635]: time="2025-10-27T08:28:13.608621178Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155568"
Oct 27 08:28:13.609734 containerd[1635]: time="2025-10-27T08:28:13.609702500Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 27 08:28:13.612210 containerd[1635]: time="2025-10-27T08:28:13.612169818Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 27 08:28:13.613158 containerd[1635]: time="2025-10-27T08:28:13.613123399Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 1.801608344s"
Oct 27 08:28:13.613205 containerd[1635]: time="2025-10-27T08:28:13.613156468Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\""
Oct 27 08:28:13.613812 containerd[1635]: time="2025-10-27T08:28:13.613645571Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\""
Oct 27 08:28:15.387305 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount595798177.mount: Deactivated successfully.
Oct 27 08:28:16.467623 containerd[1635]: time="2025-10-27T08:28:16.467539391Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 27 08:28:16.468895 containerd[1635]: time="2025-10-27T08:28:16.468861583Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929469"
Oct 27 08:28:16.470225 containerd[1635]: time="2025-10-27T08:28:16.470191420Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 27 08:28:16.472697 containerd[1635]: time="2025-10-27T08:28:16.472638802Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 27 08:28:16.473677 containerd[1635]: time="2025-10-27T08:28:16.473634675Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 2.859959237s"
Oct 27 08:28:16.473738 containerd[1635]: time="2025-10-27T08:28:16.473685575Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\""
Oct 27 08:28:16.474297 containerd[1635]: time="2025-10-27T08:28:16.474167559Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Oct 27 08:28:17.018781 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount554648687.mount: Deactivated successfully.
Oct 27 08:28:19.391152 containerd[1635]: time="2025-10-27T08:28:19.391084271Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 27 08:28:19.392068 containerd[1635]: time="2025-10-27T08:28:19.391989531Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238"
Oct 27 08:28:19.393265 containerd[1635]: time="2025-10-27T08:28:19.393212885Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 27 08:28:19.396307 containerd[1635]: time="2025-10-27T08:28:19.396266971Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 27 08:28:19.397272 containerd[1635]: time="2025-10-27T08:28:19.397212465Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.923011136s"
Oct 27 08:28:19.397272 containerd[1635]: time="2025-10-27T08:28:19.397243922Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Oct 27 08:28:19.397814 containerd[1635]: time="2025-10-27T08:28:19.397775814Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Oct 27 08:28:19.921239 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount32534099.mount: Deactivated successfully.
Oct 27 08:28:19.928014 containerd[1635]: time="2025-10-27T08:28:19.927972525Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 27 08:28:19.928754 containerd[1635]: time="2025-10-27T08:28:19.928729723Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Oct 27 08:28:19.929853 containerd[1635]: time="2025-10-27T08:28:19.929815568Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 27 08:28:19.931763 containerd[1635]: time="2025-10-27T08:28:19.931703773Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 27 08:28:19.932411 containerd[1635]: time="2025-10-27T08:28:19.932363526Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 534.552569ms"
Oct 27 08:28:19.932411 containerd[1635]: time="2025-10-27T08:28:19.932391771Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Oct 27 08:28:19.932844 containerd[1635]: time="2025-10-27T08:28:19.932817823Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Oct 27 08:28:20.492817 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount714013316.mount: Deactivated successfully.
Oct 27 08:28:22.172270 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Oct 27 08:28:22.174030 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 27 08:28:22.830187 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 27 08:28:22.834753 (kubelet)[2290]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 27 08:28:22.984050 kubelet[2290]: E1027 08:28:22.983914 2290 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 27 08:28:22.990625 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 27 08:28:22.990853 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 27 08:28:22.991312 systemd[1]: kubelet.service: Consumed 321ms CPU time, 112.7M memory peak.
Oct 27 08:28:23.111679 containerd[1635]: time="2025-10-27T08:28:23.111518478Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 27 08:28:23.112585 containerd[1635]: time="2025-10-27T08:28:23.112551342Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378433"
Oct 27 08:28:23.113784 containerd[1635]: time="2025-10-27T08:28:23.113729487Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 27 08:28:23.116288 containerd[1635]: time="2025-10-27T08:28:23.116249738Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 27 08:28:23.117256 containerd[1635]: time="2025-10-27T08:28:23.117220571Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 3.184374227s"
Oct 27 08:28:23.117256 containerd[1635]: time="2025-10-27T08:28:23.117253519Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
Oct 27 08:28:27.119898 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 27 08:28:27.120148 systemd[1]: kubelet.service: Consumed 321ms CPU time, 112.7M memory peak.
Oct 27 08:28:27.122413 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 27 08:28:27.204777 systemd[1]: Reload requested from client PID 2333 ('systemctl') (unit session-7.scope)...
Oct 27 08:28:27.204802 systemd[1]: Reloading...
Oct 27 08:28:27.357080 zram_generator::config[2405]: No configuration found.
Oct 27 08:28:27.952798 systemd[1]: Reloading finished in 747 ms.
Oct 27 08:28:28.023826 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Oct 27 08:28:28.023957 systemd[1]: kubelet.service: Failed with result 'signal'.
Oct 27 08:28:28.024282 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 27 08:28:28.024330 systemd[1]: kubelet.service: Consumed 223ms CPU time, 98.2M memory peak.
Oct 27 08:28:28.026107 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 27 08:28:28.229355 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 27 08:28:28.249195 (kubelet)[2425]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Oct 27 08:28:28.375295 kubelet[2425]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 27 08:28:28.375295 kubelet[2425]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Oct 27 08:28:28.375295 kubelet[2425]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 27 08:28:28.536448 kubelet[2425]: I1027 08:28:28.375309 2425 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 27 08:28:28.672480 kubelet[2425]: I1027 08:28:28.672421 2425 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Oct 27 08:28:28.672480 kubelet[2425]: I1027 08:28:28.672453 2425 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 27 08:28:28.672714 kubelet[2425]: I1027 08:28:28.672690 2425 server.go:956] "Client rotation is on, will bootstrap in background"
Oct 27 08:28:28.702977 kubelet[2425]: E1027 08:28:28.702905 2425 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.103:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.103:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Oct 27 08:28:28.703246 kubelet[2425]: I1027 08:28:28.703222 2425 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 27 08:28:28.817367 kubelet[2425]: I1027 08:28:28.817229 2425 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Oct 27 08:28:28.823369 kubelet[2425]: I1027 08:28:28.823334 2425 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Oct 27 08:28:28.823624 kubelet[2425]: I1027 08:28:28.823583 2425 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 27 08:28:28.823807 kubelet[2425]: I1027 08:28:28.823612 2425 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Oct 27 08:28:28.823807 kubelet[2425]: I1027 08:28:28.823804 2425 topology_manager.go:138] "Creating topology manager with none policy"
Oct 27 08:28:28.823994 kubelet[2425]: I1027 08:28:28.823815 2425 container_manager_linux.go:303] "Creating device plugin manager"
Oct 27 08:28:28.824656 kubelet[2425]: I1027 08:28:28.824622 2425 state_mem.go:36] "Initialized new in-memory state store"
Oct 27 08:28:28.826348 kubelet[2425]: I1027 08:28:28.826301 2425 kubelet.go:480] "Attempting to sync node with API server"
Oct 27 08:28:28.826348 kubelet[2425]: I1027 08:28:28.826329 2425 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 27 08:28:28.826423 kubelet[2425]: I1027 08:28:28.826355 2425 kubelet.go:386] "Adding apiserver pod source"
Oct 27 08:28:28.826423 kubelet[2425]: I1027 08:28:28.826367 2425 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 27 08:28:28.831165 kubelet[2425]: I1027 08:28:28.831124 2425 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Oct 27 08:28:28.831684 kubelet[2425]: I1027 08:28:28.831584 2425 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Oct 27 08:28:28.832823 kubelet[2425]: W1027 08:28:28.832601 2425 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Oct 27 08:28:28.835622 kubelet[2425]: E1027 08:28:28.834755 2425 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.103:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.103:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Oct 27 08:28:28.835622 kubelet[2425]: E1027 08:28:28.835367 2425 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.103:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.103:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Oct 27 08:28:28.836521 kubelet[2425]: I1027 08:28:28.836495 2425 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Oct 27 08:28:28.836837 kubelet[2425]: I1027 08:28:28.836562 2425 server.go:1289] "Started kubelet"
Oct 27 08:28:28.837620 kubelet[2425]: I1027 08:28:28.837567 2425 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 27 08:28:28.839283 kubelet[2425]: I1027 08:28:28.839263 2425 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 27 08:28:28.839426 kubelet[2425]: I1027 08:28:28.839269 2425 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 27 08:28:28.839570 kubelet[2425]: I1027 08:28:28.839264 2425 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Oct 27 08:28:28.840381 kubelet[2425]: I1027 08:28:28.840352 2425 server.go:317] "Adding debug handlers to kubelet server"
Oct 27 08:28:28.840806 kubelet[2425]: I1027 08:28:28.840780 2425 volume_manager.go:297] "Starting Kubelet Volume Manager"
Oct 27 08:28:28.840884 kubelet[2425]: I1027 08:28:28.840861 2425 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Oct 27 08:28:28.840958 kubelet[2425]: I1027 08:28:28.840917 2425 reconciler.go:26] "Reconciler: start to sync state"
Oct 27 08:28:28.841173 kubelet[2425]: E1027 08:28:28.840083 2425 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.103:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.103:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18724bc3dbf903ac default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-27 08:28:28.836520876 +0000 UTC m=+0.580871181,LastTimestamp:2025-10-27 08:28:28.836520876 +0000 UTC m=+0.580871181,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Oct 27 08:28:28.841459 kubelet[2425]: E1027 08:28:28.841405 2425 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 27 08:28:28.841517 kubelet[2425]: E1027 08:28:28.841497 2425 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.103:6443: connect: connection refused" interval="200ms"
Oct 27 08:28:28.843461 kubelet[2425]: I1027 08:28:28.843425 2425 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Oct 27 08:28:28.843682 kubelet[2425]: I1027 08:28:28.843648 2425 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Oct 27 08:28:28.845970 kubelet[2425]: E1027 08:28:28.844928 2425 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.103:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.103:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Oct 27 08:28:28.846880 kubelet[2425]: E1027 08:28:28.846860 2425 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Oct 27 08:28:28.849621 kubelet[2425]: I1027 08:28:28.849589 2425 factory.go:223] Registration of the containerd container factory successfully
Oct 27 08:28:28.849621 kubelet[2425]: I1027 08:28:28.849609 2425 factory.go:223] Registration of the systemd container factory successfully
Oct 27 08:28:28.862401 kubelet[2425]: I1027 08:28:28.862338 2425 cpu_manager.go:221] "Starting CPU manager" policy="none"
Oct 27 08:28:28.862401 kubelet[2425]: I1027 08:28:28.862373 2425 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Oct 27 08:28:28.862401 kubelet[2425]: I1027 08:28:28.862394 2425 state_mem.go:36] "Initialized new in-memory state store"
Oct 27 08:28:28.864156 kubelet[2425]: I1027 08:28:28.864125 2425 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Oct 27 08:28:28.866023 kubelet[2425]: I1027 08:28:28.865910 2425 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Oct 27 08:28:28.866023 kubelet[2425]: I1027 08:28:28.865933 2425 status_manager.go:230] "Starting to sync pod status with apiserver"
Oct 27 08:28:28.866023 kubelet[2425]: I1027 08:28:28.865973 2425 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Oct 27 08:28:28.866023 kubelet[2425]: I1027 08:28:28.865981 2425 kubelet.go:2436] "Starting kubelet main sync loop"
Oct 27 08:28:28.866023 kubelet[2425]: E1027 08:28:28.866012 2425 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Oct 27 08:28:28.866876 kubelet[2425]: E1027 08:28:28.866801 2425 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.103:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.103:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Oct 27 08:28:28.867881 kubelet[2425]: I1027 08:28:28.867853 2425 policy_none.go:49] "None policy: Start"
Oct 27 08:28:28.867881 kubelet[2425]: I1027 08:28:28.867874 2425 memory_manager.go:186] "Starting memorymanager" policy="None"
Oct 27 08:28:28.867881 kubelet[2425]: I1027 08:28:28.867887 2425 state_mem.go:35] "Initializing new in-memory state store"
Oct 27 08:28:28.874493 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Oct 27 08:28:28.889179 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Oct 27 08:28:28.892763 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Oct 27 08:28:28.907795 kubelet[2425]: E1027 08:28:28.907771 2425 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Oct 27 08:28:28.908113 kubelet[2425]: I1027 08:28:28.908001 2425 eviction_manager.go:189] "Eviction manager: starting control loop"
Oct 27 08:28:28.908113 kubelet[2425]: I1027 08:28:28.908014 2425 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Oct 27 08:28:28.908242 kubelet[2425]: I1027 08:28:28.908186 2425 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Oct 27 08:28:28.909488 kubelet[2425]: E1027 08:28:28.909463 2425 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Oct 27 08:28:28.909567 kubelet[2425]: E1027 08:28:28.909503 2425 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Oct 27 08:28:28.977825 systemd[1]: Created slice kubepods-burstable-pod79fa92ed407b5e2d436f1a55b934b82b.slice - libcontainer container kubepods-burstable-pod79fa92ed407b5e2d436f1a55b934b82b.slice.
Oct 27 08:28:28.998185 kubelet[2425]: E1027 08:28:28.998132 2425 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Oct 27 08:28:29.001992 systemd[1]: Created slice kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice - libcontainer container kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice.
Oct 27 08:28:29.009543 kubelet[2425]: I1027 08:28:29.009502 2425 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Oct 27 08:28:29.010096 kubelet[2425]: E1027 08:28:29.010040 2425 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.103:6443/api/v1/nodes\": dial tcp 10.0.0.103:6443: connect: connection refused" node="localhost"
Oct 27 08:28:29.017479 kubelet[2425]: E1027 08:28:29.017429 2425 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Oct 27 08:28:29.021088 systemd[1]: Created slice kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice - libcontainer container kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice.
Oct 27 08:28:29.023682 kubelet[2425]: E1027 08:28:29.023641 2425 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Oct 27 08:28:29.042264 kubelet[2425]: E1027 08:28:29.042221 2425 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.103:6443: connect: connection refused" interval="400ms"
Oct 27 08:28:29.143033 kubelet[2425]: I1027 08:28:29.142862 2425 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/79fa92ed407b5e2d436f1a55b934b82b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"79fa92ed407b5e2d436f1a55b934b82b\") " pod="kube-system/kube-apiserver-localhost"
Oct 27 08:28:29.143033 kubelet[2425]: I1027 08:28:29.142900 2425 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/79fa92ed407b5e2d436f1a55b934b82b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"79fa92ed407b5e2d436f1a55b934b82b\") " pod="kube-system/kube-apiserver-localhost"
Oct 27 08:28:29.143033 kubelet[2425]: I1027 08:28:29.142918 2425 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Oct 27 08:28:29.143033 kubelet[2425]: I1027 08:28:29.142987 2425 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Oct 27 08:28:29.143246 kubelet[2425]: I1027 08:28:29.143042 2425 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Oct 27 08:28:29.143246 kubelet[2425]: I1027 08:28:29.143074 2425 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost"
Oct 27 08:28:29.143246 kubelet[2425]: I1027 08:28:29.143094 2425 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/79fa92ed407b5e2d436f1a55b934b82b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"79fa92ed407b5e2d436f1a55b934b82b\") " pod="kube-system/kube-apiserver-localhost"
Oct 27 08:28:29.143246 kubelet[2425]: I1027 08:28:29.143110 2425 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Oct 27 08:28:29.143246 kubelet[2425]: I1027 08:28:29.143131 2425 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Oct 27 08:28:29.211510 kubelet[2425]: I1027 08:28:29.211474 2425 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Oct 27 08:28:29.211972 kubelet[2425]: E1027 08:28:29.211911 2425 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.103:6443/api/v1/nodes\": dial tcp 10.0.0.103:6443: connect: connection refused" node="localhost"
Oct 27 08:28:29.299616 kubelet[2425]: E1027 08:28:29.299578 2425 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 27 08:28:29.300535 containerd[1635]: time="2025-10-27T08:28:29.300492766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:79fa92ed407b5e2d436f1a55b934b82b,Namespace:kube-system,Attempt:0,}"
Oct 27 08:28:29.318738 kubelet[2425]: E1027 08:28:29.318707 2425 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 27 08:28:29.319185 containerd[1635]: time="2025-10-27T08:28:29.319157617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,}"
Oct 27 08:28:29.324774 kubelet[2425]: E1027 08:28:29.324540 2425 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 27 08:28:29.325151 containerd[1635]: time="2025-10-27T08:28:29.325128418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,}"
Oct 27 08:28:29.328296 containerd[1635]: time="2025-10-27T08:28:29.328249449Z" level=info msg="connecting to shim a1b4ce37d7fb7e8a415bef8197e22048ed5c2572272425cf26b7cb9d8f329649" address="unix:///run/containerd/s/08178359a70539ea43b7a5c75b1020bb57b203eeadb32ebd49d61a152022b252" namespace=k8s.io protocol=ttrpc version=3
Oct 27 08:28:29.352112 containerd[1635]: time="2025-10-27T08:28:29.352060967Z" level=info msg="connecting to shim 13abbb74149062b50f2230eb1f477211d377600de4f7f030c8311549a104179f" address="unix:///run/containerd/s/b0d401d68eabd6973996e9dea4906cba6374fa976622309560c6bcc5258061df" namespace=k8s.io protocol=ttrpc version=3
Oct 27 08:28:29.367129 systemd[1]: Started cri-containerd-a1b4ce37d7fb7e8a415bef8197e22048ed5c2572272425cf26b7cb9d8f329649.scope - libcontainer container a1b4ce37d7fb7e8a415bef8197e22048ed5c2572272425cf26b7cb9d8f329649.
Oct 27 08:28:29.372150 containerd[1635]: time="2025-10-27T08:28:29.372110334Z" level=info msg="connecting to shim 13bee5461a722b7a29381fe597f3b76c3e677062b2b67f3ebb8191b8ded95cb0" address="unix:///run/containerd/s/fd3d6e1b8bccaec3f80636faab346bed453970056a4e69f62928861aa2493a6f" namespace=k8s.io protocol=ttrpc version=3
Oct 27 08:28:29.414281 systemd[1]: Started cri-containerd-13abbb74149062b50f2230eb1f477211d377600de4f7f030c8311549a104179f.scope - libcontainer container 13abbb74149062b50f2230eb1f477211d377600de4f7f030c8311549a104179f.
Oct 27 08:28:29.420424 systemd[1]: Started cri-containerd-13bee5461a722b7a29381fe597f3b76c3e677062b2b67f3ebb8191b8ded95cb0.scope - libcontainer container 13bee5461a722b7a29381fe597f3b76c3e677062b2b67f3ebb8191b8ded95cb0.
Oct 27 08:28:29.442744 kubelet[2425]: E1027 08:28:29.442695 2425 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.103:6443: connect: connection refused" interval="800ms"
Oct 27 08:28:29.460133 containerd[1635]: time="2025-10-27T08:28:29.459454183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:79fa92ed407b5e2d436f1a55b934b82b,Namespace:kube-system,Attempt:0,} returns sandbox id \"a1b4ce37d7fb7e8a415bef8197e22048ed5c2572272425cf26b7cb9d8f329649\""
Oct 27 08:28:29.460615 kubelet[2425]: E1027 08:28:29.460580 2425 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 27 08:28:29.466456 containerd[1635]: time="2025-10-27T08:28:29.466425892Z" level=info msg="CreateContainer within sandbox \"a1b4ce37d7fb7e8a415bef8197e22048ed5c2572272425cf26b7cb9d8f329649\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Oct 27 08:28:29.474038 containerd[1635]: time="2025-10-27T08:28:29.473998750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,} returns sandbox id \"13bee5461a722b7a29381fe597f3b76c3e677062b2b67f3ebb8191b8ded95cb0\""
Oct 27 08:28:29.474651 kubelet[2425]: E1027 08:28:29.474626 2425 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 27 08:28:29.476521 containerd[1635]: time="2025-10-27T08:28:29.476491581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"13abbb74149062b50f2230eb1f477211d377600de4f7f030c8311549a104179f\""
Oct 27 08:28:29.477062 kubelet[2425]: E1027 08:28:29.477018 2425 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 27 08:28:29.478700 containerd[1635]: time="2025-10-27T08:28:29.478673752Z" level=info msg="CreateContainer within sandbox \"13bee5461a722b7a29381fe597f3b76c3e677062b2b67f3ebb8191b8ded95cb0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Oct 27 08:28:29.481951 containerd[1635]: time="2025-10-27T08:28:29.481883490Z" level=info msg="Container 3e3208c71c4a3a0f36a521f3166c930684335c11e1419721c0885c91a3851243: CDI devices from CRI Config.CDIDevices: []"
Oct 27 08:28:29.483026 containerd[1635]: time="2025-10-27T08:28:29.482995605Z" level=info msg="CreateContainer within sandbox \"13abbb74149062b50f2230eb1f477211d377600de4f7f030c8311549a104179f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Oct 27 08:28:29.490033 containerd[1635]: time="2025-10-27T08:28:29.489983452Z" level=info msg="CreateContainer within sandbox \"a1b4ce37d7fb7e8a415bef8197e22048ed5c2572272425cf26b7cb9d8f329649\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3e3208c71c4a3a0f36a521f3166c930684335c11e1419721c0885c91a3851243\""
Oct 27 08:28:29.490699 containerd[1635]: time="2025-10-27T08:28:29.490664282Z" level=info msg="StartContainer for \"3e3208c71c4a3a0f36a521f3166c930684335c11e1419721c0885c91a3851243\""
Oct 27 08:28:29.491157 containerd[1635]: time="2025-10-27T08:28:29.491090401Z" level=info msg="Container e6564088598b721032eca29e7b0af4ed39c5fbdff5b6aae700c35bf077465611: CDI devices from CRI Config.CDIDevices: []"
Oct 27 08:28:29.491867 containerd[1635]: time="2025-10-27T08:28:29.491823338Z" level=info msg="connecting to shim 3e3208c71c4a3a0f36a521f3166c930684335c11e1419721c0885c91a3851243" address="unix:///run/containerd/s/08178359a70539ea43b7a5c75b1020bb57b203eeadb32ebd49d61a152022b252" protocol=ttrpc version=3
Oct 27 08:28:29.501252 containerd[1635]: time="2025-10-27T08:28:29.501204024Z" level=info msg="Container 7b95ff030d668ec9ea6e5801f1c404d0b8fed649069490979b8a4979e585e654: CDI devices from CRI Config.CDIDevices: []"
Oct 27 08:28:29.503701 containerd[1635]: time="2025-10-27T08:28:29.503677095Z" level=info msg="CreateContainer within sandbox \"13bee5461a722b7a29381fe597f3b76c3e677062b2b67f3ebb8191b8ded95cb0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e6564088598b721032eca29e7b0af4ed39c5fbdff5b6aae700c35bf077465611\""
Oct 27 08:28:29.504101 containerd[1635]: time="2025-10-27T08:28:29.504069492Z" level=info msg="StartContainer for \"e6564088598b721032eca29e7b0af4ed39c5fbdff5b6aae700c35bf077465611\""
Oct 27 08:28:29.505230 containerd[1635]: time="2025-10-27T08:28:29.505181796Z" level=info msg="connecting to shim e6564088598b721032eca29e7b0af4ed39c5fbdff5b6aae700c35bf077465611" address="unix:///run/containerd/s/fd3d6e1b8bccaec3f80636faab346bed453970056a4e69f62928861aa2493a6f" protocol=ttrpc version=3
Oct 27 08:28:29.511668 containerd[1635]: time="2025-10-27T08:28:29.511632209Z" level=info msg="CreateContainer within sandbox \"13abbb74149062b50f2230eb1f477211d377600de4f7f030c8311549a104179f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7b95ff030d668ec9ea6e5801f1c404d0b8fed649069490979b8a4979e585e654\""
Oct 27 08:28:29.518161 containerd[1635]: time="2025-10-27T08:28:29.518116262Z" level=info msg="StartContainer for \"7b95ff030d668ec9ea6e5801f1c404d0b8fed649069490979b8a4979e585e654\""
Oct 27 08:28:29.521538 containerd[1635]: time="2025-10-27T08:28:29.521217172Z" level=info msg="connecting to shim 7b95ff030d668ec9ea6e5801f1c404d0b8fed649069490979b8a4979e585e654" address="unix:///run/containerd/s/b0d401d68eabd6973996e9dea4906cba6374fa976622309560c6bcc5258061df" protocol=ttrpc version=3
Oct 27 08:28:29.614660 kubelet[2425]: I1027 08:28:29.614593 2425 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Oct 27 08:28:29.615056 kubelet[2425]: E1027 08:28:29.615030 2425 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.103:6443/api/v1/nodes\": dial tcp 10.0.0.103:6443: connect: connection refused" node="localhost"
Oct 27 08:28:29.661247 systemd[1]: Started cri-containerd-3e3208c71c4a3a0f36a521f3166c930684335c11e1419721c0885c91a3851243.scope - libcontainer container 3e3208c71c4a3a0f36a521f3166c930684335c11e1419721c0885c91a3851243.
Oct 27 08:28:29.678151 systemd[1]: Started cri-containerd-7b95ff030d668ec9ea6e5801f1c404d0b8fed649069490979b8a4979e585e654.scope - libcontainer container 7b95ff030d668ec9ea6e5801f1c404d0b8fed649069490979b8a4979e585e654.
Oct 27 08:28:29.688315 systemd[1]: Started cri-containerd-e6564088598b721032eca29e7b0af4ed39c5fbdff5b6aae700c35bf077465611.scope - libcontainer container e6564088598b721032eca29e7b0af4ed39c5fbdff5b6aae700c35bf077465611.
Oct 27 08:28:29.751641 containerd[1635]: time="2025-10-27T08:28:29.751573754Z" level=info msg="StartContainer for \"e6564088598b721032eca29e7b0af4ed39c5fbdff5b6aae700c35bf077465611\" returns successfully"
Oct 27 08:28:29.754867 containerd[1635]: time="2025-10-27T08:28:29.754829612Z" level=info msg="StartContainer for \"3e3208c71c4a3a0f36a521f3166c930684335c11e1419721c0885c91a3851243\" returns successfully"
Oct 27 08:28:29.763363 containerd[1635]: time="2025-10-27T08:28:29.763323225Z" level=info msg="StartContainer for \"7b95ff030d668ec9ea6e5801f1c404d0b8fed649069490979b8a4979e585e654\" returns successfully"
Oct 27 08:28:29.764067 kubelet[2425]: E1027 08:28:29.764012 2425 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.103:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.103:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Oct 27 08:28:29.879678 kubelet[2425]: E1027 08:28:29.879140 2425 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Oct 27 08:28:29.879678 kubelet[2425]: E1027 08:28:29.879312 2425 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 27 08:28:29.888097 kubelet[2425]: E1027 08:28:29.887806 2425 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Oct 27 08:28:29.888097 kubelet[2425]: E1027 08:28:29.888012 2425 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 27 08:28:29.890336 kubelet[2425]: E1027 08:28:29.890123 2425 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Oct 27 08:28:29.890336 kubelet[2425]: E1027 08:28:29.890229 2425 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 27 08:28:30.418757 kubelet[2425]: I1027 08:28:30.418622 2425 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Oct 27 08:28:30.892883 kubelet[2425]: E1027 08:28:30.892831 2425 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Oct 27 08:28:30.893320 kubelet[2425]: E1027 08:28:30.893257 2425 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Oct 27 08:28:30.893500 kubelet[2425]: E1027 08:28:30.893361 2425 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 27 08:28:30.893918 kubelet[2425]: E1027 08:28:30.893890 2425 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 27 08:28:31.412331 kubelet[2425]: E1027 08:28:31.412220 2425 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Oct 27 08:28:31.512244 kubelet[2425]: I1027 08:28:31.512181 2425 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Oct 27 08:28:31.512244 kubelet[2425]: E1027 08:28:31.512232 2425 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Oct 27 08:28:31.543831 kubelet[2425]: I1027 08:28:31.543772 2425 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Oct 27 08:28:31.548918 kubelet[2425]: E1027 08:28:31.548712 2425 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Oct 27 08:28:31.548918 kubelet[2425]: I1027 08:28:31.548746 2425 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Oct 27 08:28:31.550219 kubelet[2425]: E1027 08:28:31.550190 2425 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Oct 27 08:28:31.550219 kubelet[2425]: I1027 08:28:31.550217 2425 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Oct 27 08:28:31.551615 kubelet[2425]: E1027 08:28:31.551583 2425 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Oct 27 08:28:31.832383 kubelet[2425]: I1027 08:28:31.832347 2425 apiserver.go:52] "Watching apiserver"
Oct 27 08:28:31.841830 kubelet[2425]: I1027 08:28:31.841800 2425 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Oct 27 08:28:31.893783 kubelet[2425]: I1027 08:28:31.892988 2425 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Oct 27 08:28:31.895960 kubelet[2425]: E1027 08:28:31.895915 2425 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Oct 27 08:28:31.896123 kubelet[2425]: E1027 08:28:31.896103 2425 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 27 08:28:32.531846 kubelet[2425]: I1027 08:28:32.531804 2425 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Oct 27 08:28:32.923508 kubelet[2425]: E1027 08:28:32.923395 2425 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 27 08:28:32.996393 kubelet[2425]: I1027 08:28:32.996363 2425 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Oct 27 08:28:33.619485 kubelet[2425]: E1027 08:28:33.619438 2425 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 27 08:28:33.896040 kubelet[2425]: E1027 08:28:33.895896 2425 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 27 08:28:33.896164 kubelet[2425]: E1027 08:28:33.896136 2425 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 27 08:28:35.147022 kubelet[2425]: I1027 08:28:35.146969 2425 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Oct 27 08:28:35.218313 kubelet[2425]: E1027 08:28:35.218238 2425 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 27 08:28:35.240266 systemd[1]: Reload requested from client PID 2712 ('systemctl') (unit session-7.scope)...
Oct 27 08:28:35.240286 systemd[1]: Reloading...
Oct 27 08:28:35.336976 zram_generator::config[2756]: No configuration found.
Oct 27 08:28:35.578066 systemd[1]: Reloading finished in 337 ms.
Oct 27 08:28:35.616599 kubelet[2425]: I1027 08:28:35.616518 2425 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 27 08:28:35.616838 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 27 08:28:35.634720 systemd[1]: kubelet.service: Deactivated successfully.
Oct 27 08:28:35.635148 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 27 08:28:35.635206 systemd[1]: kubelet.service: Consumed 1.163s CPU time, 129.4M memory peak.
Oct 27 08:28:35.637297 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 27 08:28:35.892677 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 27 08:28:35.897710 (kubelet)[2801]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Oct 27 08:28:35.942983 kubelet[2801]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 27 08:28:35.942983 kubelet[2801]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Oct 27 08:28:35.942983 kubelet[2801]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 27 08:28:35.942983 kubelet[2801]: I1027 08:28:35.941916 2801 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 27 08:28:35.948225 kubelet[2801]: I1027 08:28:35.948194 2801 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Oct 27 08:28:35.948225 kubelet[2801]: I1027 08:28:35.948217 2801 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 27 08:28:35.948421 kubelet[2801]: I1027 08:28:35.948401 2801 server.go:956] "Client rotation is on, will bootstrap in background"
Oct 27 08:28:35.949765 kubelet[2801]: I1027 08:28:35.949741 2801 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Oct 27 08:28:35.954565 kubelet[2801]: I1027 08:28:35.954249 2801 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 27 08:28:35.960992 kubelet[2801]: I1027 08:28:35.960141 2801 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Oct 27 08:28:35.964924 kubelet[2801]: I1027 08:28:35.964886 2801 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Oct 27 08:28:35.965176 kubelet[2801]: I1027 08:28:35.965142 2801 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 27 08:28:35.965342 kubelet[2801]: I1027 08:28:35.965167 2801 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Oct 27 08:28:35.965432 kubelet[2801]: I1027 08:28:35.965346 2801 topology_manager.go:138] "Creating topology manager with none policy"
Oct 27 08:28:35.965432 kubelet[2801]: I1027 08:28:35.965356 2801 container_manager_linux.go:303] "Creating device plugin manager"
Oct 27 08:28:35.965432 kubelet[2801]: I1027 08:28:35.965397 2801 state_mem.go:36] "Initialized new in-memory state store"
Oct 27 08:28:35.965558 kubelet[2801]: I1027 08:28:35.965543 2801 kubelet.go:480] "Attempting to sync node with API server"
Oct 27 08:28:35.965582 kubelet[2801]: I1027 08:28:35.965570 2801 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 27 08:28:35.965634 kubelet[2801]: I1027 08:28:35.965621 2801 kubelet.go:386] "Adding apiserver pod source"
Oct 27 08:28:35.965666 kubelet[2801]: I1027 08:28:35.965642 2801 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 27 08:28:35.969703 kubelet[2801]: I1027 08:28:35.969670 2801 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Oct 27 08:28:35.970998 kubelet[2801]: I1027 08:28:35.970407 2801 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Oct 27 08:28:35.976387 kubelet[2801]: I1027 08:28:35.976299 2801 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Oct 27 08:28:35.976601 kubelet[2801]: I1027 08:28:35.976356 2801 server.go:1289] "Started kubelet"
Oct 27 08:28:35.978142 kubelet[2801]: I1027 08:28:35.978099 2801 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Oct 27 08:28:35.979245 kubelet[2801]: I1027 08:28:35.979200 2801 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 27 08:28:35.980052 kubelet[2801]: I1027 08:28:35.979974 2801 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 27 08:28:35.983616 kubelet[2801]: I1027 08:28:35.983586 2801 server.go:317] "Adding debug handlers to kubelet server"
Oct 27 08:28:35.986561 kubelet[2801]: I1027 08:28:35.986386 2801 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 27 08:28:35.986561 kubelet[2801]: I1027 08:28:35.986483 2801 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Oct 27 08:28:35.988272 kubelet[2801]: I1027 08:28:35.988252 2801 volume_manager.go:297] "Starting Kubelet Volume Manager"
Oct 27 08:28:35.988595 kubelet[2801]: I1027 08:28:35.988329 2801 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Oct 27 08:28:35.988819 kubelet[2801]: I1027 08:28:35.988780 2801 reconciler.go:26] "Reconciler: start to sync state"
Oct 27 08:28:35.988819 kubelet[2801]: E1027 08:28:35.988778 2801 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Oct 27 08:28:35.989723 kubelet[2801]: I1027 08:28:35.989654 2801 factory.go:223] Registration of the systemd container factory successfully
Oct 27 08:28:35.989848 kubelet[2801]: I1027 08:28:35.989830 2801 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Oct 27 08:28:35.992348 kubelet[2801]: I1027 08:28:35.992309 2801 factory.go:223] Registration of the containerd container factory successfully
Oct 27 08:28:36.006093 kubelet[2801]: I1027 08:28:36.005881 2801 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Oct 27 08:28:36.007881 kubelet[2801]: I1027 08:28:36.007764 2801 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Oct 27 08:28:36.007881 kubelet[2801]: I1027 08:28:36.007782 2801 status_manager.go:230] "Starting to sync pod status with apiserver"
Oct 27 08:28:36.007881 kubelet[2801]: I1027 08:28:36.007803 2801 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Oct 27 08:28:36.007881 kubelet[2801]: I1027 08:28:36.007809 2801 kubelet.go:2436] "Starting kubelet main sync loop"
Oct 27 08:28:36.007881 kubelet[2801]: E1027 08:28:36.007876 2801 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Oct 27 08:28:36.037741 kubelet[2801]: I1027 08:28:36.037367 2801 cpu_manager.go:221] "Starting CPU manager" policy="none"
Oct 27 08:28:36.037741 kubelet[2801]: I1027 08:28:36.037389 2801 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Oct 27 08:28:36.037741 kubelet[2801]: I1027 08:28:36.037407 2801 state_mem.go:36] "Initialized new in-memory state store"
Oct 27 08:28:36.037741 kubelet[2801]: I1027 08:28:36.037526 2801 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Oct 27 08:28:36.037741 kubelet[2801]: I1027 08:28:36.037539 2801 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Oct 27 08:28:36.037741 kubelet[2801]: I1027 08:28:36.037554 2801 policy_none.go:49] "None policy: Start"
Oct 27 08:28:36.037741 kubelet[2801]: I1027 08:28:36.037564 2801 memory_manager.go:186] "Starting memorymanager" policy="None"
Oct 27 08:28:36.037741 kubelet[2801]: I1027 08:28:36.037574 2801 state_mem.go:35] "Initializing new in-memory state store"
Oct 27 08:28:36.037741 kubelet[2801]: I1027 08:28:36.037656 2801 state_mem.go:75] "Updated machine memory state"
Oct 27 08:28:36.041859 kubelet[2801]: E1027 08:28:36.041827 2801 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Oct 27 08:28:36.042221 kubelet[2801]: I1027 08:28:36.042065 2801 eviction_manager.go:189] "Eviction manager: starting control loop"
Oct 27 08:28:36.042221 kubelet[2801]: I1027 08:28:36.042084 2801 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Oct 27 08:28:36.042515 kubelet[2801]: I1027 08:28:36.042492 2801 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Oct 27 08:28:36.046103 kubelet[2801]: E1027 08:28:36.043215 2801 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Oct 27 08:28:36.108848 kubelet[2801]: I1027 08:28:36.108774 2801 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Oct 27 08:28:36.109093 kubelet[2801]: I1027 08:28:36.109062 2801 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Oct 27 08:28:36.109237 kubelet[2801]: I1027 08:28:36.109099 2801 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Oct 27 08:28:36.149059 kubelet[2801]: I1027 08:28:36.148799 2801 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Oct 27 08:28:36.289613 kubelet[2801]: I1027 08:28:36.289556 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/79fa92ed407b5e2d436f1a55b934b82b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"79fa92ed407b5e2d436f1a55b934b82b\") " pod="kube-system/kube-apiserver-localhost"
Oct 27 08:28:36.289613 kubelet[2801]: I1027 08:28:36.289610 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/79fa92ed407b5e2d436f1a55b934b82b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"79fa92ed407b5e2d436f1a55b934b82b\") " pod="kube-system/kube-apiserver-localhost"
Oct 27 08:28:36.289613 kubelet[2801]: I1027 08:28:36.289627 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Oct 27 08:28:36.289851 kubelet[2801]: I1027 08:28:36.289644 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Oct 27 08:28:36.289851 kubelet[2801]: I1027 08:28:36.289660 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Oct 27 08:28:36.289851 kubelet[2801]: I1027 08:28:36.289694 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost"
Oct 27 08:28:36.289851 kubelet[2801]: I1027 08:28:36.289708 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/79fa92ed407b5e2d436f1a55b934b82b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"79fa92ed407b5e2d436f1a55b934b82b\") " pod="kube-system/kube-apiserver-localhost"
Oct 27 08:28:36.289851 kubelet[2801]: I1027 08:28:36.289723 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Oct 27 08:28:36.290013 kubelet[2801]: I1027 08:28:36.289738 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Oct 27 08:28:36.736706 kubelet[2801]: E1027 08:28:36.736422 2801 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Oct 27 08:28:36.736706 kubelet[2801]: E1027 08:28:36.736623 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 27 08:28:36.736890 kubelet[2801]: E1027 08:28:36.736753 2801 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Oct 27 08:28:36.737034 kubelet[2801]: E1027 08:28:36.736998 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 27 08:28:36.737123 kubelet[2801]: E1027 08:28:36.737000 2801 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Oct 27
08:28:36.737211 kubelet[2801]: E1027 08:28:36.737177 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:28:36.738016 kubelet[2801]: I1027 08:28:36.737972 2801 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Oct 27 08:28:36.738144 kubelet[2801]: I1027 08:28:36.738047 2801 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 27 08:28:36.967392 kubelet[2801]: I1027 08:28:36.967331 2801 apiserver.go:52] "Watching apiserver" Oct 27 08:28:36.989201 kubelet[2801]: I1027 08:28:36.989057 2801 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 27 08:28:37.017909 kubelet[2801]: I1027 08:28:37.017693 2801 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 27 08:28:37.017996 kubelet[2801]: E1027 08:28:37.017879 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:28:37.018236 kubelet[2801]: E1027 08:28:37.018196 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:28:37.458619 kubelet[2801]: E1027 08:28:37.458559 2801 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 27 08:28:37.458757 kubelet[2801]: E1027 08:28:37.458711 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:28:37.459517 kubelet[2801]: I1027 08:28:37.459440 2801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-apiserver-localhost" podStartSLOduration=5.459419218 podStartE2EDuration="5.459419218s" podCreationTimestamp="2025-10-27 08:28:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 08:28:37.457813634 +0000 UTC m=+1.554692718" watchObservedRunningTime="2025-10-27 08:28:37.459419218 +0000 UTC m=+1.556298302" Oct 27 08:28:37.913906 kubelet[2801]: I1027 08:28:37.913816 2801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=5.913803244 podStartE2EDuration="5.913803244s" podCreationTimestamp="2025-10-27 08:28:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 08:28:37.719672457 +0000 UTC m=+1.816551541" watchObservedRunningTime="2025-10-27 08:28:37.913803244 +0000 UTC m=+2.010682328" Oct 27 08:28:37.913906 kubelet[2801]: I1027 08:28:37.913893 2801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.913890297 podStartE2EDuration="2.913890297s" podCreationTimestamp="2025-10-27 08:28:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 08:28:37.910206651 +0000 UTC m=+2.007085735" watchObservedRunningTime="2025-10-27 08:28:37.913890297 +0000 UTC m=+2.010769381" Oct 27 08:28:38.019890 kubelet[2801]: E1027 08:28:38.019848 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:28:38.020498 kubelet[2801]: E1027 08:28:38.020464 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line 
is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:28:39.021435 kubelet[2801]: E1027 08:28:39.021387 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:28:39.543960 kubelet[2801]: I1027 08:28:39.543906 2801 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 27 08:28:39.544320 containerd[1635]: time="2025-10-27T08:28:39.544247305Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 27 08:28:39.545071 kubelet[2801]: I1027 08:28:39.544450 2801 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 27 08:28:40.956929 systemd[1]: Created slice kubepods-besteffort-podaa81fe10_8583_48c3_b379_1cd863df626e.slice - libcontainer container kubepods-besteffort-podaa81fe10_8583_48c3_b379_1cd863df626e.slice. Oct 27 08:28:41.014930 kubelet[2801]: I1027 08:28:41.014880 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aa81fe10-8583-48c3-b379-1cd863df626e-xtables-lock\") pod \"kube-proxy-r7glh\" (UID: \"aa81fe10-8583-48c3-b379-1cd863df626e\") " pod="kube-system/kube-proxy-r7glh" Oct 27 08:28:41.014930 kubelet[2801]: I1027 08:28:41.014915 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wq24s\" (UniqueName: \"kubernetes.io/projected/aa81fe10-8583-48c3-b379-1cd863df626e-kube-api-access-wq24s\") pod \"kube-proxy-r7glh\" (UID: \"aa81fe10-8583-48c3-b379-1cd863df626e\") " pod="kube-system/kube-proxy-r7glh" Oct 27 08:28:41.016269 kubelet[2801]: I1027 08:28:41.015493 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: 
\"kubernetes.io/host-path/b52ae66f-be75-4627-af32-051a3358fbc2-var-lib-calico\") pod \"tigera-operator-7dcd859c48-jglg2\" (UID: \"b52ae66f-be75-4627-af32-051a3358fbc2\") " pod="tigera-operator/tigera-operator-7dcd859c48-jglg2" Oct 27 08:28:41.016269 kubelet[2801]: I1027 08:28:41.015517 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/aa81fe10-8583-48c3-b379-1cd863df626e-kube-proxy\") pod \"kube-proxy-r7glh\" (UID: \"aa81fe10-8583-48c3-b379-1cd863df626e\") " pod="kube-system/kube-proxy-r7glh" Oct 27 08:28:41.016269 kubelet[2801]: I1027 08:28:41.015530 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aa81fe10-8583-48c3-b379-1cd863df626e-lib-modules\") pod \"kube-proxy-r7glh\" (UID: \"aa81fe10-8583-48c3-b379-1cd863df626e\") " pod="kube-system/kube-proxy-r7glh" Oct 27 08:28:41.016269 kubelet[2801]: I1027 08:28:41.015567 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qclrn\" (UniqueName: \"kubernetes.io/projected/b52ae66f-be75-4627-af32-051a3358fbc2-kube-api-access-qclrn\") pod \"tigera-operator-7dcd859c48-jglg2\" (UID: \"b52ae66f-be75-4627-af32-051a3358fbc2\") " pod="tigera-operator/tigera-operator-7dcd859c48-jglg2" Oct 27 08:28:41.020618 systemd[1]: Created slice kubepods-besteffort-podb52ae66f_be75_4627_af32_051a3358fbc2.slice - libcontainer container kubepods-besteffort-podb52ae66f_be75_4627_af32_051a3358fbc2.slice. 
Oct 27 08:28:41.270100 kubelet[2801]: E1027 08:28:41.270053 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:28:41.270576 containerd[1635]: time="2025-10-27T08:28:41.270535079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-r7glh,Uid:aa81fe10-8583-48c3-b379-1cd863df626e,Namespace:kube-system,Attempt:0,}" Oct 27 08:28:41.310709 containerd[1635]: time="2025-10-27T08:28:41.310630934Z" level=info msg="connecting to shim 472465239df6d41d8f547b484d667a0b1ee57c9c8671e2e8839ac9d373f04d4b" address="unix:///run/containerd/s/3db9106527cf5de109aee06bec97fb4348bb49755a2d285403f5d18f81685dd9" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:28:41.323869 containerd[1635]: time="2025-10-27T08:28:41.323823457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-jglg2,Uid:b52ae66f-be75-4627-af32-051a3358fbc2,Namespace:tigera-operator,Attempt:0,}" Oct 27 08:28:41.376079 systemd[1]: Started cri-containerd-472465239df6d41d8f547b484d667a0b1ee57c9c8671e2e8839ac9d373f04d4b.scope - libcontainer container 472465239df6d41d8f547b484d667a0b1ee57c9c8671e2e8839ac9d373f04d4b. 
Oct 27 08:28:41.402108 kubelet[2801]: E1027 08:28:41.402061 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:28:41.608924 containerd[1635]: time="2025-10-27T08:28:41.608769714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-r7glh,Uid:aa81fe10-8583-48c3-b379-1cd863df626e,Namespace:kube-system,Attempt:0,} returns sandbox id \"472465239df6d41d8f547b484d667a0b1ee57c9c8671e2e8839ac9d373f04d4b\"" Oct 27 08:28:41.609539 kubelet[2801]: E1027 08:28:41.609510 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:28:41.839128 containerd[1635]: time="2025-10-27T08:28:41.839069415Z" level=info msg="CreateContainer within sandbox \"472465239df6d41d8f547b484d667a0b1ee57c9c8671e2e8839ac9d373f04d4b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 27 08:28:42.027993 kubelet[2801]: E1027 08:28:42.027930 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:28:42.307849 containerd[1635]: time="2025-10-27T08:28:42.307724269Z" level=info msg="Container 02ea209ad9d5cd3fd21e1ff0f27eb77e72837b04c1cd3d5adf3a614480bfcba2: CDI devices from CRI Config.CDIDevices: []" Oct 27 08:28:42.333578 containerd[1635]: time="2025-10-27T08:28:42.333517407Z" level=info msg="CreateContainer within sandbox \"472465239df6d41d8f547b484d667a0b1ee57c9c8671e2e8839ac9d373f04d4b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"02ea209ad9d5cd3fd21e1ff0f27eb77e72837b04c1cd3d5adf3a614480bfcba2\"" Oct 27 08:28:42.334087 containerd[1635]: time="2025-10-27T08:28:42.334057788Z" level=info msg="StartContainer for 
\"02ea209ad9d5cd3fd21e1ff0f27eb77e72837b04c1cd3d5adf3a614480bfcba2\"" Oct 27 08:28:42.336598 containerd[1635]: time="2025-10-27T08:28:42.336560205Z" level=info msg="connecting to shim 02ea209ad9d5cd3fd21e1ff0f27eb77e72837b04c1cd3d5adf3a614480bfcba2" address="unix:///run/containerd/s/3db9106527cf5de109aee06bec97fb4348bb49755a2d285403f5d18f81685dd9" protocol=ttrpc version=3 Oct 27 08:28:42.345439 containerd[1635]: time="2025-10-27T08:28:42.345174961Z" level=info msg="connecting to shim 44f82ca8daac1ec028f5d23140ebb842021277154e10ec1949a2db05aaf033fc" address="unix:///run/containerd/s/7bdec453f6d2450f988773c2e36d29c1bd7bca719c05bc7268ce59890bbf9a84" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:28:42.362095 systemd[1]: Started cri-containerd-02ea209ad9d5cd3fd21e1ff0f27eb77e72837b04c1cd3d5adf3a614480bfcba2.scope - libcontainer container 02ea209ad9d5cd3fd21e1ff0f27eb77e72837b04c1cd3d5adf3a614480bfcba2. Oct 27 08:28:42.381091 systemd[1]: Started cri-containerd-44f82ca8daac1ec028f5d23140ebb842021277154e10ec1949a2db05aaf033fc.scope - libcontainer container 44f82ca8daac1ec028f5d23140ebb842021277154e10ec1949a2db05aaf033fc. 
Oct 27 08:28:42.810991 containerd[1635]: time="2025-10-27T08:28:42.810906837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-jglg2,Uid:b52ae66f-be75-4627-af32-051a3358fbc2,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"44f82ca8daac1ec028f5d23140ebb842021277154e10ec1949a2db05aaf033fc\"" Oct 27 08:28:42.811580 containerd[1635]: time="2025-10-27T08:28:42.811557962Z" level=info msg="StartContainer for \"02ea209ad9d5cd3fd21e1ff0f27eb77e72837b04c1cd3d5adf3a614480bfcba2\" returns successfully" Oct 27 08:28:42.812579 containerd[1635]: time="2025-10-27T08:28:42.812537346Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Oct 27 08:28:43.031431 kubelet[2801]: E1027 08:28:43.031391 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:28:44.034162 kubelet[2801]: E1027 08:28:44.034108 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:28:44.226284 update_engine[1611]: I20251027 08:28:44.226148 1611 update_attempter.cc:509] Updating boot flags... 
Oct 27 08:28:45.162164 kubelet[2801]: E1027 08:28:45.162050 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:28:45.176808 kubelet[2801]: I1027 08:28:45.176735 2801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-r7glh" podStartSLOduration=5.176715582 podStartE2EDuration="5.176715582s" podCreationTimestamp="2025-10-27 08:28:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 08:28:43.040360645 +0000 UTC m=+7.137239729" watchObservedRunningTime="2025-10-27 08:28:45.176715582 +0000 UTC m=+9.273594666" Oct 27 08:28:45.638171 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2223665341.mount: Deactivated successfully. Oct 27 08:28:46.039022 kubelet[2801]: E1027 08:28:46.038974 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:28:46.182547 containerd[1635]: time="2025-10-27T08:28:46.182459602Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:28:46.183396 containerd[1635]: time="2025-10-27T08:28:46.183355997Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Oct 27 08:28:46.184842 containerd[1635]: time="2025-10-27T08:28:46.184790789Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:28:46.187301 containerd[1635]: time="2025-10-27T08:28:46.187238318Z" level=info msg="ImageCreate event 
name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:28:46.187713 containerd[1635]: time="2025-10-27T08:28:46.187681039Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 3.375106196s" Oct 27 08:28:46.187713 containerd[1635]: time="2025-10-27T08:28:46.187710535Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Oct 27 08:28:46.193399 containerd[1635]: time="2025-10-27T08:28:46.193338670Z" level=info msg="CreateContainer within sandbox \"44f82ca8daac1ec028f5d23140ebb842021277154e10ec1949a2db05aaf033fc\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 27 08:28:46.202550 containerd[1635]: time="2025-10-27T08:28:46.202488551Z" level=info msg="Container 5cc188309630ae2298a48e1731f3974cff60cd9717d71e3d91bd737e7e2ef49e: CDI devices from CRI Config.CDIDevices: []" Oct 27 08:28:46.209698 containerd[1635]: time="2025-10-27T08:28:46.209659241Z" level=info msg="CreateContainer within sandbox \"44f82ca8daac1ec028f5d23140ebb842021277154e10ec1949a2db05aaf033fc\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"5cc188309630ae2298a48e1731f3974cff60cd9717d71e3d91bd737e7e2ef49e\"" Oct 27 08:28:46.210372 containerd[1635]: time="2025-10-27T08:28:46.210319112Z" level=info msg="StartContainer for \"5cc188309630ae2298a48e1731f3974cff60cd9717d71e3d91bd737e7e2ef49e\"" Oct 27 08:28:46.211486 containerd[1635]: time="2025-10-27T08:28:46.211449085Z" level=info msg="connecting to shim 
5cc188309630ae2298a48e1731f3974cff60cd9717d71e3d91bd737e7e2ef49e" address="unix:///run/containerd/s/7bdec453f6d2450f988773c2e36d29c1bd7bca719c05bc7268ce59890bbf9a84" protocol=ttrpc version=3 Oct 27 08:28:46.234091 systemd[1]: Started cri-containerd-5cc188309630ae2298a48e1731f3974cff60cd9717d71e3d91bd737e7e2ef49e.scope - libcontainer container 5cc188309630ae2298a48e1731f3974cff60cd9717d71e3d91bd737e7e2ef49e. Oct 27 08:28:46.267535 containerd[1635]: time="2025-10-27T08:28:46.267487955Z" level=info msg="StartContainer for \"5cc188309630ae2298a48e1731f3974cff60cd9717d71e3d91bd737e7e2ef49e\" returns successfully" Oct 27 08:28:47.049418 kubelet[2801]: I1027 08:28:47.049331 2801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-jglg2" podStartSLOduration=3.673121524 podStartE2EDuration="7.049311781s" podCreationTimestamp="2025-10-27 08:28:40 +0000 UTC" firstStartedPulling="2025-10-27 08:28:42.812280546 +0000 UTC m=+6.909159620" lastFinishedPulling="2025-10-27 08:28:46.188470803 +0000 UTC m=+10.285349877" observedRunningTime="2025-10-27 08:28:47.048723236 +0000 UTC m=+11.145602320" watchObservedRunningTime="2025-10-27 08:28:47.049311781 +0000 UTC m=+11.146190865" Oct 27 08:28:48.058438 kubelet[2801]: E1027 08:28:48.058389 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:28:49.047235 kubelet[2801]: E1027 08:28:49.045603 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:28:51.930004 sudo[1842]: pam_unix(sudo:session): session closed for user root Oct 27 08:28:51.931716 sshd[1841]: Connection closed by 10.0.0.1 port 45194 Oct 27 08:28:51.932836 sshd-session[1838]: pam_unix(sshd:session): session closed for user core Oct 27 08:28:51.938032 
systemd[1]: sshd@6-10.0.0.103:22-10.0.0.1:45194.service: Deactivated successfully. Oct 27 08:28:51.941604 systemd[1]: session-7.scope: Deactivated successfully. Oct 27 08:28:51.941834 systemd[1]: session-7.scope: Consumed 6.436s CPU time, 214.8M memory peak. Oct 27 08:28:51.943352 systemd-logind[1609]: Session 7 logged out. Waiting for processes to exit. Oct 27 08:28:51.944884 systemd-logind[1609]: Removed session 7. Oct 27 08:28:55.943434 systemd[1]: Created slice kubepods-besteffort-podc662dbb7_417a_43b4_a262_57d6fb3f1935.slice - libcontainer container kubepods-besteffort-podc662dbb7_417a_43b4_a262_57d6fb3f1935.slice. Oct 27 08:28:56.110076 kubelet[2801]: I1027 08:28:56.110014 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c662dbb7-417a-43b4-a262-57d6fb3f1935-tigera-ca-bundle\") pod \"calico-typha-86dc78f9f6-xtr9f\" (UID: \"c662dbb7-417a-43b4-a262-57d6fb3f1935\") " pod="calico-system/calico-typha-86dc78f9f6-xtr9f" Oct 27 08:28:56.110076 kubelet[2801]: I1027 08:28:56.110071 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/c662dbb7-417a-43b4-a262-57d6fb3f1935-typha-certs\") pod \"calico-typha-86dc78f9f6-xtr9f\" (UID: \"c662dbb7-417a-43b4-a262-57d6fb3f1935\") " pod="calico-system/calico-typha-86dc78f9f6-xtr9f" Oct 27 08:28:56.110654 kubelet[2801]: I1027 08:28:56.110103 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncm72\" (UniqueName: \"kubernetes.io/projected/c662dbb7-417a-43b4-a262-57d6fb3f1935-kube-api-access-ncm72\") pod \"calico-typha-86dc78f9f6-xtr9f\" (UID: \"c662dbb7-417a-43b4-a262-57d6fb3f1935\") " pod="calico-system/calico-typha-86dc78f9f6-xtr9f" Oct 27 08:28:56.140433 systemd[1]: Created slice kubepods-besteffort-poda561ddb4_60ff_4a07_8dea_d7e11ab7881b.slice - libcontainer 
container kubepods-besteffort-poda561ddb4_60ff_4a07_8dea_d7e11ab7881b.slice. Oct 27 08:28:56.210694 kubelet[2801]: I1027 08:28:56.210516 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a561ddb4-60ff-4a07-8dea-d7e11ab7881b-node-certs\") pod \"calico-node-x6h44\" (UID: \"a561ddb4-60ff-4a07-8dea-d7e11ab7881b\") " pod="calico-system/calico-node-x6h44" Oct 27 08:28:56.210694 kubelet[2801]: I1027 08:28:56.210571 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a561ddb4-60ff-4a07-8dea-d7e11ab7881b-policysync\") pod \"calico-node-x6h44\" (UID: \"a561ddb4-60ff-4a07-8dea-d7e11ab7881b\") " pod="calico-system/calico-node-x6h44" Oct 27 08:28:56.210694 kubelet[2801]: I1027 08:28:56.210587 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a561ddb4-60ff-4a07-8dea-d7e11ab7881b-tigera-ca-bundle\") pod \"calico-node-x6h44\" (UID: \"a561ddb4-60ff-4a07-8dea-d7e11ab7881b\") " pod="calico-system/calico-node-x6h44" Oct 27 08:28:56.210694 kubelet[2801]: I1027 08:28:56.210610 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a561ddb4-60ff-4a07-8dea-d7e11ab7881b-xtables-lock\") pod \"calico-node-x6h44\" (UID: \"a561ddb4-60ff-4a07-8dea-d7e11ab7881b\") " pod="calico-system/calico-node-x6h44" Oct 27 08:28:56.210973 kubelet[2801]: I1027 08:28:56.210710 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a561ddb4-60ff-4a07-8dea-d7e11ab7881b-var-run-calico\") pod \"calico-node-x6h44\" (UID: \"a561ddb4-60ff-4a07-8dea-d7e11ab7881b\") " pod="calico-system/calico-node-x6h44" Oct 27 
08:28:56.210973 kubelet[2801]: I1027 08:28:56.210737 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42t9w\" (UniqueName: \"kubernetes.io/projected/a561ddb4-60ff-4a07-8dea-d7e11ab7881b-kube-api-access-42t9w\") pod \"calico-node-x6h44\" (UID: \"a561ddb4-60ff-4a07-8dea-d7e11ab7881b\") " pod="calico-system/calico-node-x6h44" Oct 27 08:28:56.210973 kubelet[2801]: I1027 08:28:56.210754 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a561ddb4-60ff-4a07-8dea-d7e11ab7881b-cni-net-dir\") pod \"calico-node-x6h44\" (UID: \"a561ddb4-60ff-4a07-8dea-d7e11ab7881b\") " pod="calico-system/calico-node-x6h44" Oct 27 08:28:56.210973 kubelet[2801]: I1027 08:28:56.210769 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a561ddb4-60ff-4a07-8dea-d7e11ab7881b-flexvol-driver-host\") pod \"calico-node-x6h44\" (UID: \"a561ddb4-60ff-4a07-8dea-d7e11ab7881b\") " pod="calico-system/calico-node-x6h44" Oct 27 08:28:56.210973 kubelet[2801]: I1027 08:28:56.210784 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a561ddb4-60ff-4a07-8dea-d7e11ab7881b-var-lib-calico\") pod \"calico-node-x6h44\" (UID: \"a561ddb4-60ff-4a07-8dea-d7e11ab7881b\") " pod="calico-system/calico-node-x6h44" Oct 27 08:28:56.211101 kubelet[2801]: I1027 08:28:56.210802 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a561ddb4-60ff-4a07-8dea-d7e11ab7881b-cni-bin-dir\") pod \"calico-node-x6h44\" (UID: \"a561ddb4-60ff-4a07-8dea-d7e11ab7881b\") " pod="calico-system/calico-node-x6h44" Oct 27 08:28:56.211101 kubelet[2801]: I1027 08:28:56.210816 
2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a561ddb4-60ff-4a07-8dea-d7e11ab7881b-lib-modules\") pod \"calico-node-x6h44\" (UID: \"a561ddb4-60ff-4a07-8dea-d7e11ab7881b\") " pod="calico-system/calico-node-x6h44" Oct 27 08:28:56.211101 kubelet[2801]: I1027 08:28:56.210858 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a561ddb4-60ff-4a07-8dea-d7e11ab7881b-cni-log-dir\") pod \"calico-node-x6h44\" (UID: \"a561ddb4-60ff-4a07-8dea-d7e11ab7881b\") " pod="calico-system/calico-node-x6h44" Oct 27 08:28:56.262023 kubelet[2801]: E1027 08:28:56.261970 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:28:56.263013 containerd[1635]: time="2025-10-27T08:28:56.262975726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-86dc78f9f6-xtr9f,Uid:c662dbb7-417a-43b4-a262-57d6fb3f1935,Namespace:calico-system,Attempt:0,}" Oct 27 08:28:56.287271 containerd[1635]: time="2025-10-27T08:28:56.287049956Z" level=info msg="connecting to shim 3a5879d5c5b53c5ff0f858e681c1d4353fc93825fc8501da0ce0fc61ffdc08ef" address="unix:///run/containerd/s/c697a9ebaf7fd10ee2fde45d8a3b917262a4662688c60cae373ac245f335b04c" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:28:56.326994 kubelet[2801]: E1027 08:28:56.326550 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6585l" podUID="c11176f9-c15d-4aff-9a2f-9db19f9df938" Oct 27 08:28:56.327191 kubelet[2801]: E1027 08:28:56.327158 2801 driver-call.go:262] Failed to unmarshal 
output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:56.328116 kubelet[2801]: W1027 08:28:56.327494 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:56.328116 kubelet[2801]: E1027 08:28:56.327525 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:28:56.328116 kubelet[2801]: E1027 08:28:56.327920 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:56.328116 kubelet[2801]: W1027 08:28:56.327930 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:56.328116 kubelet[2801]: E1027 08:28:56.327977 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:28:56.328489 kubelet[2801]: E1027 08:28:56.328337 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:56.328489 kubelet[2801]: W1027 08:28:56.328346 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:56.329739 kubelet[2801]: E1027 08:28:56.329664 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:28:56.336167 systemd[1]: Started cri-containerd-3a5879d5c5b53c5ff0f858e681c1d4353fc93825fc8501da0ce0fc61ffdc08ef.scope - libcontainer container 3a5879d5c5b53c5ff0f858e681c1d4353fc93825fc8501da0ce0fc61ffdc08ef. Oct 27 08:28:56.339547 kubelet[2801]: E1027 08:28:56.339477 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:56.339547 kubelet[2801]: W1027 08:28:56.339496 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:56.339547 kubelet[2801]: E1027 08:28:56.339515 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:28:56.412283 kubelet[2801]: E1027 08:28:56.412239 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:56.412283 kubelet[2801]: W1027 08:28:56.412266 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:56.412283 kubelet[2801]: E1027 08:28:56.412288 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:28:56.412538 kubelet[2801]: E1027 08:28:56.412508 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:56.412538 kubelet[2801]: W1027 08:28:56.412519 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:56.412538 kubelet[2801]: E1027 08:28:56.412528 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:28:56.412765 kubelet[2801]: E1027 08:28:56.412750 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:56.412765 kubelet[2801]: W1027 08:28:56.412760 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:56.412813 kubelet[2801]: E1027 08:28:56.412768 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:28:56.413007 kubelet[2801]: E1027 08:28:56.412991 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:56.413007 kubelet[2801]: W1027 08:28:56.413002 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:56.413086 kubelet[2801]: E1027 08:28:56.413010 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:28:56.413218 kubelet[2801]: E1027 08:28:56.413195 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:56.413218 kubelet[2801]: W1027 08:28:56.413209 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:56.413218 kubelet[2801]: E1027 08:28:56.413217 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:28:56.413399 kubelet[2801]: E1027 08:28:56.413379 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:56.413399 kubelet[2801]: W1027 08:28:56.413390 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:56.413399 kubelet[2801]: E1027 08:28:56.413397 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:28:56.413561 kubelet[2801]: E1027 08:28:56.413547 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:56.413561 kubelet[2801]: W1027 08:28:56.413557 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:56.413609 kubelet[2801]: E1027 08:28:56.413565 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:28:56.413729 kubelet[2801]: E1027 08:28:56.413715 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:56.413729 kubelet[2801]: W1027 08:28:56.413725 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:56.413778 kubelet[2801]: E1027 08:28:56.413732 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:28:56.413911 kubelet[2801]: E1027 08:28:56.413896 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:56.413911 kubelet[2801]: W1027 08:28:56.413906 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:56.413988 kubelet[2801]: E1027 08:28:56.413914 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:28:56.414096 kubelet[2801]: E1027 08:28:56.414082 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:56.414096 kubelet[2801]: W1027 08:28:56.414092 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:56.414147 kubelet[2801]: E1027 08:28:56.414100 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:28:56.414259 kubelet[2801]: E1027 08:28:56.414245 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:56.414259 kubelet[2801]: W1027 08:28:56.414255 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:56.414299 kubelet[2801]: E1027 08:28:56.414263 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:28:56.414422 kubelet[2801]: E1027 08:28:56.414409 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:56.414422 kubelet[2801]: W1027 08:28:56.414418 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:56.414468 kubelet[2801]: E1027 08:28:56.414425 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:28:56.414595 kubelet[2801]: E1027 08:28:56.414582 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:56.414595 kubelet[2801]: W1027 08:28:56.414591 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:56.414639 kubelet[2801]: E1027 08:28:56.414599 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:28:56.414756 kubelet[2801]: E1027 08:28:56.414742 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:56.414756 kubelet[2801]: W1027 08:28:56.414752 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:56.414808 kubelet[2801]: E1027 08:28:56.414760 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:28:56.414922 kubelet[2801]: E1027 08:28:56.414909 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:56.414922 kubelet[2801]: W1027 08:28:56.414918 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:56.415012 kubelet[2801]: E1027 08:28:56.414926 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:28:56.415119 kubelet[2801]: E1027 08:28:56.415104 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:56.415119 kubelet[2801]: W1027 08:28:56.415116 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:56.415169 kubelet[2801]: E1027 08:28:56.415123 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:28:56.415312 kubelet[2801]: E1027 08:28:56.415297 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:56.415312 kubelet[2801]: W1027 08:28:56.415307 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:56.415365 kubelet[2801]: E1027 08:28:56.415315 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:28:56.415479 kubelet[2801]: E1027 08:28:56.415465 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:56.415479 kubelet[2801]: W1027 08:28:56.415475 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:56.415528 kubelet[2801]: E1027 08:28:56.415483 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:28:56.415642 kubelet[2801]: E1027 08:28:56.415629 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:56.415642 kubelet[2801]: W1027 08:28:56.415638 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:56.415687 kubelet[2801]: E1027 08:28:56.415645 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:28:56.415804 kubelet[2801]: E1027 08:28:56.415791 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:56.415804 kubelet[2801]: W1027 08:28:56.415800 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:56.415856 kubelet[2801]: E1027 08:28:56.415807 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:28:56.416125 kubelet[2801]: E1027 08:28:56.416109 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:56.416125 kubelet[2801]: W1027 08:28:56.416120 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:56.416188 kubelet[2801]: E1027 08:28:56.416129 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:28:56.416188 kubelet[2801]: I1027 08:28:56.416156 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c11176f9-c15d-4aff-9a2f-9db19f9df938-kubelet-dir\") pod \"csi-node-driver-6585l\" (UID: \"c11176f9-c15d-4aff-9a2f-9db19f9df938\") " pod="calico-system/csi-node-driver-6585l" Oct 27 08:28:56.416421 kubelet[2801]: E1027 08:28:56.416387 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:56.416454 kubelet[2801]: W1027 08:28:56.416420 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:56.416454 kubelet[2801]: E1027 08:28:56.416448 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:28:56.416512 kubelet[2801]: I1027 08:28:56.416493 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prh88\" (UniqueName: \"kubernetes.io/projected/c11176f9-c15d-4aff-9a2f-9db19f9df938-kube-api-access-prh88\") pod \"csi-node-driver-6585l\" (UID: \"c11176f9-c15d-4aff-9a2f-9db19f9df938\") " pod="calico-system/csi-node-driver-6585l" Oct 27 08:28:56.416735 kubelet[2801]: E1027 08:28:56.416715 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:56.416735 kubelet[2801]: W1027 08:28:56.416733 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:56.416800 kubelet[2801]: E1027 08:28:56.416748 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:28:56.416991 kubelet[2801]: E1027 08:28:56.416974 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:56.416991 kubelet[2801]: W1027 08:28:56.416987 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:56.417051 kubelet[2801]: E1027 08:28:56.416998 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:28:56.417271 kubelet[2801]: E1027 08:28:56.417245 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:56.417271 kubelet[2801]: W1027 08:28:56.417260 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:56.417325 kubelet[2801]: E1027 08:28:56.417271 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:28:56.417487 kubelet[2801]: E1027 08:28:56.417470 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:56.417487 kubelet[2801]: W1027 08:28:56.417484 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:56.417533 kubelet[2801]: E1027 08:28:56.417495 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:28:56.417707 kubelet[2801]: E1027 08:28:56.417691 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:56.417707 kubelet[2801]: W1027 08:28:56.417704 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:56.417756 kubelet[2801]: E1027 08:28:56.417713 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:28:56.417756 kubelet[2801]: I1027 08:28:56.417739 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c11176f9-c15d-4aff-9a2f-9db19f9df938-registration-dir\") pod \"csi-node-driver-6585l\" (UID: \"c11176f9-c15d-4aff-9a2f-9db19f9df938\") " pod="calico-system/csi-node-driver-6585l" Oct 27 08:28:56.418083 kubelet[2801]: E1027 08:28:56.418043 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:56.418083 kubelet[2801]: W1027 08:28:56.418074 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:56.418218 kubelet[2801]: E1027 08:28:56.418101 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:28:56.418218 kubelet[2801]: I1027 08:28:56.418142 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c11176f9-c15d-4aff-9a2f-9db19f9df938-varrun\") pod \"csi-node-driver-6585l\" (UID: \"c11176f9-c15d-4aff-9a2f-9db19f9df938\") " pod="calico-system/csi-node-driver-6585l" Oct 27 08:28:56.418355 kubelet[2801]: E1027 08:28:56.418323 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:56.418355 kubelet[2801]: W1027 08:28:56.418349 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:56.418433 kubelet[2801]: E1027 08:28:56.418360 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:28:56.418543 kubelet[2801]: E1027 08:28:56.418529 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:56.418543 kubelet[2801]: W1027 08:28:56.418539 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:56.418589 kubelet[2801]: E1027 08:28:56.418546 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:28:56.418711 kubelet[2801]: E1027 08:28:56.418696 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:56.418711 kubelet[2801]: W1027 08:28:56.418706 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:56.418756 kubelet[2801]: E1027 08:28:56.418713 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:28:56.418756 kubelet[2801]: I1027 08:28:56.418731 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c11176f9-c15d-4aff-9a2f-9db19f9df938-socket-dir\") pod \"csi-node-driver-6585l\" (UID: \"c11176f9-c15d-4aff-9a2f-9db19f9df938\") " pod="calico-system/csi-node-driver-6585l" Oct 27 08:28:56.418911 kubelet[2801]: E1027 08:28:56.418893 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:56.418911 kubelet[2801]: W1027 08:28:56.418907 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:56.418991 kubelet[2801]: E1027 08:28:56.418917 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:28:56.419113 kubelet[2801]: E1027 08:28:56.419097 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:56.419113 kubelet[2801]: W1027 08:28:56.419108 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:56.419170 kubelet[2801]: E1027 08:28:56.419117 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:28:56.419308 kubelet[2801]: E1027 08:28:56.419292 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:56.419308 kubelet[2801]: W1027 08:28:56.419304 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:56.419356 kubelet[2801]: E1027 08:28:56.419312 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:28:56.419492 kubelet[2801]: E1027 08:28:56.419477 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:56.419492 kubelet[2801]: W1027 08:28:56.419487 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:56.419559 kubelet[2801]: E1027 08:28:56.419495 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:28:56.443732 kubelet[2801]: E1027 08:28:56.443668 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:28:56.444412 containerd[1635]: time="2025-10-27T08:28:56.444342196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-x6h44,Uid:a561ddb4-60ff-4a07-8dea-d7e11ab7881b,Namespace:calico-system,Attempt:0,}" Oct 27 08:28:56.520171 kubelet[2801]: E1027 08:28:56.520110 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:56.520171 kubelet[2801]: W1027 08:28:56.520143 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:56.520171 kubelet[2801]: E1027 08:28:56.520170 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:28:56.520462 kubelet[2801]: E1027 08:28:56.520436 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:56.520462 kubelet[2801]: W1027 08:28:56.520452 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:56.520520 kubelet[2801]: E1027 08:28:56.520465 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:28:56.520733 kubelet[2801]: E1027 08:28:56.520699 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:56.520733 kubelet[2801]: W1027 08:28:56.520715 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:56.520733 kubelet[2801]: E1027 08:28:56.520726 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:28:56.521172 kubelet[2801]: E1027 08:28:56.521130 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:56.521172 kubelet[2801]: W1027 08:28:56.521160 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:56.521251 kubelet[2801]: E1027 08:28:56.521185 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:28:56.521385 kubelet[2801]: E1027 08:28:56.521363 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:56.521385 kubelet[2801]: W1027 08:28:56.521373 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:56.521385 kubelet[2801]: E1027 08:28:56.521382 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:28:56.521594 kubelet[2801]: E1027 08:28:56.521571 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:56.521594 kubelet[2801]: W1027 08:28:56.521582 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:56.521594 kubelet[2801]: E1027 08:28:56.521590 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:28:56.521896 kubelet[2801]: E1027 08:28:56.521860 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:56.521896 kubelet[2801]: W1027 08:28:56.521880 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:56.521896 kubelet[2801]: E1027 08:28:56.521895 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:28:56.522239 kubelet[2801]: E1027 08:28:56.522210 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:56.522269 kubelet[2801]: W1027 08:28:56.522238 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:56.522292 kubelet[2801]: E1027 08:28:56.522266 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:28:56.522556 kubelet[2801]: E1027 08:28:56.522538 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:56.522556 kubelet[2801]: W1027 08:28:56.522553 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:56.522609 kubelet[2801]: E1027 08:28:56.522565 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:28:56.522811 kubelet[2801]: E1027 08:28:56.522793 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:56.522811 kubelet[2801]: W1027 08:28:56.522808 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:56.522860 kubelet[2801]: E1027 08:28:56.522818 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:28:56.523097 kubelet[2801]: E1027 08:28:56.523081 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:56.523097 kubelet[2801]: W1027 08:28:56.523094 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:56.523151 kubelet[2801]: E1027 08:28:56.523105 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:28:56.523325 kubelet[2801]: E1027 08:28:56.523309 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:56.523325 kubelet[2801]: W1027 08:28:56.523322 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:56.523372 kubelet[2801]: E1027 08:28:56.523332 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:28:56.523536 kubelet[2801]: E1027 08:28:56.523516 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:56.523536 kubelet[2801]: W1027 08:28:56.523532 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:56.523590 kubelet[2801]: E1027 08:28:56.523543 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:28:56.523784 kubelet[2801]: E1027 08:28:56.523766 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:56.523784 kubelet[2801]: W1027 08:28:56.523780 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:56.523829 kubelet[2801]: E1027 08:28:56.523789 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:28:56.524046 kubelet[2801]: E1027 08:28:56.524031 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:56.524046 kubelet[2801]: W1027 08:28:56.524042 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:56.524091 kubelet[2801]: E1027 08:28:56.524052 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:28:56.524257 kubelet[2801]: E1027 08:28:56.524241 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:56.524257 kubelet[2801]: W1027 08:28:56.524255 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:56.524302 kubelet[2801]: E1027 08:28:56.524265 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:28:56.524509 kubelet[2801]: E1027 08:28:56.524493 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:56.524509 kubelet[2801]: W1027 08:28:56.524506 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:56.524556 kubelet[2801]: E1027 08:28:56.524517 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:28:56.524729 kubelet[2801]: E1027 08:28:56.524714 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:56.524729 kubelet[2801]: W1027 08:28:56.524725 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:56.524775 kubelet[2801]: E1027 08:28:56.524733 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:28:56.524919 kubelet[2801]: E1027 08:28:56.524904 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:56.524919 kubelet[2801]: W1027 08:28:56.524914 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:56.524973 kubelet[2801]: E1027 08:28:56.524921 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:28:56.525136 kubelet[2801]: E1027 08:28:56.525121 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:56.525136 kubelet[2801]: W1027 08:28:56.525131 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:56.525178 kubelet[2801]: E1027 08:28:56.525139 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:28:56.525379 kubelet[2801]: E1027 08:28:56.525362 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:56.525379 kubelet[2801]: W1027 08:28:56.525376 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:56.525424 kubelet[2801]: E1027 08:28:56.525386 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:28:56.525608 kubelet[2801]: E1027 08:28:56.525591 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:56.525608 kubelet[2801]: W1027 08:28:56.525604 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:56.525664 kubelet[2801]: E1027 08:28:56.525614 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:28:56.525834 kubelet[2801]: E1027 08:28:56.525818 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:56.525834 kubelet[2801]: W1027 08:28:56.525831 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:56.525880 kubelet[2801]: E1027 08:28:56.525841 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:28:56.526131 kubelet[2801]: E1027 08:28:56.526114 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:56.526131 kubelet[2801]: W1027 08:28:56.526128 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:56.526189 kubelet[2801]: E1027 08:28:56.526139 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:28:56.526501 kubelet[2801]: E1027 08:28:56.526484 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:56.526532 kubelet[2801]: W1027 08:28:56.526509 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:56.526532 kubelet[2801]: E1027 08:28:56.526519 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:28:56.530106 containerd[1635]: time="2025-10-27T08:28:56.530049788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-86dc78f9f6-xtr9f,Uid:c662dbb7-417a-43b4-a262-57d6fb3f1935,Namespace:calico-system,Attempt:0,} returns sandbox id \"3a5879d5c5b53c5ff0f858e681c1d4353fc93825fc8501da0ce0fc61ffdc08ef\"" Oct 27 08:28:56.530908 kubelet[2801]: E1027 08:28:56.530805 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:28:56.532605 containerd[1635]: time="2025-10-27T08:28:56.532525439Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Oct 27 08:28:56.671828 kubelet[2801]: E1027 08:28:56.671786 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:56.671828 kubelet[2801]: W1027 08:28:56.671814 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:56.671828 kubelet[2801]: E1027 08:28:56.671837 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:28:56.695762 containerd[1635]: time="2025-10-27T08:28:56.695684081Z" level=info msg="connecting to shim 100bfdb77582d1817560d61586a9294f448099ed97765ecb78cadb777622c1da" address="unix:///run/containerd/s/496d8e7fdc368353dc2d8bf06fd704671880cb76dc6e4eab913d8b939682b817" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:28:56.736268 systemd[1]: Started cri-containerd-100bfdb77582d1817560d61586a9294f448099ed97765ecb78cadb777622c1da.scope - libcontainer container 100bfdb77582d1817560d61586a9294f448099ed97765ecb78cadb777622c1da. 
Oct 27 08:28:56.772555 containerd[1635]: time="2025-10-27T08:28:56.772394781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-x6h44,Uid:a561ddb4-60ff-4a07-8dea-d7e11ab7881b,Namespace:calico-system,Attempt:0,} returns sandbox id \"100bfdb77582d1817560d61586a9294f448099ed97765ecb78cadb777622c1da\"" Oct 27 08:28:56.773218 kubelet[2801]: E1027 08:28:56.773187 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:28:58.008818 kubelet[2801]: E1027 08:28:58.008739 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6585l" podUID="c11176f9-c15d-4aff-9a2f-9db19f9df938" Oct 27 08:28:58.127291 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2498673567.mount: Deactivated successfully. 
Oct 27 08:28:58.572625 containerd[1635]: time="2025-10-27T08:28:58.572536817Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:28:58.573478 containerd[1635]: time="2025-10-27T08:28:58.573432745Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Oct 27 08:28:58.574746 containerd[1635]: time="2025-10-27T08:28:58.574694885Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:28:58.576916 containerd[1635]: time="2025-10-27T08:28:58.576872634Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:28:58.577510 containerd[1635]: time="2025-10-27T08:28:58.577446933Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.044839364s" Oct 27 08:28:58.577510 containerd[1635]: time="2025-10-27T08:28:58.577496474Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Oct 27 08:28:58.578632 containerd[1635]: time="2025-10-27T08:28:58.578470483Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Oct 27 08:28:58.593687 containerd[1635]: time="2025-10-27T08:28:58.593641546Z" level=info msg="CreateContainer within sandbox \"3a5879d5c5b53c5ff0f858e681c1d4353fc93825fc8501da0ce0fc61ffdc08ef\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 27 08:28:58.601600 containerd[1635]: time="2025-10-27T08:28:58.601522819Z" level=info msg="Container 9484aeabcd961ba08bea53ae1bed94d9ae2a3963f8e120a4e700cfee462e33b1: CDI devices from CRI Config.CDIDevices: []" Oct 27 08:28:58.610296 containerd[1635]: time="2025-10-27T08:28:58.610235548Z" level=info msg="CreateContainer within sandbox \"3a5879d5c5b53c5ff0f858e681c1d4353fc93825fc8501da0ce0fc61ffdc08ef\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"9484aeabcd961ba08bea53ae1bed94d9ae2a3963f8e120a4e700cfee462e33b1\"" Oct 27 08:28:58.611986 containerd[1635]: time="2025-10-27T08:28:58.611209718Z" level=info msg="StartContainer for \"9484aeabcd961ba08bea53ae1bed94d9ae2a3963f8e120a4e700cfee462e33b1\"" Oct 27 08:28:58.612769 containerd[1635]: time="2025-10-27T08:28:58.612731120Z" level=info msg="connecting to shim 9484aeabcd961ba08bea53ae1bed94d9ae2a3963f8e120a4e700cfee462e33b1" address="unix:///run/containerd/s/c697a9ebaf7fd10ee2fde45d8a3b917262a4662688c60cae373ac245f335b04c" protocol=ttrpc version=3 Oct 27 08:28:58.638138 systemd[1]: Started cri-containerd-9484aeabcd961ba08bea53ae1bed94d9ae2a3963f8e120a4e700cfee462e33b1.scope - libcontainer container 9484aeabcd961ba08bea53ae1bed94d9ae2a3963f8e120a4e700cfee462e33b1. 
Oct 27 08:28:58.693260 containerd[1635]: time="2025-10-27T08:28:58.693168096Z" level=info msg="StartContainer for \"9484aeabcd961ba08bea53ae1bed94d9ae2a3963f8e120a4e700cfee462e33b1\" returns successfully" Oct 27 08:28:59.068541 kubelet[2801]: E1027 08:28:59.068504 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:28:59.130455 kubelet[2801]: E1027 08:28:59.130421 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:59.130455 kubelet[2801]: W1027 08:28:59.130444 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:59.130601 kubelet[2801]: E1027 08:28:59.130465 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:28:59.130655 kubelet[2801]: E1027 08:28:59.130640 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:59.130655 kubelet[2801]: W1027 08:28:59.130649 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:59.130704 kubelet[2801]: E1027 08:28:59.130658 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:28:59.130830 kubelet[2801]: E1027 08:28:59.130815 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:59.130830 kubelet[2801]: W1027 08:28:59.130824 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:59.130879 kubelet[2801]: E1027 08:28:59.130831 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:28:59.131053 kubelet[2801]: E1027 08:28:59.131039 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:59.131053 kubelet[2801]: W1027 08:28:59.131049 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:59.131108 kubelet[2801]: E1027 08:28:59.131058 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:28:59.131249 kubelet[2801]: E1027 08:28:59.131234 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:59.131249 kubelet[2801]: W1027 08:28:59.131242 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:59.131294 kubelet[2801]: E1027 08:28:59.131250 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:28:59.131438 kubelet[2801]: E1027 08:28:59.131411 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:59.131438 kubelet[2801]: W1027 08:28:59.131422 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:59.131438 kubelet[2801]: E1027 08:28:59.131432 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:28:59.131599 kubelet[2801]: E1027 08:28:59.131584 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:59.131599 kubelet[2801]: W1027 08:28:59.131593 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:59.131669 kubelet[2801]: E1027 08:28:59.131600 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:28:59.131766 kubelet[2801]: E1027 08:28:59.131751 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:59.131766 kubelet[2801]: W1027 08:28:59.131759 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:59.131816 kubelet[2801]: E1027 08:28:59.131767 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:28:59.131948 kubelet[2801]: E1027 08:28:59.131921 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:59.131948 kubelet[2801]: W1027 08:28:59.131930 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:59.132000 kubelet[2801]: E1027 08:28:59.131950 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:28:59.132116 kubelet[2801]: E1027 08:28:59.132101 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:59.132116 kubelet[2801]: W1027 08:28:59.132110 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:59.132166 kubelet[2801]: E1027 08:28:59.132117 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:28:59.132277 kubelet[2801]: E1027 08:28:59.132263 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:59.132277 kubelet[2801]: W1027 08:28:59.132272 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:59.132323 kubelet[2801]: E1027 08:28:59.132279 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:28:59.132452 kubelet[2801]: E1027 08:28:59.132437 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:59.132452 kubelet[2801]: W1027 08:28:59.132445 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:59.132555 kubelet[2801]: E1027 08:28:59.132453 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:28:59.132651 kubelet[2801]: E1027 08:28:59.132637 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:59.132651 kubelet[2801]: W1027 08:28:59.132645 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:59.132691 kubelet[2801]: E1027 08:28:59.132653 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:28:59.132822 kubelet[2801]: E1027 08:28:59.132808 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:59.132822 kubelet[2801]: W1027 08:28:59.132816 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:59.132875 kubelet[2801]: E1027 08:28:59.132824 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:28:59.133000 kubelet[2801]: E1027 08:28:59.132988 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:59.133000 kubelet[2801]: W1027 08:28:59.132996 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:59.133050 kubelet[2801]: E1027 08:28:59.133004 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:28:59.141374 kubelet[2801]: E1027 08:28:59.141340 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:59.141374 kubelet[2801]: W1027 08:28:59.141361 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:59.141477 kubelet[2801]: E1027 08:28:59.141381 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:28:59.141598 kubelet[2801]: E1027 08:28:59.141571 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:59.141598 kubelet[2801]: W1027 08:28:59.141582 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:59.141598 kubelet[2801]: E1027 08:28:59.141590 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:28:59.141851 kubelet[2801]: E1027 08:28:59.141826 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:59.141851 kubelet[2801]: W1027 08:28:59.141840 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:59.141907 kubelet[2801]: E1027 08:28:59.141852 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:28:59.142058 kubelet[2801]: E1027 08:28:59.142042 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:59.142058 kubelet[2801]: W1027 08:28:59.142052 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:59.142114 kubelet[2801]: E1027 08:28:59.142060 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:28:59.142243 kubelet[2801]: E1027 08:28:59.142228 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:59.142243 kubelet[2801]: W1027 08:28:59.142237 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:59.142294 kubelet[2801]: E1027 08:28:59.142244 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:28:59.145288 kubelet[2801]: E1027 08:28:59.145271 2801 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:28:59.145288 kubelet[2801]: W1027 08:28:59.145284 2801 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:28:59.145348 kubelet[2801]: E1027 08:28:59.145292 2801 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:28:59.965113 containerd[1635]: time="2025-10-27T08:28:59.965067672Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:28:59.965807 containerd[1635]: time="2025-10-27T08:28:59.965739543Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Oct 27 08:28:59.966862 containerd[1635]: time="2025-10-27T08:28:59.966813916Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:28:59.969003 containerd[1635]: time="2025-10-27T08:28:59.968973384Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:28:59.969494 containerd[1635]: time="2025-10-27T08:28:59.969459977Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.39095382s" Oct 27 08:28:59.969494 containerd[1635]: time="2025-10-27T08:28:59.969489156Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Oct 27 08:28:59.973227 containerd[1635]: time="2025-10-27T08:28:59.973188578Z" level=info msg="CreateContainer within sandbox \"100bfdb77582d1817560d61586a9294f448099ed97765ecb78cadb777622c1da\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 27 08:28:59.982923 containerd[1635]: time="2025-10-27T08:28:59.982866421Z" level=info msg="Container 9d26810a38b60af5b2b31dcde0273a5dec54ee75186193c0241d44988f9e773c: CDI devices from CRI Config.CDIDevices: []" Oct 27 08:28:59.995413 containerd[1635]: time="2025-10-27T08:28:59.995341943Z" level=info msg="CreateContainer within sandbox \"100bfdb77582d1817560d61586a9294f448099ed97765ecb78cadb777622c1da\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"9d26810a38b60af5b2b31dcde0273a5dec54ee75186193c0241d44988f9e773c\"" Oct 27 08:28:59.996030 containerd[1635]: time="2025-10-27T08:28:59.995999706Z" level=info msg="StartContainer for \"9d26810a38b60af5b2b31dcde0273a5dec54ee75186193c0241d44988f9e773c\"" Oct 27 08:28:59.997912 containerd[1635]: time="2025-10-27T08:28:59.997837296Z" level=info msg="connecting to shim 9d26810a38b60af5b2b31dcde0273a5dec54ee75186193c0241d44988f9e773c" address="unix:///run/containerd/s/496d8e7fdc368353dc2d8bf06fd704671880cb76dc6e4eab913d8b939682b817" protocol=ttrpc version=3 Oct 27 08:29:00.009589 kubelet[2801]: E1027 08:29:00.009212 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6585l" podUID="c11176f9-c15d-4aff-9a2f-9db19f9df938" Oct 27 08:29:00.025248 systemd[1]: Started cri-containerd-9d26810a38b60af5b2b31dcde0273a5dec54ee75186193c0241d44988f9e773c.scope - libcontainer container 9d26810a38b60af5b2b31dcde0273a5dec54ee75186193c0241d44988f9e773c. Oct 27 08:29:00.069854 containerd[1635]: time="2025-10-27T08:29:00.069795873Z" level=info msg="StartContainer for \"9d26810a38b60af5b2b31dcde0273a5dec54ee75186193c0241d44988f9e773c\" returns successfully" Oct 27 08:29:00.073093 kubelet[2801]: I1027 08:29:00.072565 2801 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 27 08:29:00.073093 kubelet[2801]: E1027 08:29:00.072971 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:29:00.092783 systemd[1]: cri-containerd-9d26810a38b60af5b2b31dcde0273a5dec54ee75186193c0241d44988f9e773c.scope: Deactivated successfully. Oct 27 08:29:00.093498 systemd[1]: cri-containerd-9d26810a38b60af5b2b31dcde0273a5dec54ee75186193c0241d44988f9e773c.scope: Consumed 48ms CPU time, 6.3M memory peak, 4.6M written to disk. 
Oct 27 08:29:00.096377 containerd[1635]: time="2025-10-27T08:29:00.096338293Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9d26810a38b60af5b2b31dcde0273a5dec54ee75186193c0241d44988f9e773c\" id:\"9d26810a38b60af5b2b31dcde0273a5dec54ee75186193c0241d44988f9e773c\" pid:3520 exited_at:{seconds:1761553740 nanos:95653261}" Oct 27 08:29:00.096493 containerd[1635]: time="2025-10-27T08:29:00.096428987Z" level=info msg="received exit event container_id:\"9d26810a38b60af5b2b31dcde0273a5dec54ee75186193c0241d44988f9e773c\" id:\"9d26810a38b60af5b2b31dcde0273a5dec54ee75186193c0241d44988f9e773c\" pid:3520 exited_at:{seconds:1761553740 nanos:95653261}" Oct 27 08:29:00.121392 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9d26810a38b60af5b2b31dcde0273a5dec54ee75186193c0241d44988f9e773c-rootfs.mount: Deactivated successfully. Oct 27 08:29:00.580497 kubelet[2801]: I1027 08:29:00.580179 2801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-86dc78f9f6-xtr9f" podStartSLOduration=3.534018741 podStartE2EDuration="5.580164014s" podCreationTimestamp="2025-10-27 08:28:55 +0000 UTC" firstStartedPulling="2025-10-27 08:28:56.532184712 +0000 UTC m=+20.629063806" lastFinishedPulling="2025-10-27 08:28:58.578329995 +0000 UTC m=+22.675209079" observedRunningTime="2025-10-27 08:28:59.078377363 +0000 UTC m=+23.175256437" watchObservedRunningTime="2025-10-27 08:29:00.580164014 +0000 UTC m=+24.677043098" Oct 27 08:29:01.076651 kubelet[2801]: E1027 08:29:01.076328 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:29:01.076651 kubelet[2801]: E1027 08:29:01.076453 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:29:01.077656 containerd[1635]: 
time="2025-10-27T08:29:01.077607474Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Oct 27 08:29:02.008204 kubelet[2801]: E1027 08:29:02.008147 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6585l" podUID="c11176f9-c15d-4aff-9a2f-9db19f9df938" Oct 27 08:29:02.077538 kubelet[2801]: E1027 08:29:02.077490 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:29:04.008332 kubelet[2801]: E1027 08:29:04.008236 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6585l" podUID="c11176f9-c15d-4aff-9a2f-9db19f9df938" Oct 27 08:29:04.102022 containerd[1635]: time="2025-10-27T08:29:04.101968440Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:29:04.102680 containerd[1635]: time="2025-10-27T08:29:04.102656036Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Oct 27 08:29:04.103836 containerd[1635]: time="2025-10-27T08:29:04.103786372Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:29:04.105633 containerd[1635]: time="2025-10-27T08:29:04.105600566Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:29:04.106214 containerd[1635]: time="2025-10-27T08:29:04.106175945Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.028508099s" Oct 27 08:29:04.106214 containerd[1635]: time="2025-10-27T08:29:04.106217970Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Oct 27 08:29:04.109658 containerd[1635]: time="2025-10-27T08:29:04.109614057Z" level=info msg="CreateContainer within sandbox \"100bfdb77582d1817560d61586a9294f448099ed97765ecb78cadb777622c1da\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 27 08:29:04.117758 containerd[1635]: time="2025-10-27T08:29:04.117726894Z" level=info msg="Container a8127905631005cb564516f6ff123bcd1849a65a5e8a0b0f0a55102877092572: CDI devices from CRI Config.CDIDevices: []" Oct 27 08:29:04.125910 containerd[1635]: time="2025-10-27T08:29:04.125864140Z" level=info msg="CreateContainer within sandbox \"100bfdb77582d1817560d61586a9294f448099ed97765ecb78cadb777622c1da\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a8127905631005cb564516f6ff123bcd1849a65a5e8a0b0f0a55102877092572\"" Oct 27 08:29:04.126466 containerd[1635]: time="2025-10-27T08:29:04.126438907Z" level=info msg="StartContainer for \"a8127905631005cb564516f6ff123bcd1849a65a5e8a0b0f0a55102877092572\"" Oct 27 08:29:04.127893 containerd[1635]: time="2025-10-27T08:29:04.127867399Z" level=info msg="connecting to shim a8127905631005cb564516f6ff123bcd1849a65a5e8a0b0f0a55102877092572" 
address="unix:///run/containerd/s/496d8e7fdc368353dc2d8bf06fd704671880cb76dc6e4eab913d8b939682b817" protocol=ttrpc version=3 Oct 27 08:29:04.148128 systemd[1]: Started cri-containerd-a8127905631005cb564516f6ff123bcd1849a65a5e8a0b0f0a55102877092572.scope - libcontainer container a8127905631005cb564516f6ff123bcd1849a65a5e8a0b0f0a55102877092572. Oct 27 08:29:04.192195 containerd[1635]: time="2025-10-27T08:29:04.192148588Z" level=info msg="StartContainer for \"a8127905631005cb564516f6ff123bcd1849a65a5e8a0b0f0a55102877092572\" returns successfully" Oct 27 08:29:05.085700 kubelet[2801]: E1027 08:29:05.085657 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:29:05.388633 systemd[1]: cri-containerd-a8127905631005cb564516f6ff123bcd1849a65a5e8a0b0f0a55102877092572.scope: Deactivated successfully. Oct 27 08:29:05.389230 systemd[1]: cri-containerd-a8127905631005cb564516f6ff123bcd1849a65a5e8a0b0f0a55102877092572.scope: Consumed 592ms CPU time, 177.3M memory peak, 3.3M read from disk, 171.3M written to disk. 
Oct 27 08:29:05.390711 containerd[1635]: time="2025-10-27T08:29:05.390667950Z" level=info msg="received exit event container_id:\"a8127905631005cb564516f6ff123bcd1849a65a5e8a0b0f0a55102877092572\" id:\"a8127905631005cb564516f6ff123bcd1849a65a5e8a0b0f0a55102877092572\" pid:3579 exited_at:{seconds:1761553745 nanos:390383383}" Oct 27 08:29:05.391195 containerd[1635]: time="2025-10-27T08:29:05.390681998Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a8127905631005cb564516f6ff123bcd1849a65a5e8a0b0f0a55102877092572\" id:\"a8127905631005cb564516f6ff123bcd1849a65a5e8a0b0f0a55102877092572\" pid:3579 exited_at:{seconds:1761553745 nanos:390383383}" Oct 27 08:29:05.415296 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a8127905631005cb564516f6ff123bcd1849a65a5e8a0b0f0a55102877092572-rootfs.mount: Deactivated successfully. Oct 27 08:29:05.662135 kubelet[2801]: I1027 08:29:05.661995 2801 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Oct 27 08:29:06.025024 systemd[1]: Created slice kubepods-besteffort-podc11176f9_c15d_4aff_9a2f_9db19f9df938.slice - libcontainer container kubepods-besteffort-podc11176f9_c15d_4aff_9a2f_9db19f9df938.slice. Oct 27 08:29:06.031182 containerd[1635]: time="2025-10-27T08:29:06.031063380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6585l,Uid:c11176f9-c15d-4aff-9a2f-9db19f9df938,Namespace:calico-system,Attempt:0,}" Oct 27 08:29:06.036820 systemd[1]: Created slice kubepods-besteffort-pod082222f6_7d6e_417e_ac7d_32f9df4dff89.slice - libcontainer container kubepods-besteffort-pod082222f6_7d6e_417e_ac7d_32f9df4dff89.slice. Oct 27 08:29:06.055569 systemd[1]: Created slice kubepods-burstable-pod3e3fd7e5_9a69_4009_b8ff_395503b2b2e3.slice - libcontainer container kubepods-burstable-pod3e3fd7e5_9a69_4009_b8ff_395503b2b2e3.slice. 
Oct 27 08:29:06.071183 systemd[1]: Created slice kubepods-besteffort-pod8a6b3613_c6a1_49d5_80fe_fe16a86ca9d2.slice - libcontainer container kubepods-besteffort-pod8a6b3613_c6a1_49d5_80fe_fe16a86ca9d2.slice. Oct 27 08:29:06.082739 systemd[1]: Created slice kubepods-burstable-podfe014e01_beff_4bf8_ab4a_6b42d97b928c.slice - libcontainer container kubepods-burstable-podfe014e01_beff_4bf8_ab4a_6b42d97b928c.slice. Oct 27 08:29:06.097571 kubelet[2801]: I1027 08:29:06.096622 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qm2tw\" (UniqueName: \"kubernetes.io/projected/8a6b3613-c6a1-49d5-80fe-fe16a86ca9d2-kube-api-access-qm2tw\") pod \"goldmane-666569f655-vqr2m\" (UID: \"8a6b3613-c6a1-49d5-80fe-fe16a86ca9d2\") " pod="calico-system/goldmane-666569f655-vqr2m" Oct 27 08:29:06.097571 kubelet[2801]: I1027 08:29:06.096667 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/082222f6-7d6e-417e-ac7d-32f9df4dff89-calico-apiserver-certs\") pod \"calico-apiserver-c64f56595-lp5c4\" (UID: \"082222f6-7d6e-417e-ac7d-32f9df4dff89\") " pod="calico-apiserver/calico-apiserver-c64f56595-lp5c4" Oct 27 08:29:06.097571 kubelet[2801]: I1027 08:29:06.096687 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3e3fd7e5-9a69-4009-b8ff-395503b2b2e3-config-volume\") pod \"coredns-674b8bbfcf-86jt5\" (UID: \"3e3fd7e5-9a69-4009-b8ff-395503b2b2e3\") " pod="kube-system/coredns-674b8bbfcf-86jt5" Oct 27 08:29:06.097571 kubelet[2801]: I1027 08:29:06.096705 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe014e01-beff-4bf8-ab4a-6b42d97b928c-config-volume\") pod \"coredns-674b8bbfcf-4sbmc\" (UID: 
\"fe014e01-beff-4bf8-ab4a-6b42d97b928c\") " pod="kube-system/coredns-674b8bbfcf-4sbmc" Oct 27 08:29:06.097571 kubelet[2801]: I1027 08:29:06.096724 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnbgk\" (UniqueName: \"kubernetes.io/projected/fe014e01-beff-4bf8-ab4a-6b42d97b928c-kube-api-access-lnbgk\") pod \"coredns-674b8bbfcf-4sbmc\" (UID: \"fe014e01-beff-4bf8-ab4a-6b42d97b928c\") " pod="kube-system/coredns-674b8bbfcf-4sbmc" Oct 27 08:29:06.098161 kubelet[2801]: I1027 08:29:06.096740 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncdw9\" (UniqueName: \"kubernetes.io/projected/70c109db-2874-4e1e-8da6-346addba097a-kube-api-access-ncdw9\") pod \"whisker-9466b7d98-hdc7h\" (UID: \"70c109db-2874-4e1e-8da6-346addba097a\") " pod="calico-system/whisker-9466b7d98-hdc7h" Oct 27 08:29:06.098161 kubelet[2801]: I1027 08:29:06.096756 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/8a6b3613-c6a1-49d5-80fe-fe16a86ca9d2-goldmane-key-pair\") pod \"goldmane-666569f655-vqr2m\" (UID: \"8a6b3613-c6a1-49d5-80fe-fe16a86ca9d2\") " pod="calico-system/goldmane-666569f655-vqr2m" Oct 27 08:29:06.098161 kubelet[2801]: I1027 08:29:06.096772 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70c109db-2874-4e1e-8da6-346addba097a-whisker-ca-bundle\") pod \"whisker-9466b7d98-hdc7h\" (UID: \"70c109db-2874-4e1e-8da6-346addba097a\") " pod="calico-system/whisker-9466b7d98-hdc7h" Oct 27 08:29:06.098161 kubelet[2801]: I1027 08:29:06.096795 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/9f02b196-86ab-47f3-85f7-4a69adbdcd03-calico-apiserver-certs\") pod \"calico-apiserver-c64f56595-lxn6t\" (UID: \"9f02b196-86ab-47f3-85f7-4a69adbdcd03\") " pod="calico-apiserver/calico-apiserver-c64f56595-lxn6t" Oct 27 08:29:06.098161 kubelet[2801]: I1027 08:29:06.096817 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a6b3613-c6a1-49d5-80fe-fe16a86ca9d2-config\") pod \"goldmane-666569f655-vqr2m\" (UID: \"8a6b3613-c6a1-49d5-80fe-fe16a86ca9d2\") " pod="calico-system/goldmane-666569f655-vqr2m" Oct 27 08:29:06.098335 kubelet[2801]: I1027 08:29:06.096840 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkzjz\" (UniqueName: \"kubernetes.io/projected/9f02b196-86ab-47f3-85f7-4a69adbdcd03-kube-api-access-vkzjz\") pod \"calico-apiserver-c64f56595-lxn6t\" (UID: \"9f02b196-86ab-47f3-85f7-4a69adbdcd03\") " pod="calico-apiserver/calico-apiserver-c64f56595-lxn6t" Oct 27 08:29:06.098335 kubelet[2801]: I1027 08:29:06.096860 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/58e52a6b-4ff7-410b-ae51-43b90491a215-tigera-ca-bundle\") pod \"calico-kube-controllers-9b5b6f478-9qcvc\" (UID: \"58e52a6b-4ff7-410b-ae51-43b90491a215\") " pod="calico-system/calico-kube-controllers-9b5b6f478-9qcvc" Oct 27 08:29:06.098335 kubelet[2801]: I1027 08:29:06.096883 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n9pp\" (UniqueName: \"kubernetes.io/projected/58e52a6b-4ff7-410b-ae51-43b90491a215-kube-api-access-8n9pp\") pod \"calico-kube-controllers-9b5b6f478-9qcvc\" (UID: \"58e52a6b-4ff7-410b-ae51-43b90491a215\") " pod="calico-system/calico-kube-controllers-9b5b6f478-9qcvc" Oct 27 08:29:06.101989 kubelet[2801]: E1027 08:29:06.099239 2801 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:29:06.101989 kubelet[2801]: I1027 08:29:06.100222 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pf8r\" (UniqueName: \"kubernetes.io/projected/082222f6-7d6e-417e-ac7d-32f9df4dff89-kube-api-access-7pf8r\") pod \"calico-apiserver-c64f56595-lp5c4\" (UID: \"082222f6-7d6e-417e-ac7d-32f9df4dff89\") " pod="calico-apiserver/calico-apiserver-c64f56595-lp5c4" Oct 27 08:29:06.101989 kubelet[2801]: I1027 08:29:06.100279 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a6b3613-c6a1-49d5-80fe-fe16a86ca9d2-goldmane-ca-bundle\") pod \"goldmane-666569f655-vqr2m\" (UID: \"8a6b3613-c6a1-49d5-80fe-fe16a86ca9d2\") " pod="calico-system/goldmane-666569f655-vqr2m" Oct 27 08:29:06.101989 kubelet[2801]: I1027 08:29:06.100302 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8dgz\" (UniqueName: \"kubernetes.io/projected/3e3fd7e5-9a69-4009-b8ff-395503b2b2e3-kube-api-access-r8dgz\") pod \"coredns-674b8bbfcf-86jt5\" (UID: \"3e3fd7e5-9a69-4009-b8ff-395503b2b2e3\") " pod="kube-system/coredns-674b8bbfcf-86jt5" Oct 27 08:29:06.101989 kubelet[2801]: I1027 08:29:06.100326 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/70c109db-2874-4e1e-8da6-346addba097a-whisker-backend-key-pair\") pod \"whisker-9466b7d98-hdc7h\" (UID: \"70c109db-2874-4e1e-8da6-346addba097a\") " pod="calico-system/whisker-9466b7d98-hdc7h" Oct 27 08:29:06.100296 systemd[1]: Created slice kubepods-besteffort-pod58e52a6b_4ff7_410b_ae51_43b90491a215.slice - libcontainer container 
kubepods-besteffort-pod58e52a6b_4ff7_410b_ae51_43b90491a215.slice. Oct 27 08:29:06.102213 containerd[1635]: time="2025-10-27T08:29:06.101996176Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Oct 27 08:29:06.119577 systemd[1]: Created slice kubepods-besteffort-pod9f02b196_86ab_47f3_85f7_4a69adbdcd03.slice - libcontainer container kubepods-besteffort-pod9f02b196_86ab_47f3_85f7_4a69adbdcd03.slice. Oct 27 08:29:06.132428 systemd[1]: Created slice kubepods-besteffort-pod70c109db_2874_4e1e_8da6_346addba097a.slice - libcontainer container kubepods-besteffort-pod70c109db_2874_4e1e_8da6_346addba097a.slice. Oct 27 08:29:06.186983 containerd[1635]: time="2025-10-27T08:29:06.186878103Z" level=error msg="Failed to destroy network for sandbox \"eb1f484126da90fc13f8e8ed04248fbdf8283d7afc6afb738276b884bde89b81\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:29:06.189438 systemd[1]: run-netns-cni\x2dae44187e\x2ddea6\x2d5989\x2d08e5\x2da0b38bec29d3.mount: Deactivated successfully. 
Oct 27 08:29:06.190877 containerd[1635]: time="2025-10-27T08:29:06.190743902Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6585l,Uid:c11176f9-c15d-4aff-9a2f-9db19f9df938,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb1f484126da90fc13f8e8ed04248fbdf8283d7afc6afb738276b884bde89b81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:29:06.191116 kubelet[2801]: E1027 08:29:06.191054 2801 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb1f484126da90fc13f8e8ed04248fbdf8283d7afc6afb738276b884bde89b81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:29:06.191183 kubelet[2801]: E1027 08:29:06.191160 2801 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb1f484126da90fc13f8e8ed04248fbdf8283d7afc6afb738276b884bde89b81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6585l" Oct 27 08:29:06.191213 kubelet[2801]: E1027 08:29:06.191192 2801 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb1f484126da90fc13f8e8ed04248fbdf8283d7afc6afb738276b884bde89b81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6585l" 
Oct 27 08:29:06.191297 kubelet[2801]: E1027 08:29:06.191252 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-6585l_calico-system(c11176f9-c15d-4aff-9a2f-9db19f9df938)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-6585l_calico-system(c11176f9-c15d-4aff-9a2f-9db19f9df938)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eb1f484126da90fc13f8e8ed04248fbdf8283d7afc6afb738276b884bde89b81\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6585l" podUID="c11176f9-c15d-4aff-9a2f-9db19f9df938" Oct 27 08:29:06.352412 containerd[1635]: time="2025-10-27T08:29:06.352263903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c64f56595-lp5c4,Uid:082222f6-7d6e-417e-ac7d-32f9df4dff89,Namespace:calico-apiserver,Attempt:0,}" Oct 27 08:29:06.371056 kubelet[2801]: E1027 08:29:06.370999 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:29:06.371841 containerd[1635]: time="2025-10-27T08:29:06.371763409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-86jt5,Uid:3e3fd7e5-9a69-4009-b8ff-395503b2b2e3,Namespace:kube-system,Attempt:0,}" Oct 27 08:29:06.379789 containerd[1635]: time="2025-10-27T08:29:06.379737727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-vqr2m,Uid:8a6b3613-c6a1-49d5-80fe-fe16a86ca9d2,Namespace:calico-system,Attempt:0,}" Oct 27 08:29:06.391964 kubelet[2801]: E1027 08:29:06.391302 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Oct 27 08:29:06.393521 containerd[1635]: time="2025-10-27T08:29:06.393132868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4sbmc,Uid:fe014e01-beff-4bf8-ab4a-6b42d97b928c,Namespace:kube-system,Attempt:0,}" Oct 27 08:29:06.408403 containerd[1635]: time="2025-10-27T08:29:06.408355915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9b5b6f478-9qcvc,Uid:58e52a6b-4ff7-410b-ae51-43b90491a215,Namespace:calico-system,Attempt:0,}" Oct 27 08:29:06.432019 containerd[1635]: time="2025-10-27T08:29:06.431599834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c64f56595-lxn6t,Uid:9f02b196-86ab-47f3-85f7-4a69adbdcd03,Namespace:calico-apiserver,Attempt:0,}" Oct 27 08:29:06.439244 containerd[1635]: time="2025-10-27T08:29:06.439198601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-9466b7d98-hdc7h,Uid:70c109db-2874-4e1e-8da6-346addba097a,Namespace:calico-system,Attempt:0,}" Oct 27 08:29:06.450659 containerd[1635]: time="2025-10-27T08:29:06.450619048Z" level=error msg="Failed to destroy network for sandbox \"09b1fdfd0c5e843c2883a82b561ef6e38d058a4e95a2bb395a53b19e8673010b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:29:06.459877 systemd[1]: run-netns-cni\x2d24a27697\x2def1d\x2dfd9f\x2d658b\x2dba647723b99b.mount: Deactivated successfully. 
Oct 27 08:29:06.463439 containerd[1635]: time="2025-10-27T08:29:06.463389856Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c64f56595-lp5c4,Uid:082222f6-7d6e-417e-ac7d-32f9df4dff89,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"09b1fdfd0c5e843c2883a82b561ef6e38d058a4e95a2bb395a53b19e8673010b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:29:06.464035 kubelet[2801]: E1027 08:29:06.463970 2801 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09b1fdfd0c5e843c2883a82b561ef6e38d058a4e95a2bb395a53b19e8673010b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:29:06.464133 kubelet[2801]: E1027 08:29:06.464066 2801 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09b1fdfd0c5e843c2883a82b561ef6e38d058a4e95a2bb395a53b19e8673010b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c64f56595-lp5c4" Oct 27 08:29:06.464133 kubelet[2801]: E1027 08:29:06.464090 2801 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09b1fdfd0c5e843c2883a82b561ef6e38d058a4e95a2bb395a53b19e8673010b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-c64f56595-lp5c4" Oct 27 08:29:06.465094 kubelet[2801]: E1027 08:29:06.464988 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c64f56595-lp5c4_calico-apiserver(082222f6-7d6e-417e-ac7d-32f9df4dff89)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c64f56595-lp5c4_calico-apiserver(082222f6-7d6e-417e-ac7d-32f9df4dff89)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"09b1fdfd0c5e843c2883a82b561ef6e38d058a4e95a2bb395a53b19e8673010b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c64f56595-lp5c4" podUID="082222f6-7d6e-417e-ac7d-32f9df4dff89" Oct 27 08:29:06.524742 containerd[1635]: time="2025-10-27T08:29:06.524682376Z" level=error msg="Failed to destroy network for sandbox \"865ebf2db77192f1d6dc4008fe5c74bd11f0cf8243c2414cf2750c4f73a52828\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:29:06.526622 containerd[1635]: time="2025-10-27T08:29:06.526541947Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-86jt5,Uid:3e3fd7e5-9a69-4009-b8ff-395503b2b2e3,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"865ebf2db77192f1d6dc4008fe5c74bd11f0cf8243c2414cf2750c4f73a52828\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:29:06.527011 kubelet[2801]: E1027 08:29:06.526932 2801 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"865ebf2db77192f1d6dc4008fe5c74bd11f0cf8243c2414cf2750c4f73a52828\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:29:06.527140 kubelet[2801]: E1027 08:29:06.527098 2801 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"865ebf2db77192f1d6dc4008fe5c74bd11f0cf8243c2414cf2750c4f73a52828\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-86jt5" Oct 27 08:29:06.527203 kubelet[2801]: E1027 08:29:06.527177 2801 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"865ebf2db77192f1d6dc4008fe5c74bd11f0cf8243c2414cf2750c4f73a52828\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-86jt5" Oct 27 08:29:06.527455 kubelet[2801]: E1027 08:29:06.527284 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-86jt5_kube-system(3e3fd7e5-9a69-4009-b8ff-395503b2b2e3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-86jt5_kube-system(3e3fd7e5-9a69-4009-b8ff-395503b2b2e3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"865ebf2db77192f1d6dc4008fe5c74bd11f0cf8243c2414cf2750c4f73a52828\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-674b8bbfcf-86jt5" podUID="3e3fd7e5-9a69-4009-b8ff-395503b2b2e3" Oct 27 08:29:06.529165 systemd[1]: run-netns-cni\x2de58d6358\x2db10a\x2dca02\x2d47e8\x2d87f16ceaff98.mount: Deactivated successfully. Oct 27 08:29:06.533059 containerd[1635]: time="2025-10-27T08:29:06.533000840Z" level=error msg="Failed to destroy network for sandbox \"44fa377080eec1c71aa5bc0f078f8172b935a2054f1fd3aa1cd72da20004163f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:29:06.535265 containerd[1635]: time="2025-10-27T08:29:06.535229947Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-vqr2m,Uid:8a6b3613-c6a1-49d5-80fe-fe16a86ca9d2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"44fa377080eec1c71aa5bc0f078f8172b935a2054f1fd3aa1cd72da20004163f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:29:06.535752 kubelet[2801]: E1027 08:29:06.535591 2801 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44fa377080eec1c71aa5bc0f078f8172b935a2054f1fd3aa1cd72da20004163f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:29:06.535814 kubelet[2801]: E1027 08:29:06.535798 2801 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44fa377080eec1c71aa5bc0f078f8172b935a2054f1fd3aa1cd72da20004163f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-vqr2m" Oct 27 08:29:06.535842 kubelet[2801]: E1027 08:29:06.535820 2801 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44fa377080eec1c71aa5bc0f078f8172b935a2054f1fd3aa1cd72da20004163f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-vqr2m" Oct 27 08:29:06.537093 kubelet[2801]: E1027 08:29:06.537056 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-vqr2m_calico-system(8a6b3613-c6a1-49d5-80fe-fe16a86ca9d2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-vqr2m_calico-system(8a6b3613-c6a1-49d5-80fe-fe16a86ca9d2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"44fa377080eec1c71aa5bc0f078f8172b935a2054f1fd3aa1cd72da20004163f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-vqr2m" podUID="8a6b3613-c6a1-49d5-80fe-fe16a86ca9d2" Oct 27 08:29:06.539959 containerd[1635]: time="2025-10-27T08:29:06.539655198Z" level=error msg="Failed to destroy network for sandbox \"fdb706ebd4287a43adfeef0848f427cec9bb2e9f6f311524cb860c4ffc83c323\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:29:06.545498 containerd[1635]: time="2025-10-27T08:29:06.545352620Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-9b5b6f478-9qcvc,Uid:58e52a6b-4ff7-410b-ae51-43b90491a215,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fdb706ebd4287a43adfeef0848f427cec9bb2e9f6f311524cb860c4ffc83c323\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:29:06.546047 kubelet[2801]: E1027 08:29:06.545832 2801 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fdb706ebd4287a43adfeef0848f427cec9bb2e9f6f311524cb860c4ffc83c323\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:29:06.546047 kubelet[2801]: E1027 08:29:06.545901 2801 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fdb706ebd4287a43adfeef0848f427cec9bb2e9f6f311524cb860c4ffc83c323\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-9b5b6f478-9qcvc" Oct 27 08:29:06.546047 kubelet[2801]: E1027 08:29:06.545924 2801 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fdb706ebd4287a43adfeef0848f427cec9bb2e9f6f311524cb860c4ffc83c323\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-9b5b6f478-9qcvc" Oct 27 08:29:06.546191 kubelet[2801]: E1027 08:29:06.546003 
2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-9b5b6f478-9qcvc_calico-system(58e52a6b-4ff7-410b-ae51-43b90491a215)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-9b5b6f478-9qcvc_calico-system(58e52a6b-4ff7-410b-ae51-43b90491a215)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fdb706ebd4287a43adfeef0848f427cec9bb2e9f6f311524cb860c4ffc83c323\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-9b5b6f478-9qcvc" podUID="58e52a6b-4ff7-410b-ae51-43b90491a215" Oct 27 08:29:06.549664 containerd[1635]: time="2025-10-27T08:29:06.549616223Z" level=error msg="Failed to destroy network for sandbox \"f3fe7d5d9a89a3002c6c116e4626159461a6bc095423b0a1bbe9a0badad7038f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:29:06.549929 containerd[1635]: time="2025-10-27T08:29:06.549616213Z" level=error msg="Failed to destroy network for sandbox \"91dc276e238e54276458384b0e55e65e0923863413107141d209bf107bdedbf3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:29:06.551721 containerd[1635]: time="2025-10-27T08:29:06.551664957Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-9466b7d98-hdc7h,Uid:70c109db-2874-4e1e-8da6-346addba097a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3fe7d5d9a89a3002c6c116e4626159461a6bc095423b0a1bbe9a0badad7038f\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:29:06.552038 kubelet[2801]: E1027 08:29:06.551997 2801 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3fe7d5d9a89a3002c6c116e4626159461a6bc095423b0a1bbe9a0badad7038f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:29:06.552119 kubelet[2801]: E1027 08:29:06.552066 2801 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3fe7d5d9a89a3002c6c116e4626159461a6bc095423b0a1bbe9a0badad7038f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-9466b7d98-hdc7h" Oct 27 08:29:06.552119 kubelet[2801]: E1027 08:29:06.552085 2801 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3fe7d5d9a89a3002c6c116e4626159461a6bc095423b0a1bbe9a0badad7038f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-9466b7d98-hdc7h" Oct 27 08:29:06.552235 kubelet[2801]: E1027 08:29:06.552145 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-9466b7d98-hdc7h_calico-system(70c109db-2874-4e1e-8da6-346addba097a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-9466b7d98-hdc7h_calico-system(70c109db-2874-4e1e-8da6-346addba097a)\\\": rpc error: code = Unknown desc = 
failed to setup network for sandbox \\\"f3fe7d5d9a89a3002c6c116e4626159461a6bc095423b0a1bbe9a0badad7038f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-9466b7d98-hdc7h" podUID="70c109db-2874-4e1e-8da6-346addba097a" Oct 27 08:29:06.553126 containerd[1635]: time="2025-10-27T08:29:06.553064857Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4sbmc,Uid:fe014e01-beff-4bf8-ab4a-6b42d97b928c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"91dc276e238e54276458384b0e55e65e0923863413107141d209bf107bdedbf3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:29:06.554079 kubelet[2801]: E1027 08:29:06.554036 2801 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91dc276e238e54276458384b0e55e65e0923863413107141d209bf107bdedbf3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:29:06.554137 kubelet[2801]: E1027 08:29:06.554078 2801 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91dc276e238e54276458384b0e55e65e0923863413107141d209bf107bdedbf3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-4sbmc" Oct 27 08:29:06.554137 kubelet[2801]: E1027 08:29:06.554101 2801 kuberuntime_manager.go:1252] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91dc276e238e54276458384b0e55e65e0923863413107141d209bf107bdedbf3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-4sbmc" Oct 27 08:29:06.554213 kubelet[2801]: E1027 08:29:06.554143 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-4sbmc_kube-system(fe014e01-beff-4bf8-ab4a-6b42d97b928c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-4sbmc_kube-system(fe014e01-beff-4bf8-ab4a-6b42d97b928c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"91dc276e238e54276458384b0e55e65e0923863413107141d209bf107bdedbf3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-4sbmc" podUID="fe014e01-beff-4bf8-ab4a-6b42d97b928c" Oct 27 08:29:06.560847 containerd[1635]: time="2025-10-27T08:29:06.560778989Z" level=error msg="Failed to destroy network for sandbox \"452d70445fec64830a4ad6add262f8b54539bc940d01dec2d0166c6162e0eed4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:29:06.562616 containerd[1635]: time="2025-10-27T08:29:06.562538487Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c64f56595-lxn6t,Uid:9f02b196-86ab-47f3-85f7-4a69adbdcd03,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"452d70445fec64830a4ad6add262f8b54539bc940d01dec2d0166c6162e0eed4\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:29:06.562866 kubelet[2801]: E1027 08:29:06.562814 2801 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"452d70445fec64830a4ad6add262f8b54539bc940d01dec2d0166c6162e0eed4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:29:06.562926 kubelet[2801]: E1027 08:29:06.562877 2801 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"452d70445fec64830a4ad6add262f8b54539bc940d01dec2d0166c6162e0eed4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c64f56595-lxn6t" Oct 27 08:29:06.562926 kubelet[2801]: E1027 08:29:06.562897 2801 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"452d70445fec64830a4ad6add262f8b54539bc940d01dec2d0166c6162e0eed4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c64f56595-lxn6t" Oct 27 08:29:06.563017 kubelet[2801]: E1027 08:29:06.562934 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c64f56595-lxn6t_calico-apiserver(9f02b196-86ab-47f3-85f7-4a69adbdcd03)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-c64f56595-lxn6t_calico-apiserver(9f02b196-86ab-47f3-85f7-4a69adbdcd03)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"452d70445fec64830a4ad6add262f8b54539bc940d01dec2d0166c6162e0eed4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c64f56595-lxn6t" podUID="9f02b196-86ab-47f3-85f7-4a69adbdcd03" Oct 27 08:29:07.415288 systemd[1]: run-netns-cni\x2d2827aa31\x2d73d5\x2d1bf1\x2dd4b9\x2dcae1e309b86b.mount: Deactivated successfully. Oct 27 08:29:07.415394 systemd[1]: run-netns-cni\x2d5eae5f2b\x2decf8\x2d9aa1\x2d273a\x2d6923648e0e59.mount: Deactivated successfully. Oct 27 08:29:07.415476 systemd[1]: run-netns-cni\x2dee859faf\x2d93b8\x2dc5ed\x2d88d7\x2dbeb5c989ade6.mount: Deactivated successfully. Oct 27 08:29:07.415544 systemd[1]: run-netns-cni\x2d7d33f480\x2dd742\x2d5420\x2d9ebb\x2d60e4a49a4c7a.mount: Deactivated successfully. Oct 27 08:29:07.415629 systemd[1]: run-netns-cni\x2d294dc973\x2ded8a\x2df777\x2db9b5\x2d9584c07610c0.mount: Deactivated successfully. Oct 27 08:29:15.542981 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1463553590.mount: Deactivated successfully. 
Oct 27 08:29:16.354404 containerd[1635]: time="2025-10-27T08:29:16.354325319Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:29:16.355057 containerd[1635]: time="2025-10-27T08:29:16.355027665Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Oct 27 08:29:16.355989 containerd[1635]: time="2025-10-27T08:29:16.355961080Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:29:16.357853 containerd[1635]: time="2025-10-27T08:29:16.357819553Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:29:16.358409 containerd[1635]: time="2025-10-27T08:29:16.358372241Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 10.256340694s" Oct 27 08:29:16.358409 containerd[1635]: time="2025-10-27T08:29:16.358404746Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Oct 27 08:29:16.375481 containerd[1635]: time="2025-10-27T08:29:16.375425149Z" level=info msg="CreateContainer within sandbox \"100bfdb77582d1817560d61586a9294f448099ed97765ecb78cadb777622c1da\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 27 08:29:16.384912 containerd[1635]: time="2025-10-27T08:29:16.384844697Z" level=info msg="Container 
bb760f021b8f77d4360969eac8505b2f367d9f8d4f56571b03f7ffaaa3330ab0: CDI devices from CRI Config.CDIDevices: []" Oct 27 08:29:16.394165 containerd[1635]: time="2025-10-27T08:29:16.394122303Z" level=info msg="CreateContainer within sandbox \"100bfdb77582d1817560d61586a9294f448099ed97765ecb78cadb777622c1da\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"bb760f021b8f77d4360969eac8505b2f367d9f8d4f56571b03f7ffaaa3330ab0\"" Oct 27 08:29:16.394617 containerd[1635]: time="2025-10-27T08:29:16.394592196Z" level=info msg="StartContainer for \"bb760f021b8f77d4360969eac8505b2f367d9f8d4f56571b03f7ffaaa3330ab0\"" Oct 27 08:29:16.396040 containerd[1635]: time="2025-10-27T08:29:16.395998952Z" level=info msg="connecting to shim bb760f021b8f77d4360969eac8505b2f367d9f8d4f56571b03f7ffaaa3330ab0" address="unix:///run/containerd/s/496d8e7fdc368353dc2d8bf06fd704671880cb76dc6e4eab913d8b939682b817" protocol=ttrpc version=3 Oct 27 08:29:16.416062 systemd[1]: Started cri-containerd-bb760f021b8f77d4360969eac8505b2f367d9f8d4f56571b03f7ffaaa3330ab0.scope - libcontainer container bb760f021b8f77d4360969eac8505b2f367d9f8d4f56571b03f7ffaaa3330ab0. Oct 27 08:29:16.460933 containerd[1635]: time="2025-10-27T08:29:16.460891404Z" level=info msg="StartContainer for \"bb760f021b8f77d4360969eac8505b2f367d9f8d4f56571b03f7ffaaa3330ab0\" returns successfully" Oct 27 08:29:16.531066 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 27 08:29:16.531686 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Oct 27 08:29:16.671166 kubelet[2801]: I1027 08:29:16.671006 2801 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/70c109db-2874-4e1e-8da6-346addba097a-whisker-backend-key-pair\") pod \"70c109db-2874-4e1e-8da6-346addba097a\" (UID: \"70c109db-2874-4e1e-8da6-346addba097a\") " Oct 27 08:29:16.671166 kubelet[2801]: I1027 08:29:16.671077 2801 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ncdw9\" (UniqueName: \"kubernetes.io/projected/70c109db-2874-4e1e-8da6-346addba097a-kube-api-access-ncdw9\") pod \"70c109db-2874-4e1e-8da6-346addba097a\" (UID: \"70c109db-2874-4e1e-8da6-346addba097a\") " Oct 27 08:29:16.671166 kubelet[2801]: I1027 08:29:16.671098 2801 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70c109db-2874-4e1e-8da6-346addba097a-whisker-ca-bundle\") pod \"70c109db-2874-4e1e-8da6-346addba097a\" (UID: \"70c109db-2874-4e1e-8da6-346addba097a\") " Oct 27 08:29:16.671692 kubelet[2801]: I1027 08:29:16.671565 2801 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70c109db-2874-4e1e-8da6-346addba097a-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "70c109db-2874-4e1e-8da6-346addba097a" (UID: "70c109db-2874-4e1e-8da6-346addba097a"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 27 08:29:16.678282 systemd[1]: var-lib-kubelet-pods-70c109db\x2d2874\x2d4e1e\x2d8da6\x2d346addba097a-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Oct 27 08:29:16.679971 kubelet[2801]: I1027 08:29:16.679443 2801 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70c109db-2874-4e1e-8da6-346addba097a-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "70c109db-2874-4e1e-8da6-346addba097a" (UID: "70c109db-2874-4e1e-8da6-346addba097a"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 27 08:29:16.682124 kubelet[2801]: I1027 08:29:16.682069 2801 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70c109db-2874-4e1e-8da6-346addba097a-kube-api-access-ncdw9" (OuterVolumeSpecName: "kube-api-access-ncdw9") pod "70c109db-2874-4e1e-8da6-346addba097a" (UID: "70c109db-2874-4e1e-8da6-346addba097a"). InnerVolumeSpecName "kube-api-access-ncdw9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 27 08:29:16.686505 systemd[1]: var-lib-kubelet-pods-70c109db\x2d2874\x2d4e1e\x2d8da6\x2d346addba097a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dncdw9.mount: Deactivated successfully. 
Oct 27 08:29:16.772051 kubelet[2801]: I1027 08:29:16.771963 2801 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/70c109db-2874-4e1e-8da6-346addba097a-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Oct 27 08:29:16.772051 kubelet[2801]: I1027 08:29:16.772000 2801 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ncdw9\" (UniqueName: \"kubernetes.io/projected/70c109db-2874-4e1e-8da6-346addba097a-kube-api-access-ncdw9\") on node \"localhost\" DevicePath \"\"" Oct 27 08:29:16.772051 kubelet[2801]: I1027 08:29:16.772012 2801 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70c109db-2874-4e1e-8da6-346addba097a-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Oct 27 08:29:17.135731 kubelet[2801]: E1027 08:29:17.135679 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:29:17.141645 systemd[1]: Removed slice kubepods-besteffort-pod70c109db_2874_4e1e_8da6_346addba097a.slice - libcontainer container kubepods-besteffort-pod70c109db_2874_4e1e_8da6_346addba097a.slice. 
Oct 27 08:29:17.162718 kubelet[2801]: I1027 08:29:17.162644 2801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-x6h44" podStartSLOduration=1.578173552 podStartE2EDuration="21.162626034s" podCreationTimestamp="2025-10-27 08:28:56 +0000 UTC" firstStartedPulling="2025-10-27 08:28:56.774480874 +0000 UTC m=+20.871359958" lastFinishedPulling="2025-10-27 08:29:16.358933356 +0000 UTC m=+40.455812440" observedRunningTime="2025-10-27 08:29:17.151930331 +0000 UTC m=+41.248809415" watchObservedRunningTime="2025-10-27 08:29:17.162626034 +0000 UTC m=+41.259505118" Oct 27 08:29:17.205567 systemd[1]: Created slice kubepods-besteffort-podb8851923_61d5_4c1d_bf80_827889da605a.slice - libcontainer container kubepods-besteffort-podb8851923_61d5_4c1d_bf80_827889da605a.slice. Oct 27 08:29:17.261650 containerd[1635]: time="2025-10-27T08:29:17.261603398Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bb760f021b8f77d4360969eac8505b2f367d9f8d4f56571b03f7ffaaa3330ab0\" id:\"4923ea386e76c62c1fd17d6b3ff1a7ac9631683a86b36d2365094d13b4ace47c\" pid:3969 exit_status:1 exited_at:{seconds:1761553757 nanos:261226218}" Oct 27 08:29:17.274870 kubelet[2801]: I1027 08:29:17.274810 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b8851923-61d5-4c1d-bf80-827889da605a-whisker-backend-key-pair\") pod \"whisker-cd8dbc6d4-qftjw\" (UID: \"b8851923-61d5-4c1d-bf80-827889da605a\") " pod="calico-system/whisker-cd8dbc6d4-qftjw" Oct 27 08:29:17.274870 kubelet[2801]: I1027 08:29:17.274854 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8851923-61d5-4c1d-bf80-827889da605a-whisker-ca-bundle\") pod \"whisker-cd8dbc6d4-qftjw\" (UID: \"b8851923-61d5-4c1d-bf80-827889da605a\") " pod="calico-system/whisker-cd8dbc6d4-qftjw" Oct 
27 08:29:17.274870 kubelet[2801]: I1027 08:29:17.274874 2801 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp9kg\" (UniqueName: \"kubernetes.io/projected/b8851923-61d5-4c1d-bf80-827889da605a-kube-api-access-mp9kg\") pod \"whisker-cd8dbc6d4-qftjw\" (UID: \"b8851923-61d5-4c1d-bf80-827889da605a\") " pod="calico-system/whisker-cd8dbc6d4-qftjw" Oct 27 08:29:17.509407 containerd[1635]: time="2025-10-27T08:29:17.509341144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-cd8dbc6d4-qftjw,Uid:b8851923-61d5-4c1d-bf80-827889da605a,Namespace:calico-system,Attempt:0,}" Oct 27 08:29:17.654351 systemd-networkd[1513]: cali1901bf50979: Link UP Oct 27 08:29:17.654769 systemd-networkd[1513]: cali1901bf50979: Gained carrier Oct 27 08:29:17.668432 containerd[1635]: 2025-10-27 08:29:17.533 [INFO][3984] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 27 08:29:17.668432 containerd[1635]: 2025-10-27 08:29:17.552 [INFO][3984] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--cd8dbc6d4--qftjw-eth0 whisker-cd8dbc6d4- calico-system b8851923-61d5-4c1d-bf80-827889da605a 976 0 2025-10-27 08:29:17 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:cd8dbc6d4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-cd8dbc6d4-qftjw eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali1901bf50979 [] [] }} ContainerID="ce7923f6a99d0671927b5f22e396e4d1666d5ddea366e7c1bfcabae1e483a7b2" Namespace="calico-system" Pod="whisker-cd8dbc6d4-qftjw" WorkloadEndpoint="localhost-k8s-whisker--cd8dbc6d4--qftjw-" Oct 27 08:29:17.668432 containerd[1635]: 2025-10-27 08:29:17.552 [INFO][3984] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="ce7923f6a99d0671927b5f22e396e4d1666d5ddea366e7c1bfcabae1e483a7b2" Namespace="calico-system" Pod="whisker-cd8dbc6d4-qftjw" WorkloadEndpoint="localhost-k8s-whisker--cd8dbc6d4--qftjw-eth0" Oct 27 08:29:17.668432 containerd[1635]: 2025-10-27 08:29:17.612 [INFO][3998] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ce7923f6a99d0671927b5f22e396e4d1666d5ddea366e7c1bfcabae1e483a7b2" HandleID="k8s-pod-network.ce7923f6a99d0671927b5f22e396e4d1666d5ddea366e7c1bfcabae1e483a7b2" Workload="localhost-k8s-whisker--cd8dbc6d4--qftjw-eth0" Oct 27 08:29:17.668777 containerd[1635]: 2025-10-27 08:29:17.613 [INFO][3998] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ce7923f6a99d0671927b5f22e396e4d1666d5ddea366e7c1bfcabae1e483a7b2" HandleID="k8s-pod-network.ce7923f6a99d0671927b5f22e396e4d1666d5ddea366e7c1bfcabae1e483a7b2" Workload="localhost-k8s-whisker--cd8dbc6d4--qftjw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000e3610), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-cd8dbc6d4-qftjw", "timestamp":"2025-10-27 08:29:17.612554463 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 27 08:29:17.668777 containerd[1635]: 2025-10-27 08:29:17.613 [INFO][3998] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 27 08:29:17.668777 containerd[1635]: 2025-10-27 08:29:17.613 [INFO][3998] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 27 08:29:17.668777 containerd[1635]: 2025-10-27 08:29:17.613 [INFO][3998] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 27 08:29:17.668777 containerd[1635]: 2025-10-27 08:29:17.621 [INFO][3998] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ce7923f6a99d0671927b5f22e396e4d1666d5ddea366e7c1bfcabae1e483a7b2" host="localhost" Oct 27 08:29:17.668777 containerd[1635]: 2025-10-27 08:29:17.627 [INFO][3998] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 27 08:29:17.668777 containerd[1635]: 2025-10-27 08:29:17.631 [INFO][3998] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 27 08:29:17.668777 containerd[1635]: 2025-10-27 08:29:17.632 [INFO][3998] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 27 08:29:17.668777 containerd[1635]: 2025-10-27 08:29:17.634 [INFO][3998] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 27 08:29:17.668777 containerd[1635]: 2025-10-27 08:29:17.634 [INFO][3998] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ce7923f6a99d0671927b5f22e396e4d1666d5ddea366e7c1bfcabae1e483a7b2" host="localhost" Oct 27 08:29:17.669088 containerd[1635]: 2025-10-27 08:29:17.635 [INFO][3998] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ce7923f6a99d0671927b5f22e396e4d1666d5ddea366e7c1bfcabae1e483a7b2 Oct 27 08:29:17.669088 containerd[1635]: 2025-10-27 08:29:17.639 [INFO][3998] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ce7923f6a99d0671927b5f22e396e4d1666d5ddea366e7c1bfcabae1e483a7b2" host="localhost" Oct 27 08:29:17.669088 containerd[1635]: 2025-10-27 08:29:17.643 [INFO][3998] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.ce7923f6a99d0671927b5f22e396e4d1666d5ddea366e7c1bfcabae1e483a7b2" host="localhost" Oct 27 08:29:17.669088 containerd[1635]: 2025-10-27 08:29:17.643 [INFO][3998] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.ce7923f6a99d0671927b5f22e396e4d1666d5ddea366e7c1bfcabae1e483a7b2" host="localhost" Oct 27 08:29:17.669088 containerd[1635]: 2025-10-27 08:29:17.643 [INFO][3998] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 27 08:29:17.669088 containerd[1635]: 2025-10-27 08:29:17.643 [INFO][3998] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="ce7923f6a99d0671927b5f22e396e4d1666d5ddea366e7c1bfcabae1e483a7b2" HandleID="k8s-pod-network.ce7923f6a99d0671927b5f22e396e4d1666d5ddea366e7c1bfcabae1e483a7b2" Workload="localhost-k8s-whisker--cd8dbc6d4--qftjw-eth0" Oct 27 08:29:17.669224 containerd[1635]: 2025-10-27 08:29:17.646 [INFO][3984] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ce7923f6a99d0671927b5f22e396e4d1666d5ddea366e7c1bfcabae1e483a7b2" Namespace="calico-system" Pod="whisker-cd8dbc6d4-qftjw" WorkloadEndpoint="localhost-k8s-whisker--cd8dbc6d4--qftjw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--cd8dbc6d4--qftjw-eth0", GenerateName:"whisker-cd8dbc6d4-", Namespace:"calico-system", SelfLink:"", UID:"b8851923-61d5-4c1d-bf80-827889da605a", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 29, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"cd8dbc6d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-cd8dbc6d4-qftjw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1901bf50979", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:29:17.669224 containerd[1635]: 2025-10-27 08:29:17.647 [INFO][3984] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="ce7923f6a99d0671927b5f22e396e4d1666d5ddea366e7c1bfcabae1e483a7b2" Namespace="calico-system" Pod="whisker-cd8dbc6d4-qftjw" WorkloadEndpoint="localhost-k8s-whisker--cd8dbc6d4--qftjw-eth0" Oct 27 08:29:17.669515 containerd[1635]: 2025-10-27 08:29:17.647 [INFO][3984] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1901bf50979 ContainerID="ce7923f6a99d0671927b5f22e396e4d1666d5ddea366e7c1bfcabae1e483a7b2" Namespace="calico-system" Pod="whisker-cd8dbc6d4-qftjw" WorkloadEndpoint="localhost-k8s-whisker--cd8dbc6d4--qftjw-eth0" Oct 27 08:29:17.669515 containerd[1635]: 2025-10-27 08:29:17.655 [INFO][3984] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ce7923f6a99d0671927b5f22e396e4d1666d5ddea366e7c1bfcabae1e483a7b2" Namespace="calico-system" Pod="whisker-cd8dbc6d4-qftjw" WorkloadEndpoint="localhost-k8s-whisker--cd8dbc6d4--qftjw-eth0" Oct 27 08:29:17.669596 containerd[1635]: 2025-10-27 08:29:17.655 [INFO][3984] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ce7923f6a99d0671927b5f22e396e4d1666d5ddea366e7c1bfcabae1e483a7b2" Namespace="calico-system" Pod="whisker-cd8dbc6d4-qftjw" 
WorkloadEndpoint="localhost-k8s-whisker--cd8dbc6d4--qftjw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--cd8dbc6d4--qftjw-eth0", GenerateName:"whisker-cd8dbc6d4-", Namespace:"calico-system", SelfLink:"", UID:"b8851923-61d5-4c1d-bf80-827889da605a", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 29, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"cd8dbc6d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ce7923f6a99d0671927b5f22e396e4d1666d5ddea366e7c1bfcabae1e483a7b2", Pod:"whisker-cd8dbc6d4-qftjw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1901bf50979", MAC:"a6:3c:91:b9:d9:58", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:29:17.669672 containerd[1635]: 2025-10-27 08:29:17.665 [INFO][3984] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ce7923f6a99d0671927b5f22e396e4d1666d5ddea366e7c1bfcabae1e483a7b2" Namespace="calico-system" Pod="whisker-cd8dbc6d4-qftjw" WorkloadEndpoint="localhost-k8s-whisker--cd8dbc6d4--qftjw-eth0" Oct 27 08:29:17.705048 containerd[1635]: time="2025-10-27T08:29:17.704918834Z" level=info msg="connecting to shim 
ce7923f6a99d0671927b5f22e396e4d1666d5ddea366e7c1bfcabae1e483a7b2" address="unix:///run/containerd/s/d487115d036d8ea887b2cd8c212f8f07147120be059c8c2f493a20e1ae9be7b4" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:29:17.734135 systemd[1]: Started cri-containerd-ce7923f6a99d0671927b5f22e396e4d1666d5ddea366e7c1bfcabae1e483a7b2.scope - libcontainer container ce7923f6a99d0671927b5f22e396e4d1666d5ddea366e7c1bfcabae1e483a7b2. Oct 27 08:29:17.746725 systemd-resolved[1299]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 27 08:29:17.778828 containerd[1635]: time="2025-10-27T08:29:17.777981587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-cd8dbc6d4-qftjw,Uid:b8851923-61d5-4c1d-bf80-827889da605a,Namespace:calico-system,Attempt:0,} returns sandbox id \"ce7923f6a99d0671927b5f22e396e4d1666d5ddea366e7c1bfcabae1e483a7b2\"" Oct 27 08:29:17.783249 containerd[1635]: time="2025-10-27T08:29:17.782651252Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 27 08:29:18.010973 containerd[1635]: time="2025-10-27T08:29:18.010103682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9b5b6f478-9qcvc,Uid:58e52a6b-4ff7-410b-ae51-43b90491a215,Namespace:calico-system,Attempt:0,}" Oct 27 08:29:18.012068 containerd[1635]: time="2025-10-27T08:29:18.012037453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c64f56595-lxn6t,Uid:9f02b196-86ab-47f3-85f7-4a69adbdcd03,Namespace:calico-apiserver,Attempt:0,}" Oct 27 08:29:18.012163 containerd[1635]: time="2025-10-27T08:29:18.012139245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6585l,Uid:c11176f9-c15d-4aff-9a2f-9db19f9df938,Namespace:calico-system,Attempt:0,}" Oct 27 08:29:18.013750 kubelet[2801]: I1027 08:29:18.013199 2801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70c109db-2874-4e1e-8da6-346addba097a" 
path="/var/lib/kubelet/pods/70c109db-2874-4e1e-8da6-346addba097a/volumes" Oct 27 08:29:18.132559 containerd[1635]: time="2025-10-27T08:29:18.132344386Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:29:18.138885 kubelet[2801]: E1027 08:29:18.138830 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:29:18.434605 systemd-networkd[1513]: vxlan.calico: Link UP Oct 27 08:29:18.434616 systemd-networkd[1513]: vxlan.calico: Gained carrier Oct 27 08:29:18.487342 containerd[1635]: time="2025-10-27T08:29:18.487235865Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 27 08:29:18.487342 containerd[1635]: time="2025-10-27T08:29:18.487330823Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 27 08:29:18.487960 kubelet[2801]: E1027 08:29:18.487732 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 27 08:29:18.487960 kubelet[2801]: E1027 08:29:18.487792 2801 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 27 
08:29:18.495571 kubelet[2801]: E1027 08:29:18.495501 2801 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:59d5126a211343c59ed5040c1e1811de,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mp9kg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-cd8dbc6d4-qftjw_calico-system(b8851923-61d5-4c1d-bf80-827889da605a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 27 08:29:18.498480 containerd[1635]: 
time="2025-10-27T08:29:18.498443883Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bb760f021b8f77d4360969eac8505b2f367d9f8d4f56571b03f7ffaaa3330ab0\" id:\"e475a7f3308a7eaad723107cf946f8a3745a60f8bee14f85f7c7b4b2950824b6\" pid:4212 exit_status:1 exited_at:{seconds:1761553758 nanos:495615278}" Oct 27 08:29:18.500359 containerd[1635]: time="2025-10-27T08:29:18.500109593Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 27 08:29:18.629332 systemd-networkd[1513]: cali2051796cea9: Link UP Oct 27 08:29:18.629537 systemd-networkd[1513]: cali2051796cea9: Gained carrier Oct 27 08:29:18.646880 containerd[1635]: 2025-10-27 08:29:18.537 [INFO][4249] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--9b5b6f478--9qcvc-eth0 calico-kube-controllers-9b5b6f478- calico-system 58e52a6b-4ff7-410b-ae51-43b90491a215 902 0 2025-10-27 08:28:56 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:9b5b6f478 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-9b5b6f478-9qcvc eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali2051796cea9 [] [] }} ContainerID="a26b57f861134766885a519a12a85f87aa05b6084f9673a81a4d664bb3f3f3c2" Namespace="calico-system" Pod="calico-kube-controllers-9b5b6f478-9qcvc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9b5b6f478--9qcvc-" Oct 27 08:29:18.646880 containerd[1635]: 2025-10-27 08:29:18.537 [INFO][4249] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a26b57f861134766885a519a12a85f87aa05b6084f9673a81a4d664bb3f3f3c2" Namespace="calico-system" Pod="calico-kube-controllers-9b5b6f478-9qcvc" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9b5b6f478--9qcvc-eth0" Oct 27 08:29:18.646880 containerd[1635]: 2025-10-27 08:29:18.580 [INFO][4289] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a26b57f861134766885a519a12a85f87aa05b6084f9673a81a4d664bb3f3f3c2" HandleID="k8s-pod-network.a26b57f861134766885a519a12a85f87aa05b6084f9673a81a4d664bb3f3f3c2" Workload="localhost-k8s-calico--kube--controllers--9b5b6f478--9qcvc-eth0" Oct 27 08:29:18.647431 containerd[1635]: 2025-10-27 08:29:18.580 [INFO][4289] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a26b57f861134766885a519a12a85f87aa05b6084f9673a81a4d664bb3f3f3c2" HandleID="k8s-pod-network.a26b57f861134766885a519a12a85f87aa05b6084f9673a81a4d664bb3f3f3c2" Workload="localhost-k8s-calico--kube--controllers--9b5b6f478--9qcvc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00052eff0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-9b5b6f478-9qcvc", "timestamp":"2025-10-27 08:29:18.580266192 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 27 08:29:18.647431 containerd[1635]: 2025-10-27 08:29:18.580 [INFO][4289] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 27 08:29:18.647431 containerd[1635]: 2025-10-27 08:29:18.580 [INFO][4289] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 27 08:29:18.647431 containerd[1635]: 2025-10-27 08:29:18.580 [INFO][4289] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 27 08:29:18.647431 containerd[1635]: 2025-10-27 08:29:18.594 [INFO][4289] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a26b57f861134766885a519a12a85f87aa05b6084f9673a81a4d664bb3f3f3c2" host="localhost" Oct 27 08:29:18.647431 containerd[1635]: 2025-10-27 08:29:18.598 [INFO][4289] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 27 08:29:18.647431 containerd[1635]: 2025-10-27 08:29:18.604 [INFO][4289] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 27 08:29:18.647431 containerd[1635]: 2025-10-27 08:29:18.606 [INFO][4289] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 27 08:29:18.647431 containerd[1635]: 2025-10-27 08:29:18.608 [INFO][4289] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 27 08:29:18.647431 containerd[1635]: 2025-10-27 08:29:18.608 [INFO][4289] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a26b57f861134766885a519a12a85f87aa05b6084f9673a81a4d664bb3f3f3c2" host="localhost" Oct 27 08:29:18.647667 containerd[1635]: 2025-10-27 08:29:18.611 [INFO][4289] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a26b57f861134766885a519a12a85f87aa05b6084f9673a81a4d664bb3f3f3c2 Oct 27 08:29:18.647667 containerd[1635]: 2025-10-27 08:29:18.615 [INFO][4289] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a26b57f861134766885a519a12a85f87aa05b6084f9673a81a4d664bb3f3f3c2" host="localhost" Oct 27 08:29:18.647667 containerd[1635]: 2025-10-27 08:29:18.620 [INFO][4289] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.a26b57f861134766885a519a12a85f87aa05b6084f9673a81a4d664bb3f3f3c2" host="localhost" Oct 27 08:29:18.647667 containerd[1635]: 2025-10-27 08:29:18.621 [INFO][4289] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.a26b57f861134766885a519a12a85f87aa05b6084f9673a81a4d664bb3f3f3c2" host="localhost" Oct 27 08:29:18.647667 containerd[1635]: 2025-10-27 08:29:18.621 [INFO][4289] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 27 08:29:18.647667 containerd[1635]: 2025-10-27 08:29:18.621 [INFO][4289] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="a26b57f861134766885a519a12a85f87aa05b6084f9673a81a4d664bb3f3f3c2" HandleID="k8s-pod-network.a26b57f861134766885a519a12a85f87aa05b6084f9673a81a4d664bb3f3f3c2" Workload="localhost-k8s-calico--kube--controllers--9b5b6f478--9qcvc-eth0" Oct 27 08:29:18.647782 containerd[1635]: 2025-10-27 08:29:18.625 [INFO][4249] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a26b57f861134766885a519a12a85f87aa05b6084f9673a81a4d664bb3f3f3c2" Namespace="calico-system" Pod="calico-kube-controllers-9b5b6f478-9qcvc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9b5b6f478--9qcvc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--9b5b6f478--9qcvc-eth0", GenerateName:"calico-kube-controllers-9b5b6f478-", Namespace:"calico-system", SelfLink:"", UID:"58e52a6b-4ff7-410b-ae51-43b90491a215", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 28, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9b5b6f478", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-9b5b6f478-9qcvc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2051796cea9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:29:18.647837 containerd[1635]: 2025-10-27 08:29:18.625 [INFO][4249] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="a26b57f861134766885a519a12a85f87aa05b6084f9673a81a4d664bb3f3f3c2" Namespace="calico-system" Pod="calico-kube-controllers-9b5b6f478-9qcvc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9b5b6f478--9qcvc-eth0" Oct 27 08:29:18.647837 containerd[1635]: 2025-10-27 08:29:18.625 [INFO][4249] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2051796cea9 ContainerID="a26b57f861134766885a519a12a85f87aa05b6084f9673a81a4d664bb3f3f3c2" Namespace="calico-system" Pod="calico-kube-controllers-9b5b6f478-9qcvc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9b5b6f478--9qcvc-eth0" Oct 27 08:29:18.647837 containerd[1635]: 2025-10-27 08:29:18.630 [INFO][4249] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a26b57f861134766885a519a12a85f87aa05b6084f9673a81a4d664bb3f3f3c2" Namespace="calico-system" Pod="calico-kube-controllers-9b5b6f478-9qcvc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9b5b6f478--9qcvc-eth0" Oct 27 08:29:18.648002 containerd[1635]: 2025-10-27 
08:29:18.630 [INFO][4249] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a26b57f861134766885a519a12a85f87aa05b6084f9673a81a4d664bb3f3f3c2" Namespace="calico-system" Pod="calico-kube-controllers-9b5b6f478-9qcvc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9b5b6f478--9qcvc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--9b5b6f478--9qcvc-eth0", GenerateName:"calico-kube-controllers-9b5b6f478-", Namespace:"calico-system", SelfLink:"", UID:"58e52a6b-4ff7-410b-ae51-43b90491a215", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 28, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9b5b6f478", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a26b57f861134766885a519a12a85f87aa05b6084f9673a81a4d664bb3f3f3c2", Pod:"calico-kube-controllers-9b5b6f478-9qcvc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2051796cea9", MAC:"26:72:81:3d:7f:de", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:29:18.648054 containerd[1635]: 2025-10-27 
08:29:18.641 [INFO][4249] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a26b57f861134766885a519a12a85f87aa05b6084f9673a81a4d664bb3f3f3c2" Namespace="calico-system" Pod="calico-kube-controllers-9b5b6f478-9qcvc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9b5b6f478--9qcvc-eth0" Oct 27 08:29:18.676255 containerd[1635]: time="2025-10-27T08:29:18.676194682Z" level=info msg="connecting to shim a26b57f861134766885a519a12a85f87aa05b6084f9673a81a4d664bb3f3f3c2" address="unix:///run/containerd/s/da65ad2bf80d03b63285f658b5e7c39d14962fbc4e91fc979447080139bd536a" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:29:18.716082 systemd[1]: Started cri-containerd-a26b57f861134766885a519a12a85f87aa05b6084f9673a81a4d664bb3f3f3c2.scope - libcontainer container a26b57f861134766885a519a12a85f87aa05b6084f9673a81a4d664bb3f3f3c2. Oct 27 08:29:18.730447 systemd-networkd[1513]: cali2043980c863: Link UP Oct 27 08:29:18.731720 systemd-networkd[1513]: cali2043980c863: Gained carrier Oct 27 08:29:18.747581 systemd-resolved[1299]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 27 08:29:18.748895 containerd[1635]: 2025-10-27 08:29:18.566 [INFO][4255] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--c64f56595--lxn6t-eth0 calico-apiserver-c64f56595- calico-apiserver 9f02b196-86ab-47f3-85f7-4a69adbdcd03 903 0 2025-10-27 08:28:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:c64f56595 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-c64f56595-lxn6t eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2043980c863 [] [] }} 
ContainerID="5c35721d705474afdc8a191fa2a3cd39af2e85feebd2ac7c55d186dff74534cf" Namespace="calico-apiserver" Pod="calico-apiserver-c64f56595-lxn6t" WorkloadEndpoint="localhost-k8s-calico--apiserver--c64f56595--lxn6t-" Oct 27 08:29:18.748895 containerd[1635]: 2025-10-27 08:29:18.567 [INFO][4255] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5c35721d705474afdc8a191fa2a3cd39af2e85feebd2ac7c55d186dff74534cf" Namespace="calico-apiserver" Pod="calico-apiserver-c64f56595-lxn6t" WorkloadEndpoint="localhost-k8s-calico--apiserver--c64f56595--lxn6t-eth0" Oct 27 08:29:18.748895 containerd[1635]: 2025-10-27 08:29:18.613 [INFO][4302] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5c35721d705474afdc8a191fa2a3cd39af2e85feebd2ac7c55d186dff74534cf" HandleID="k8s-pod-network.5c35721d705474afdc8a191fa2a3cd39af2e85feebd2ac7c55d186dff74534cf" Workload="localhost-k8s-calico--apiserver--c64f56595--lxn6t-eth0" Oct 27 08:29:18.749166 containerd[1635]: 2025-10-27 08:29:18.613 [INFO][4302] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5c35721d705474afdc8a191fa2a3cd39af2e85feebd2ac7c55d186dff74534cf" HandleID="k8s-pod-network.5c35721d705474afdc8a191fa2a3cd39af2e85feebd2ac7c55d186dff74534cf" Workload="localhost-k8s-calico--apiserver--c64f56595--lxn6t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00035f650), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-c64f56595-lxn6t", "timestamp":"2025-10-27 08:29:18.613705759 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 27 08:29:18.749166 containerd[1635]: 2025-10-27 08:29:18.613 [INFO][4302] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Oct 27 08:29:18.749166 containerd[1635]: 2025-10-27 08:29:18.621 [INFO][4302] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 27 08:29:18.749166 containerd[1635]: 2025-10-27 08:29:18.621 [INFO][4302] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 27 08:29:18.749166 containerd[1635]: 2025-10-27 08:29:18.690 [INFO][4302] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5c35721d705474afdc8a191fa2a3cd39af2e85feebd2ac7c55d186dff74534cf" host="localhost" Oct 27 08:29:18.749166 containerd[1635]: 2025-10-27 08:29:18.701 [INFO][4302] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 27 08:29:18.749166 containerd[1635]: 2025-10-27 08:29:18.705 [INFO][4302] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 27 08:29:18.749166 containerd[1635]: 2025-10-27 08:29:18.707 [INFO][4302] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 27 08:29:18.749166 containerd[1635]: 2025-10-27 08:29:18.709 [INFO][4302] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 27 08:29:18.749166 containerd[1635]: 2025-10-27 08:29:18.709 [INFO][4302] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5c35721d705474afdc8a191fa2a3cd39af2e85feebd2ac7c55d186dff74534cf" host="localhost" Oct 27 08:29:18.749399 containerd[1635]: 2025-10-27 08:29:18.710 [INFO][4302] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5c35721d705474afdc8a191fa2a3cd39af2e85feebd2ac7c55d186dff74534cf Oct 27 08:29:18.749399 containerd[1635]: 2025-10-27 08:29:18.715 [INFO][4302] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5c35721d705474afdc8a191fa2a3cd39af2e85feebd2ac7c55d186dff74534cf" host="localhost" Oct 27 08:29:18.749399 containerd[1635]: 2025-10-27 08:29:18.723 [INFO][4302] 
ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.5c35721d705474afdc8a191fa2a3cd39af2e85feebd2ac7c55d186dff74534cf" host="localhost" Oct 27 08:29:18.749399 containerd[1635]: 2025-10-27 08:29:18.723 [INFO][4302] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.5c35721d705474afdc8a191fa2a3cd39af2e85feebd2ac7c55d186dff74534cf" host="localhost" Oct 27 08:29:18.749399 containerd[1635]: 2025-10-27 08:29:18.723 [INFO][4302] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 27 08:29:18.749399 containerd[1635]: 2025-10-27 08:29:18.723 [INFO][4302] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="5c35721d705474afdc8a191fa2a3cd39af2e85feebd2ac7c55d186dff74534cf" HandleID="k8s-pod-network.5c35721d705474afdc8a191fa2a3cd39af2e85feebd2ac7c55d186dff74534cf" Workload="localhost-k8s-calico--apiserver--c64f56595--lxn6t-eth0" Oct 27 08:29:18.749553 containerd[1635]: 2025-10-27 08:29:18.727 [INFO][4255] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5c35721d705474afdc8a191fa2a3cd39af2e85feebd2ac7c55d186dff74534cf" Namespace="calico-apiserver" Pod="calico-apiserver-c64f56595-lxn6t" WorkloadEndpoint="localhost-k8s-calico--apiserver--c64f56595--lxn6t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c64f56595--lxn6t-eth0", GenerateName:"calico-apiserver-c64f56595-", Namespace:"calico-apiserver", SelfLink:"", UID:"9f02b196-86ab-47f3-85f7-4a69adbdcd03", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 28, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c64f56595", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-c64f56595-lxn6t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2043980c863", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:29:18.749610 containerd[1635]: 2025-10-27 08:29:18.727 [INFO][4255] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="5c35721d705474afdc8a191fa2a3cd39af2e85feebd2ac7c55d186dff74534cf" Namespace="calico-apiserver" Pod="calico-apiserver-c64f56595-lxn6t" WorkloadEndpoint="localhost-k8s-calico--apiserver--c64f56595--lxn6t-eth0" Oct 27 08:29:18.749610 containerd[1635]: 2025-10-27 08:29:18.727 [INFO][4255] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2043980c863 ContainerID="5c35721d705474afdc8a191fa2a3cd39af2e85feebd2ac7c55d186dff74534cf" Namespace="calico-apiserver" Pod="calico-apiserver-c64f56595-lxn6t" WorkloadEndpoint="localhost-k8s-calico--apiserver--c64f56595--lxn6t-eth0" Oct 27 08:29:18.749610 containerd[1635]: 2025-10-27 08:29:18.732 [INFO][4255] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5c35721d705474afdc8a191fa2a3cd39af2e85feebd2ac7c55d186dff74534cf" Namespace="calico-apiserver" Pod="calico-apiserver-c64f56595-lxn6t" WorkloadEndpoint="localhost-k8s-calico--apiserver--c64f56595--lxn6t-eth0" Oct 27 08:29:18.749674 containerd[1635]: 2025-10-27 08:29:18.732 
[INFO][4255] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5c35721d705474afdc8a191fa2a3cd39af2e85feebd2ac7c55d186dff74534cf" Namespace="calico-apiserver" Pod="calico-apiserver-c64f56595-lxn6t" WorkloadEndpoint="localhost-k8s-calico--apiserver--c64f56595--lxn6t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c64f56595--lxn6t-eth0", GenerateName:"calico-apiserver-c64f56595-", Namespace:"calico-apiserver", SelfLink:"", UID:"9f02b196-86ab-47f3-85f7-4a69adbdcd03", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 28, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c64f56595", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5c35721d705474afdc8a191fa2a3cd39af2e85feebd2ac7c55d186dff74534cf", Pod:"calico-apiserver-c64f56595-lxn6t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2043980c863", MAC:"7e:38:a4:2b:9a:af", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:29:18.749727 containerd[1635]: 2025-10-27 08:29:18.744 [INFO][4255] cni-plugin/k8s.go 532: Wrote 
updated endpoint to datastore ContainerID="5c35721d705474afdc8a191fa2a3cd39af2e85feebd2ac7c55d186dff74534cf" Namespace="calico-apiserver" Pod="calico-apiserver-c64f56595-lxn6t" WorkloadEndpoint="localhost-k8s-calico--apiserver--c64f56595--lxn6t-eth0" Oct 27 08:29:18.777148 containerd[1635]: time="2025-10-27T08:29:18.777022209Z" level=info msg="connecting to shim 5c35721d705474afdc8a191fa2a3cd39af2e85feebd2ac7c55d186dff74534cf" address="unix:///run/containerd/s/bcf47521f2a5d4f2423865f76ebb4a87bbe7043ad58a6b212ed3c6f7b88f3fad" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:29:18.801927 containerd[1635]: time="2025-10-27T08:29:18.801872095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9b5b6f478-9qcvc,Uid:58e52a6b-4ff7-410b-ae51-43b90491a215,Namespace:calico-system,Attempt:0,} returns sandbox id \"a26b57f861134766885a519a12a85f87aa05b6084f9673a81a4d664bb3f3f3c2\"" Oct 27 08:29:18.818203 systemd[1]: Started cri-containerd-5c35721d705474afdc8a191fa2a3cd39af2e85feebd2ac7c55d186dff74534cf.scope - libcontainer container 5c35721d705474afdc8a191fa2a3cd39af2e85feebd2ac7c55d186dff74534cf. 
Oct 27 08:29:18.839608 systemd-networkd[1513]: calibbded941527: Link UP Oct 27 08:29:18.840816 systemd-networkd[1513]: calibbded941527: Gained carrier Oct 27 08:29:18.845323 systemd-resolved[1299]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 27 08:29:18.861998 containerd[1635]: 2025-10-27 08:29:18.570 [INFO][4274] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--6585l-eth0 csi-node-driver- calico-system c11176f9-c15d-4aff-9a2f-9db19f9df938 781 0 2025-10-27 08:28:56 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-6585l eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calibbded941527 [] [] }} ContainerID="42a33b9bf42f6bfac0636e0fbc4f114b73442ae5e1adb7a6bcfe4a5edce9b467" Namespace="calico-system" Pod="csi-node-driver-6585l" WorkloadEndpoint="localhost-k8s-csi--node--driver--6585l-" Oct 27 08:29:18.861998 containerd[1635]: 2025-10-27 08:29:18.571 [INFO][4274] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="42a33b9bf42f6bfac0636e0fbc4f114b73442ae5e1adb7a6bcfe4a5edce9b467" Namespace="calico-system" Pod="csi-node-driver-6585l" WorkloadEndpoint="localhost-k8s-csi--node--driver--6585l-eth0" Oct 27 08:29:18.861998 containerd[1635]: 2025-10-27 08:29:18.626 [INFO][4309] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="42a33b9bf42f6bfac0636e0fbc4f114b73442ae5e1adb7a6bcfe4a5edce9b467" HandleID="k8s-pod-network.42a33b9bf42f6bfac0636e0fbc4f114b73442ae5e1adb7a6bcfe4a5edce9b467" Workload="localhost-k8s-csi--node--driver--6585l-eth0" Oct 27 08:29:18.862321 containerd[1635]: 
2025-10-27 08:29:18.630 [INFO][4309] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="42a33b9bf42f6bfac0636e0fbc4f114b73442ae5e1adb7a6bcfe4a5edce9b467" HandleID="k8s-pod-network.42a33b9bf42f6bfac0636e0fbc4f114b73442ae5e1adb7a6bcfe4a5edce9b467" Workload="localhost-k8s-csi--node--driver--6585l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f9f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-6585l", "timestamp":"2025-10-27 08:29:18.626928809 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 27 08:29:18.862321 containerd[1635]: 2025-10-27 08:29:18.630 [INFO][4309] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 27 08:29:18.862321 containerd[1635]: 2025-10-27 08:29:18.723 [INFO][4309] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 27 08:29:18.862321 containerd[1635]: 2025-10-27 08:29:18.724 [INFO][4309] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 27 08:29:18.862321 containerd[1635]: 2025-10-27 08:29:18.792 [INFO][4309] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.42a33b9bf42f6bfac0636e0fbc4f114b73442ae5e1adb7a6bcfe4a5edce9b467" host="localhost" Oct 27 08:29:18.862321 containerd[1635]: 2025-10-27 08:29:18.805 [INFO][4309] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 27 08:29:18.862321 containerd[1635]: 2025-10-27 08:29:18.809 [INFO][4309] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 27 08:29:18.862321 containerd[1635]: 2025-10-27 08:29:18.812 [INFO][4309] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 27 08:29:18.862321 containerd[1635]: 2025-10-27 08:29:18.814 [INFO][4309] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 27 08:29:18.862321 containerd[1635]: 2025-10-27 08:29:18.814 [INFO][4309] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.42a33b9bf42f6bfac0636e0fbc4f114b73442ae5e1adb7a6bcfe4a5edce9b467" host="localhost" Oct 27 08:29:18.862657 containerd[1635]: 2025-10-27 08:29:18.817 [INFO][4309] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.42a33b9bf42f6bfac0636e0fbc4f114b73442ae5e1adb7a6bcfe4a5edce9b467 Oct 27 08:29:18.862657 containerd[1635]: 2025-10-27 08:29:18.823 [INFO][4309] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.42a33b9bf42f6bfac0636e0fbc4f114b73442ae5e1adb7a6bcfe4a5edce9b467" host="localhost" Oct 27 08:29:18.862657 containerd[1635]: 2025-10-27 08:29:18.830 [INFO][4309] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.42a33b9bf42f6bfac0636e0fbc4f114b73442ae5e1adb7a6bcfe4a5edce9b467" host="localhost" Oct 27 08:29:18.862657 containerd[1635]: 2025-10-27 08:29:18.831 [INFO][4309] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.42a33b9bf42f6bfac0636e0fbc4f114b73442ae5e1adb7a6bcfe4a5edce9b467" host="localhost" Oct 27 08:29:18.862657 containerd[1635]: 2025-10-27 08:29:18.831 [INFO][4309] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 27 08:29:18.862657 containerd[1635]: 2025-10-27 08:29:18.831 [INFO][4309] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="42a33b9bf42f6bfac0636e0fbc4f114b73442ae5e1adb7a6bcfe4a5edce9b467" HandleID="k8s-pod-network.42a33b9bf42f6bfac0636e0fbc4f114b73442ae5e1adb7a6bcfe4a5edce9b467" Workload="localhost-k8s-csi--node--driver--6585l-eth0" Oct 27 08:29:18.862917 containerd[1635]: 2025-10-27 08:29:18.835 [INFO][4274] cni-plugin/k8s.go 418: Populated endpoint ContainerID="42a33b9bf42f6bfac0636e0fbc4f114b73442ae5e1adb7a6bcfe4a5edce9b467" Namespace="calico-system" Pod="csi-node-driver-6585l" WorkloadEndpoint="localhost-k8s-csi--node--driver--6585l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--6585l-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c11176f9-c15d-4aff-9a2f-9db19f9df938", ResourceVersion:"781", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 28, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-6585l", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibbded941527", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:29:18.863084 containerd[1635]: 2025-10-27 08:29:18.835 [INFO][4274] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="42a33b9bf42f6bfac0636e0fbc4f114b73442ae5e1adb7a6bcfe4a5edce9b467" Namespace="calico-system" Pod="csi-node-driver-6585l" WorkloadEndpoint="localhost-k8s-csi--node--driver--6585l-eth0" Oct 27 08:29:18.863084 containerd[1635]: 2025-10-27 08:29:18.835 [INFO][4274] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibbded941527 ContainerID="42a33b9bf42f6bfac0636e0fbc4f114b73442ae5e1adb7a6bcfe4a5edce9b467" Namespace="calico-system" Pod="csi-node-driver-6585l" WorkloadEndpoint="localhost-k8s-csi--node--driver--6585l-eth0" Oct 27 08:29:18.863084 containerd[1635]: 2025-10-27 08:29:18.842 [INFO][4274] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="42a33b9bf42f6bfac0636e0fbc4f114b73442ae5e1adb7a6bcfe4a5edce9b467" Namespace="calico-system" Pod="csi-node-driver-6585l" WorkloadEndpoint="localhost-k8s-csi--node--driver--6585l-eth0" Oct 27 08:29:18.863178 containerd[1635]: 2025-10-27 08:29:18.843 [INFO][4274] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="42a33b9bf42f6bfac0636e0fbc4f114b73442ae5e1adb7a6bcfe4a5edce9b467" 
Namespace="calico-system" Pod="csi-node-driver-6585l" WorkloadEndpoint="localhost-k8s-csi--node--driver--6585l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--6585l-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c11176f9-c15d-4aff-9a2f-9db19f9df938", ResourceVersion:"781", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 28, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"42a33b9bf42f6bfac0636e0fbc4f114b73442ae5e1adb7a6bcfe4a5edce9b467", Pod:"csi-node-driver-6585l", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibbded941527", MAC:"a6:d0:88:9d:45:b6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:29:18.863237 containerd[1635]: 2025-10-27 08:29:18.858 [INFO][4274] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="42a33b9bf42f6bfac0636e0fbc4f114b73442ae5e1adb7a6bcfe4a5edce9b467" Namespace="calico-system" Pod="csi-node-driver-6585l" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--6585l-eth0" Oct 27 08:29:18.884286 containerd[1635]: time="2025-10-27T08:29:18.884233373Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:29:18.889012 containerd[1635]: time="2025-10-27T08:29:18.888970018Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 27 08:29:18.889074 containerd[1635]: time="2025-10-27T08:29:18.889046901Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 27 08:29:18.889211 kubelet[2801]: E1027 08:29:18.889166 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 27 08:29:18.889306 kubelet[2801]: E1027 08:29:18.889222 2801 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 27 08:29:18.889469 kubelet[2801]: E1027 08:29:18.889400 2801 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mp9kg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-cd8dbc6d4-qftjw_calico-system(b8851923-61d5-4c1d-bf80-827889da605a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 27 08:29:18.890689 containerd[1635]: time="2025-10-27T08:29:18.890621881Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 27 08:29:18.891040 kubelet[2801]: E1027 08:29:18.890831 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-cd8dbc6d4-qftjw" podUID="b8851923-61d5-4c1d-bf80-827889da605a" Oct 27 08:29:18.891522 containerd[1635]: time="2025-10-27T08:29:18.891479190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c64f56595-lxn6t,Uid:9f02b196-86ab-47f3-85f7-4a69adbdcd03,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"5c35721d705474afdc8a191fa2a3cd39af2e85feebd2ac7c55d186dff74534cf\"" Oct 27 08:29:18.907901 containerd[1635]: time="2025-10-27T08:29:18.907797121Z" level=info msg="connecting to shim 42a33b9bf42f6bfac0636e0fbc4f114b73442ae5e1adb7a6bcfe4a5edce9b467" address="unix:///run/containerd/s/5b65456f28f7b4a8327b1159b095d41e860d57ea03c0a8fac7cf6e7111617458" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:29:18.940321 systemd[1]: Started 
cri-containerd-42a33b9bf42f6bfac0636e0fbc4f114b73442ae5e1adb7a6bcfe4a5edce9b467.scope - libcontainer container 42a33b9bf42f6bfac0636e0fbc4f114b73442ae5e1adb7a6bcfe4a5edce9b467. Oct 27 08:29:18.958484 systemd-resolved[1299]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 27 08:29:18.974158 containerd[1635]: time="2025-10-27T08:29:18.974063440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6585l,Uid:c11176f9-c15d-4aff-9a2f-9db19f9df938,Namespace:calico-system,Attempt:0,} returns sandbox id \"42a33b9bf42f6bfac0636e0fbc4f114b73442ae5e1adb7a6bcfe4a5edce9b467\"" Oct 27 08:29:19.009362 kubelet[2801]: E1027 08:29:19.009310 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:29:19.009624 containerd[1635]: time="2025-10-27T08:29:19.009596269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-vqr2m,Uid:8a6b3613-c6a1-49d5-80fe-fe16a86ca9d2,Namespace:calico-system,Attempt:0,}" Oct 27 08:29:19.009807 containerd[1635]: time="2025-10-27T08:29:19.009762809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-86jt5,Uid:3e3fd7e5-9a69-4009-b8ff-395503b2b2e3,Namespace:kube-system,Attempt:0,}" Oct 27 08:29:19.114041 systemd-networkd[1513]: cali6841205ca3b: Link UP Oct 27 08:29:19.115063 systemd-networkd[1513]: cali6841205ca3b: Gained carrier Oct 27 08:29:19.131153 containerd[1635]: 2025-10-27 08:29:19.049 [INFO][4527] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--86jt5-eth0 coredns-674b8bbfcf- kube-system 3e3fd7e5-9a69-4009-b8ff-395503b2b2e3 897 0 2025-10-27 08:28:40 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-86jt5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6841205ca3b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="541e666eece0157f6e5936cfed2e360e29ec02759b46eae0e55cbf5fac1343dd" Namespace="kube-system" Pod="coredns-674b8bbfcf-86jt5" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--86jt5-" Oct 27 08:29:19.131153 containerd[1635]: 2025-10-27 08:29:19.049 [INFO][4527] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="541e666eece0157f6e5936cfed2e360e29ec02759b46eae0e55cbf5fac1343dd" Namespace="kube-system" Pod="coredns-674b8bbfcf-86jt5" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--86jt5-eth0" Oct 27 08:29:19.131153 containerd[1635]: 2025-10-27 08:29:19.077 [INFO][4547] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="541e666eece0157f6e5936cfed2e360e29ec02759b46eae0e55cbf5fac1343dd" HandleID="k8s-pod-network.541e666eece0157f6e5936cfed2e360e29ec02759b46eae0e55cbf5fac1343dd" Workload="localhost-k8s-coredns--674b8bbfcf--86jt5-eth0" Oct 27 08:29:19.131380 containerd[1635]: 2025-10-27 08:29:19.077 [INFO][4547] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="541e666eece0157f6e5936cfed2e360e29ec02759b46eae0e55cbf5fac1343dd" HandleID="k8s-pod-network.541e666eece0157f6e5936cfed2e360e29ec02759b46eae0e55cbf5fac1343dd" Workload="localhost-k8s-coredns--674b8bbfcf--86jt5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c6fd0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-86jt5", "timestamp":"2025-10-27 08:29:19.077068941 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 27 08:29:19.131380 containerd[1635]: 
2025-10-27 08:29:19.077 [INFO][4547] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 27 08:29:19.131380 containerd[1635]: 2025-10-27 08:29:19.077 [INFO][4547] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 27 08:29:19.131380 containerd[1635]: 2025-10-27 08:29:19.077 [INFO][4547] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 27 08:29:19.131380 containerd[1635]: 2025-10-27 08:29:19.083 [INFO][4547] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.541e666eece0157f6e5936cfed2e360e29ec02759b46eae0e55cbf5fac1343dd" host="localhost" Oct 27 08:29:19.131380 containerd[1635]: 2025-10-27 08:29:19.087 [INFO][4547] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 27 08:29:19.131380 containerd[1635]: 2025-10-27 08:29:19.091 [INFO][4547] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 27 08:29:19.131380 containerd[1635]: 2025-10-27 08:29:19.094 [INFO][4547] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 27 08:29:19.131380 containerd[1635]: 2025-10-27 08:29:19.097 [INFO][4547] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 27 08:29:19.131380 containerd[1635]: 2025-10-27 08:29:19.097 [INFO][4547] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.541e666eece0157f6e5936cfed2e360e29ec02759b46eae0e55cbf5fac1343dd" host="localhost" Oct 27 08:29:19.131597 containerd[1635]: 2025-10-27 08:29:19.098 [INFO][4547] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.541e666eece0157f6e5936cfed2e360e29ec02759b46eae0e55cbf5fac1343dd Oct 27 08:29:19.131597 containerd[1635]: 2025-10-27 08:29:19.101 [INFO][4547] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.541e666eece0157f6e5936cfed2e360e29ec02759b46eae0e55cbf5fac1343dd" 
host="localhost" Oct 27 08:29:19.131597 containerd[1635]: 2025-10-27 08:29:19.107 [INFO][4547] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.541e666eece0157f6e5936cfed2e360e29ec02759b46eae0e55cbf5fac1343dd" host="localhost" Oct 27 08:29:19.131597 containerd[1635]: 2025-10-27 08:29:19.107 [INFO][4547] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.541e666eece0157f6e5936cfed2e360e29ec02759b46eae0e55cbf5fac1343dd" host="localhost" Oct 27 08:29:19.131597 containerd[1635]: 2025-10-27 08:29:19.107 [INFO][4547] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 27 08:29:19.131597 containerd[1635]: 2025-10-27 08:29:19.107 [INFO][4547] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="541e666eece0157f6e5936cfed2e360e29ec02759b46eae0e55cbf5fac1343dd" HandleID="k8s-pod-network.541e666eece0157f6e5936cfed2e360e29ec02759b46eae0e55cbf5fac1343dd" Workload="localhost-k8s-coredns--674b8bbfcf--86jt5-eth0" Oct 27 08:29:19.131774 containerd[1635]: 2025-10-27 08:29:19.111 [INFO][4527] cni-plugin/k8s.go 418: Populated endpoint ContainerID="541e666eece0157f6e5936cfed2e360e29ec02759b46eae0e55cbf5fac1343dd" Namespace="kube-system" Pod="coredns-674b8bbfcf-86jt5" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--86jt5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--86jt5-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"3e3fd7e5-9a69-4009-b8ff-395503b2b2e3", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 28, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", 
"projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-86jt5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6841205ca3b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:29:19.131841 containerd[1635]: 2025-10-27 08:29:19.111 [INFO][4527] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="541e666eece0157f6e5936cfed2e360e29ec02759b46eae0e55cbf5fac1343dd" Namespace="kube-system" Pod="coredns-674b8bbfcf-86jt5" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--86jt5-eth0" Oct 27 08:29:19.131841 containerd[1635]: 2025-10-27 08:29:19.111 [INFO][4527] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6841205ca3b ContainerID="541e666eece0157f6e5936cfed2e360e29ec02759b46eae0e55cbf5fac1343dd" Namespace="kube-system" Pod="coredns-674b8bbfcf-86jt5" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--86jt5-eth0" Oct 27 08:29:19.131841 containerd[1635]: 2025-10-27 08:29:19.115 [INFO][4527] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="541e666eece0157f6e5936cfed2e360e29ec02759b46eae0e55cbf5fac1343dd" Namespace="kube-system" Pod="coredns-674b8bbfcf-86jt5" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--86jt5-eth0" Oct 27 08:29:19.131910 containerd[1635]: 2025-10-27 08:29:19.115 [INFO][4527] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="541e666eece0157f6e5936cfed2e360e29ec02759b46eae0e55cbf5fac1343dd" Namespace="kube-system" Pod="coredns-674b8bbfcf-86jt5" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--86jt5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--86jt5-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"3e3fd7e5-9a69-4009-b8ff-395503b2b2e3", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 28, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"541e666eece0157f6e5936cfed2e360e29ec02759b46eae0e55cbf5fac1343dd", Pod:"coredns-674b8bbfcf-86jt5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6841205ca3b", MAC:"d2:5c:44:34:c6:b4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:29:19.131910 containerd[1635]: 2025-10-27 08:29:19.126 [INFO][4527] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="541e666eece0157f6e5936cfed2e360e29ec02759b46eae0e55cbf5fac1343dd" Namespace="kube-system" Pod="coredns-674b8bbfcf-86jt5" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--86jt5-eth0" Oct 27 08:29:19.143492 kubelet[2801]: E1027 08:29:19.143423 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:29:19.146911 kubelet[2801]: E1027 08:29:19.146448 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" 
pod="calico-system/whisker-cd8dbc6d4-qftjw" podUID="b8851923-61d5-4c1d-bf80-827889da605a" Oct 27 08:29:19.161786 containerd[1635]: time="2025-10-27T08:29:19.161719149Z" level=info msg="connecting to shim 541e666eece0157f6e5936cfed2e360e29ec02759b46eae0e55cbf5fac1343dd" address="unix:///run/containerd/s/8011f962731650645a149d1d7ae360ae2ffdae6626790cc9bde4e04758533ffb" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:29:19.191141 systemd[1]: Started cri-containerd-541e666eece0157f6e5936cfed2e360e29ec02759b46eae0e55cbf5fac1343dd.scope - libcontainer container 541e666eece0157f6e5936cfed2e360e29ec02759b46eae0e55cbf5fac1343dd. Oct 27 08:29:19.212496 systemd-resolved[1299]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 27 08:29:19.251497 systemd-networkd[1513]: cali39971db3264: Link UP Oct 27 08:29:19.255258 systemd-networkd[1513]: cali39971db3264: Gained carrier Oct 27 08:29:19.255429 containerd[1635]: time="2025-10-27T08:29:19.255307563Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:29:19.255825 containerd[1635]: time="2025-10-27T08:29:19.255797924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-86jt5,Uid:3e3fd7e5-9a69-4009-b8ff-395503b2b2e3,Namespace:kube-system,Attempt:0,} returns sandbox id \"541e666eece0157f6e5936cfed2e360e29ec02759b46eae0e55cbf5fac1343dd\"" Oct 27 08:29:19.257239 kubelet[2801]: E1027 08:29:19.257187 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:29:19.257370 containerd[1635]: time="2025-10-27T08:29:19.257164838Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 27 08:29:19.257370 containerd[1635]: time="2025-10-27T08:29:19.257293853Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 27 08:29:19.257745 kubelet[2801]: E1027 08:29:19.257458 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 27 08:29:19.257745 kubelet[2801]: E1027 08:29:19.257543 2801 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 27 08:29:19.257892 kubelet[2801]: E1027 08:29:19.257808 2801 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8n9pp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-9b5b6f478-9qcvc_calico-system(58e52a6b-4ff7-410b-ae51-43b90491a215): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 27 08:29:19.258217 containerd[1635]: time="2025-10-27T08:29:19.258189676Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 27 08:29:19.259058 kubelet[2801]: E1027 08:29:19.259017 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-9b5b6f478-9qcvc" podUID="58e52a6b-4ff7-410b-ae51-43b90491a215" Oct 27 08:29:19.262814 containerd[1635]: 
time="2025-10-27T08:29:19.262721752Z" level=info msg="CreateContainer within sandbox \"541e666eece0157f6e5936cfed2e360e29ec02759b46eae0e55cbf5fac1343dd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 27 08:29:19.281966 containerd[1635]: time="2025-10-27T08:29:19.281446291Z" level=info msg="Container ff9f26760ae4baca7d46db58e6f510ba60bf5bbac59f1df303b1953aa3e8cd6b: CDI devices from CRI Config.CDIDevices: []" Oct 27 08:29:19.282406 containerd[1635]: 2025-10-27 08:29:19.047 [INFO][4517] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--vqr2m-eth0 goldmane-666569f655- calico-system 8a6b3613-c6a1-49d5-80fe-fe16a86ca9d2 898 0 2025-10-27 08:28:54 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-vqr2m eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali39971db3264 [] [] }} ContainerID="835341286a2c448dba9201f5177b99934c7fbf1e2f0360820fb06474e532e08b" Namespace="calico-system" Pod="goldmane-666569f655-vqr2m" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--vqr2m-" Oct 27 08:29:19.282406 containerd[1635]: 2025-10-27 08:29:19.047 [INFO][4517] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="835341286a2c448dba9201f5177b99934c7fbf1e2f0360820fb06474e532e08b" Namespace="calico-system" Pod="goldmane-666569f655-vqr2m" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--vqr2m-eth0" Oct 27 08:29:19.282406 containerd[1635]: 2025-10-27 08:29:19.082 [INFO][4545] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="835341286a2c448dba9201f5177b99934c7fbf1e2f0360820fb06474e532e08b" HandleID="k8s-pod-network.835341286a2c448dba9201f5177b99934c7fbf1e2f0360820fb06474e532e08b" 
Workload="localhost-k8s-goldmane--666569f655--vqr2m-eth0" Oct 27 08:29:19.282406 containerd[1635]: 2025-10-27 08:29:19.083 [INFO][4545] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="835341286a2c448dba9201f5177b99934c7fbf1e2f0360820fb06474e532e08b" HandleID="k8s-pod-network.835341286a2c448dba9201f5177b99934c7fbf1e2f0360820fb06474e532e08b" Workload="localhost-k8s-goldmane--666569f655--vqr2m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00035f010), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-vqr2m", "timestamp":"2025-10-27 08:29:19.082848425 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 27 08:29:19.282406 containerd[1635]: 2025-10-27 08:29:19.083 [INFO][4545] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 27 08:29:19.282406 containerd[1635]: 2025-10-27 08:29:19.107 [INFO][4545] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 27 08:29:19.282406 containerd[1635]: 2025-10-27 08:29:19.107 [INFO][4545] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 27 08:29:19.282406 containerd[1635]: 2025-10-27 08:29:19.190 [INFO][4545] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.835341286a2c448dba9201f5177b99934c7fbf1e2f0360820fb06474e532e08b" host="localhost" Oct 27 08:29:19.282406 containerd[1635]: 2025-10-27 08:29:19.196 [INFO][4545] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 27 08:29:19.282406 containerd[1635]: 2025-10-27 08:29:19.205 [INFO][4545] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 27 08:29:19.282406 containerd[1635]: 2025-10-27 08:29:19.209 [INFO][4545] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 27 08:29:19.282406 containerd[1635]: 2025-10-27 08:29:19.216 [INFO][4545] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 27 08:29:19.282406 containerd[1635]: 2025-10-27 08:29:19.216 [INFO][4545] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.835341286a2c448dba9201f5177b99934c7fbf1e2f0360820fb06474e532e08b" host="localhost" Oct 27 08:29:19.282406 containerd[1635]: 2025-10-27 08:29:19.220 [INFO][4545] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.835341286a2c448dba9201f5177b99934c7fbf1e2f0360820fb06474e532e08b Oct 27 08:29:19.282406 containerd[1635]: 2025-10-27 08:29:19.224 [INFO][4545] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.835341286a2c448dba9201f5177b99934c7fbf1e2f0360820fb06474e532e08b" host="localhost" Oct 27 08:29:19.282406 containerd[1635]: 2025-10-27 08:29:19.232 [INFO][4545] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.835341286a2c448dba9201f5177b99934c7fbf1e2f0360820fb06474e532e08b" host="localhost" Oct 27 08:29:19.282406 containerd[1635]: 2025-10-27 08:29:19.232 [INFO][4545] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.835341286a2c448dba9201f5177b99934c7fbf1e2f0360820fb06474e532e08b" host="localhost" Oct 27 08:29:19.282406 containerd[1635]: 2025-10-27 08:29:19.232 [INFO][4545] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 27 08:29:19.282406 containerd[1635]: 2025-10-27 08:29:19.232 [INFO][4545] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="835341286a2c448dba9201f5177b99934c7fbf1e2f0360820fb06474e532e08b" HandleID="k8s-pod-network.835341286a2c448dba9201f5177b99934c7fbf1e2f0360820fb06474e532e08b" Workload="localhost-k8s-goldmane--666569f655--vqr2m-eth0" Oct 27 08:29:19.282842 containerd[1635]: 2025-10-27 08:29:19.243 [INFO][4517] cni-plugin/k8s.go 418: Populated endpoint ContainerID="835341286a2c448dba9201f5177b99934c7fbf1e2f0360820fb06474e532e08b" Namespace="calico-system" Pod="goldmane-666569f655-vqr2m" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--vqr2m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--vqr2m-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"8a6b3613-c6a1-49d5-80fe-fe16a86ca9d2", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 28, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-vqr2m", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali39971db3264", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:29:19.282842 containerd[1635]: 2025-10-27 08:29:19.246 [INFO][4517] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="835341286a2c448dba9201f5177b99934c7fbf1e2f0360820fb06474e532e08b" Namespace="calico-system" Pod="goldmane-666569f655-vqr2m" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--vqr2m-eth0" Oct 27 08:29:19.282842 containerd[1635]: 2025-10-27 08:29:19.246 [INFO][4517] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali39971db3264 ContainerID="835341286a2c448dba9201f5177b99934c7fbf1e2f0360820fb06474e532e08b" Namespace="calico-system" Pod="goldmane-666569f655-vqr2m" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--vqr2m-eth0" Oct 27 08:29:19.282842 containerd[1635]: 2025-10-27 08:29:19.261 [INFO][4517] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="835341286a2c448dba9201f5177b99934c7fbf1e2f0360820fb06474e532e08b" Namespace="calico-system" Pod="goldmane-666569f655-vqr2m" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--vqr2m-eth0" Oct 27 08:29:19.282842 containerd[1635]: 2025-10-27 08:29:19.262 [INFO][4517] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="835341286a2c448dba9201f5177b99934c7fbf1e2f0360820fb06474e532e08b" Namespace="calico-system" Pod="goldmane-666569f655-vqr2m" 
WorkloadEndpoint="localhost-k8s-goldmane--666569f655--vqr2m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--vqr2m-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"8a6b3613-c6a1-49d5-80fe-fe16a86ca9d2", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 28, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"835341286a2c448dba9201f5177b99934c7fbf1e2f0360820fb06474e532e08b", Pod:"goldmane-666569f655-vqr2m", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali39971db3264", MAC:"12:e6:9a:dc:95:c2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:29:19.282842 containerd[1635]: 2025-10-27 08:29:19.277 [INFO][4517] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="835341286a2c448dba9201f5177b99934c7fbf1e2f0360820fb06474e532e08b" Namespace="calico-system" Pod="goldmane-666569f655-vqr2m" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--vqr2m-eth0" Oct 27 08:29:19.291399 containerd[1635]: time="2025-10-27T08:29:19.291373723Z" level=info msg="CreateContainer 
within sandbox \"541e666eece0157f6e5936cfed2e360e29ec02759b46eae0e55cbf5fac1343dd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ff9f26760ae4baca7d46db58e6f510ba60bf5bbac59f1df303b1953aa3e8cd6b\"" Oct 27 08:29:19.291891 containerd[1635]: time="2025-10-27T08:29:19.291524161Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bb760f021b8f77d4360969eac8505b2f367d9f8d4f56571b03f7ffaaa3330ab0\" id:\"bdb9dd2e4183c35bc3eb48b292d99f7f16dbda8f73cae36f39c38a9845724869\" pid:4597 exit_status:1 exited_at:{seconds:1761553759 nanos:291174560}" Oct 27 08:29:19.291891 containerd[1635]: time="2025-10-27T08:29:19.291832782Z" level=info msg="StartContainer for \"ff9f26760ae4baca7d46db58e6f510ba60bf5bbac59f1df303b1953aa3e8cd6b\"" Oct 27 08:29:19.292716 containerd[1635]: time="2025-10-27T08:29:19.292692232Z" level=info msg="connecting to shim ff9f26760ae4baca7d46db58e6f510ba60bf5bbac59f1df303b1953aa3e8cd6b" address="unix:///run/containerd/s/8011f962731650645a149d1d7ae360ae2ffdae6626790cc9bde4e04758533ffb" protocol=ttrpc version=3 Oct 27 08:29:19.311806 containerd[1635]: time="2025-10-27T08:29:19.311752384Z" level=info msg="connecting to shim 835341286a2c448dba9201f5177b99934c7fbf1e2f0360820fb06474e532e08b" address="unix:///run/containerd/s/1645f9e644379f15a46457d73cb706cfd9afe4e76824372f36f2b728e3c7cd73" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:29:19.322078 systemd[1]: Started cri-containerd-ff9f26760ae4baca7d46db58e6f510ba60bf5bbac59f1df303b1953aa3e8cd6b.scope - libcontainer container ff9f26760ae4baca7d46db58e6f510ba60bf5bbac59f1df303b1953aa3e8cd6b. Oct 27 08:29:19.346098 systemd[1]: Started cri-containerd-835341286a2c448dba9201f5177b99934c7fbf1e2f0360820fb06474e532e08b.scope - libcontainer container 835341286a2c448dba9201f5177b99934c7fbf1e2f0360820fb06474e532e08b. 
Oct 27 08:29:19.361623 systemd-resolved[1299]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 27 08:29:19.372966 containerd[1635]: time="2025-10-27T08:29:19.372891995Z" level=info msg="StartContainer for \"ff9f26760ae4baca7d46db58e6f510ba60bf5bbac59f1df303b1953aa3e8cd6b\" returns successfully" Oct 27 08:29:19.395339 containerd[1635]: time="2025-10-27T08:29:19.395258570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-vqr2m,Uid:8a6b3613-c6a1-49d5-80fe-fe16a86ca9d2,Namespace:calico-system,Attempt:0,} returns sandbox id \"835341286a2c448dba9201f5177b99934c7fbf1e2f0360820fb06474e532e08b\"" Oct 27 08:29:19.504158 systemd-networkd[1513]: cali1901bf50979: Gained IPv6LL Oct 27 08:29:19.626662 containerd[1635]: time="2025-10-27T08:29:19.626588809Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:29:19.634821 containerd[1635]: time="2025-10-27T08:29:19.634732904Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 27 08:29:19.635002 containerd[1635]: time="2025-10-27T08:29:19.634824705Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 27 08:29:19.635102 kubelet[2801]: E1027 08:29:19.635059 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:29:19.635190 kubelet[2801]: E1027 08:29:19.635114 2801 kuberuntime_image.go:42] 
"Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:29:19.635433 kubelet[2801]: E1027 08:29:19.635368 2801 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vkzjz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-c64f56595-lxn6t_calico-apiserver(9f02b196-86ab-47f3-85f7-4a69adbdcd03): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 27 08:29:19.635743 containerd[1635]: time="2025-10-27T08:29:19.635409372Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 27 08:29:19.637024 kubelet[2801]: E1027 08:29:19.636967 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c64f56595-lxn6t" podUID="9f02b196-86ab-47f3-85f7-4a69adbdcd03" Oct 27 08:29:19.696136 systemd-networkd[1513]: vxlan.calico: Gained 
IPv6LL Oct 27 08:29:19.952182 systemd-networkd[1513]: calibbded941527: Gained IPv6LL Oct 27 08:29:20.157716 kubelet[2801]: E1027 08:29:20.157685 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:29:20.158393 kubelet[2801]: E1027 08:29:20.157820 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-9b5b6f478-9qcvc" podUID="58e52a6b-4ff7-410b-ae51-43b90491a215" Oct 27 08:29:20.158393 kubelet[2801]: E1027 08:29:20.158174 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c64f56595-lxn6t" podUID="9f02b196-86ab-47f3-85f7-4a69adbdcd03" Oct 27 08:29:20.177566 kubelet[2801]: I1027 08:29:20.177472 2801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-86jt5" podStartSLOduration=40.177454549 podStartE2EDuration="40.177454549s" podCreationTimestamp="2025-10-27 08:28:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 08:29:20.175834187 +0000 UTC m=+44.272713271" watchObservedRunningTime="2025-10-27 08:29:20.177454549 +0000 UTC m=+44.274333633" Oct 27 08:29:20.656138 systemd-networkd[1513]: cali2051796cea9: Gained IPv6LL Oct 27 08:29:20.720180 systemd-networkd[1513]: cali2043980c863: Gained IPv6LL Oct 27 08:29:20.958191 containerd[1635]: time="2025-10-27T08:29:20.958041827Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:29:20.960962 containerd[1635]: time="2025-10-27T08:29:20.960895505Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 27 08:29:20.961031 containerd[1635]: time="2025-10-27T08:29:20.960975714Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 27 08:29:20.961215 kubelet[2801]: E1027 08:29:20.961165 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 27 08:29:20.961327 kubelet[2801]: E1027 08:29:20.961229 2801 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 27 08:29:20.961513 kubelet[2801]: E1027 08:29:20.961466 2801 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-prh88,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-6585l_calico-system(c11176f9-c15d-4aff-9a2f-9db19f9df938): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 27 08:29:20.961637 containerd[1635]: time="2025-10-27T08:29:20.961530069Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 27 08:29:21.008960 kubelet[2801]: E1027 08:29:21.008909 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:29:21.009380 containerd[1635]: time="2025-10-27T08:29:21.009327542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4sbmc,Uid:fe014e01-beff-4bf8-ab4a-6b42d97b928c,Namespace:kube-system,Attempt:0,}" Oct 27 08:29:21.041772 systemd-networkd[1513]: cali6841205ca3b: Gained IPv6LL Oct 27 08:29:21.105157 systemd-networkd[1513]: cali39971db3264: Gained IPv6LL Oct 27 08:29:21.119990 systemd-networkd[1513]: cali6902f345245: Link UP Oct 27 08:29:21.120268 systemd-networkd[1513]: cali6902f345245: Gained carrier Oct 27 08:29:21.136263 containerd[1635]: 2025-10-27 08:29:21.049 [INFO][4740] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--4sbmc-eth0 coredns-674b8bbfcf- kube-system fe014e01-beff-4bf8-ab4a-6b42d97b928c 904 0 2025-10-27 08:28:40 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-4sbmc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6902f345245 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="272c99f49ebd7f47e360ed3a0b5b1ec6c8746ca6da58fac24f5fa60b2d7d908e" Namespace="kube-system" Pod="coredns-674b8bbfcf-4sbmc" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--4sbmc-" Oct 27 08:29:21.136263 containerd[1635]: 2025-10-27 08:29:21.049 [INFO][4740] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="272c99f49ebd7f47e360ed3a0b5b1ec6c8746ca6da58fac24f5fa60b2d7d908e" Namespace="kube-system" Pod="coredns-674b8bbfcf-4sbmc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--4sbmc-eth0" Oct 27 08:29:21.136263 containerd[1635]: 2025-10-27 08:29:21.075 [INFO][4754] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="272c99f49ebd7f47e360ed3a0b5b1ec6c8746ca6da58fac24f5fa60b2d7d908e" HandleID="k8s-pod-network.272c99f49ebd7f47e360ed3a0b5b1ec6c8746ca6da58fac24f5fa60b2d7d908e" Workload="localhost-k8s-coredns--674b8bbfcf--4sbmc-eth0" Oct 27 08:29:21.136263 containerd[1635]: 2025-10-27 08:29:21.075 [INFO][4754] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="272c99f49ebd7f47e360ed3a0b5b1ec6c8746ca6da58fac24f5fa60b2d7d908e" HandleID="k8s-pod-network.272c99f49ebd7f47e360ed3a0b5b1ec6c8746ca6da58fac24f5fa60b2d7d908e" Workload="localhost-k8s-coredns--674b8bbfcf--4sbmc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139600), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-4sbmc", "timestamp":"2025-10-27 08:29:21.075799142 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 27 08:29:21.136263 containerd[1635]: 2025-10-27 08:29:21.076 [INFO][4754] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 27 08:29:21.136263 containerd[1635]: 2025-10-27 08:29:21.076 [INFO][4754] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 27 08:29:21.136263 containerd[1635]: 2025-10-27 08:29:21.076 [INFO][4754] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 27 08:29:21.136263 containerd[1635]: 2025-10-27 08:29:21.082 [INFO][4754] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.272c99f49ebd7f47e360ed3a0b5b1ec6c8746ca6da58fac24f5fa60b2d7d908e" host="localhost" Oct 27 08:29:21.136263 containerd[1635]: 2025-10-27 08:29:21.090 [INFO][4754] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 27 08:29:21.136263 containerd[1635]: 2025-10-27 08:29:21.096 [INFO][4754] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 27 08:29:21.136263 containerd[1635]: 2025-10-27 08:29:21.097 [INFO][4754] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 27 08:29:21.136263 containerd[1635]: 2025-10-27 08:29:21.100 [INFO][4754] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 27 08:29:21.136263 containerd[1635]: 2025-10-27 08:29:21.100 [INFO][4754] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.272c99f49ebd7f47e360ed3a0b5b1ec6c8746ca6da58fac24f5fa60b2d7d908e" host="localhost" Oct 27 08:29:21.136263 containerd[1635]: 2025-10-27 08:29:21.102 [INFO][4754] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.272c99f49ebd7f47e360ed3a0b5b1ec6c8746ca6da58fac24f5fa60b2d7d908e Oct 27 08:29:21.136263 containerd[1635]: 2025-10-27 08:29:21.106 [INFO][4754] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.272c99f49ebd7f47e360ed3a0b5b1ec6c8746ca6da58fac24f5fa60b2d7d908e" host="localhost" Oct 27 08:29:21.136263 containerd[1635]: 2025-10-27 08:29:21.113 [INFO][4754] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.272c99f49ebd7f47e360ed3a0b5b1ec6c8746ca6da58fac24f5fa60b2d7d908e" host="localhost" Oct 27 08:29:21.136263 containerd[1635]: 2025-10-27 08:29:21.113 [INFO][4754] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.272c99f49ebd7f47e360ed3a0b5b1ec6c8746ca6da58fac24f5fa60b2d7d908e" host="localhost" Oct 27 08:29:21.136263 containerd[1635]: 2025-10-27 08:29:21.113 [INFO][4754] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 27 08:29:21.136263 containerd[1635]: 2025-10-27 08:29:21.113 [INFO][4754] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="272c99f49ebd7f47e360ed3a0b5b1ec6c8746ca6da58fac24f5fa60b2d7d908e" HandleID="k8s-pod-network.272c99f49ebd7f47e360ed3a0b5b1ec6c8746ca6da58fac24f5fa60b2d7d908e" Workload="localhost-k8s-coredns--674b8bbfcf--4sbmc-eth0" Oct 27 08:29:21.137072 containerd[1635]: 2025-10-27 08:29:21.116 [INFO][4740] cni-plugin/k8s.go 418: Populated endpoint ContainerID="272c99f49ebd7f47e360ed3a0b5b1ec6c8746ca6da58fac24f5fa60b2d7d908e" Namespace="kube-system" Pod="coredns-674b8bbfcf-4sbmc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--4sbmc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--4sbmc-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"fe014e01-beff-4bf8-ab4a-6b42d97b928c", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 28, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-4sbmc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6902f345245", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:29:21.137072 containerd[1635]: 2025-10-27 08:29:21.117 [INFO][4740] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="272c99f49ebd7f47e360ed3a0b5b1ec6c8746ca6da58fac24f5fa60b2d7d908e" Namespace="kube-system" Pod="coredns-674b8bbfcf-4sbmc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--4sbmc-eth0" Oct 27 08:29:21.137072 containerd[1635]: 2025-10-27 08:29:21.117 [INFO][4740] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6902f345245 ContainerID="272c99f49ebd7f47e360ed3a0b5b1ec6c8746ca6da58fac24f5fa60b2d7d908e" Namespace="kube-system" Pod="coredns-674b8bbfcf-4sbmc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--4sbmc-eth0" Oct 27 08:29:21.137072 containerd[1635]: 2025-10-27 08:29:21.119 [INFO][4740] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="272c99f49ebd7f47e360ed3a0b5b1ec6c8746ca6da58fac24f5fa60b2d7d908e" Namespace="kube-system" Pod="coredns-674b8bbfcf-4sbmc" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--4sbmc-eth0" Oct 27 08:29:21.137072 containerd[1635]: 2025-10-27 08:29:21.121 [INFO][4740] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="272c99f49ebd7f47e360ed3a0b5b1ec6c8746ca6da58fac24f5fa60b2d7d908e" Namespace="kube-system" Pod="coredns-674b8bbfcf-4sbmc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--4sbmc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--4sbmc-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"fe014e01-beff-4bf8-ab4a-6b42d97b928c", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 28, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"272c99f49ebd7f47e360ed3a0b5b1ec6c8746ca6da58fac24f5fa60b2d7d908e", Pod:"coredns-674b8bbfcf-4sbmc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6902f345245", MAC:"b6:05:50:40:9f:99", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:29:21.137072 containerd[1635]: 2025-10-27 08:29:21.131 [INFO][4740] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="272c99f49ebd7f47e360ed3a0b5b1ec6c8746ca6da58fac24f5fa60b2d7d908e" Namespace="kube-system" Pod="coredns-674b8bbfcf-4sbmc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--4sbmc-eth0" Oct 27 08:29:21.160678 kubelet[2801]: E1027 08:29:21.160136 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:29:21.164038 containerd[1635]: time="2025-10-27T08:29:21.163978779Z" level=info msg="connecting to shim 272c99f49ebd7f47e360ed3a0b5b1ec6c8746ca6da58fac24f5fa60b2d7d908e" address="unix:///run/containerd/s/7a260f1122dba71d4a61f0cfb22d858eb8e2eeede40794ef74e6533ee5f1fd1b" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:29:21.199124 systemd[1]: Started cri-containerd-272c99f49ebd7f47e360ed3a0b5b1ec6c8746ca6da58fac24f5fa60b2d7d908e.scope - libcontainer container 272c99f49ebd7f47e360ed3a0b5b1ec6c8746ca6da58fac24f5fa60b2d7d908e. 
Oct 27 08:29:21.213916 systemd-resolved[1299]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 27 08:29:21.244749 containerd[1635]: time="2025-10-27T08:29:21.244700308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4sbmc,Uid:fe014e01-beff-4bf8-ab4a-6b42d97b928c,Namespace:kube-system,Attempt:0,} returns sandbox id \"272c99f49ebd7f47e360ed3a0b5b1ec6c8746ca6da58fac24f5fa60b2d7d908e\"" Oct 27 08:29:21.245596 kubelet[2801]: E1027 08:29:21.245561 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:29:21.249587 containerd[1635]: time="2025-10-27T08:29:21.249549948Z" level=info msg="CreateContainer within sandbox \"272c99f49ebd7f47e360ed3a0b5b1ec6c8746ca6da58fac24f5fa60b2d7d908e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 27 08:29:21.258867 containerd[1635]: time="2025-10-27T08:29:21.258814540Z" level=info msg="Container 2f6ebc1fd55ba71596c3cb98bc0cb04c90ae334d88e0443f39640257b00b3fcd: CDI devices from CRI Config.CDIDevices: []" Oct 27 08:29:21.264562 containerd[1635]: time="2025-10-27T08:29:21.264527995Z" level=info msg="CreateContainer within sandbox \"272c99f49ebd7f47e360ed3a0b5b1ec6c8746ca6da58fac24f5fa60b2d7d908e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2f6ebc1fd55ba71596c3cb98bc0cb04c90ae334d88e0443f39640257b00b3fcd\"" Oct 27 08:29:21.265104 containerd[1635]: time="2025-10-27T08:29:21.265073761Z" level=info msg="StartContainer for \"2f6ebc1fd55ba71596c3cb98bc0cb04c90ae334d88e0443f39640257b00b3fcd\"" Oct 27 08:29:21.265860 containerd[1635]: time="2025-10-27T08:29:21.265826006Z" level=info msg="connecting to shim 2f6ebc1fd55ba71596c3cb98bc0cb04c90ae334d88e0443f39640257b00b3fcd" address="unix:///run/containerd/s/7a260f1122dba71d4a61f0cfb22d858eb8e2eeede40794ef74e6533ee5f1fd1b" protocol=ttrpc version=3 
Oct 27 08:29:21.299094 systemd[1]: Started cri-containerd-2f6ebc1fd55ba71596c3cb98bc0cb04c90ae334d88e0443f39640257b00b3fcd.scope - libcontainer container 2f6ebc1fd55ba71596c3cb98bc0cb04c90ae334d88e0443f39640257b00b3fcd. Oct 27 08:29:21.328655 containerd[1635]: time="2025-10-27T08:29:21.328605089Z" level=info msg="StartContainer for \"2f6ebc1fd55ba71596c3cb98bc0cb04c90ae334d88e0443f39640257b00b3fcd\" returns successfully" Oct 27 08:29:21.686452 containerd[1635]: time="2025-10-27T08:29:21.686380085Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:29:21.687702 containerd[1635]: time="2025-10-27T08:29:21.687637467Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 27 08:29:21.687766 containerd[1635]: time="2025-10-27T08:29:21.687694208Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 27 08:29:21.687945 kubelet[2801]: E1027 08:29:21.687901 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 27 08:29:21.688010 kubelet[2801]: E1027 08:29:21.687990 2801 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 27 08:29:21.688308 
containerd[1635]: time="2025-10-27T08:29:21.688277279Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 27 08:29:21.688365 kubelet[2801]: E1027 08:29:21.688260 2801 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qm2tw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-vqr2m_calico-system(8a6b3613-c6a1-49d5-80fe-fe16a86ca9d2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 27 08:29:21.689792 kubelet[2801]: E1027 08:29:21.689743 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vqr2m" podUID="8a6b3613-c6a1-49d5-80fe-fe16a86ca9d2" Oct 27 
08:29:22.009380 containerd[1635]: time="2025-10-27T08:29:22.009335098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c64f56595-lp5c4,Uid:082222f6-7d6e-417e-ac7d-32f9df4dff89,Namespace:calico-apiserver,Attempt:0,}" Oct 27 08:29:22.091452 containerd[1635]: time="2025-10-27T08:29:22.091396816Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:29:22.092897 containerd[1635]: time="2025-10-27T08:29:22.092764371Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 27 08:29:22.092897 containerd[1635]: time="2025-10-27T08:29:22.092857936Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 27 08:29:22.093124 kubelet[2801]: E1027 08:29:22.093072 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 27 08:29:22.093209 kubelet[2801]: E1027 08:29:22.093140 2801 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 27 08:29:22.093340 kubelet[2801]: 
E1027 08:29:22.093286 2801 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-prh88,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-6585l_calico-system(c11176f9-c15d-4aff-9a2f-9db19f9df938): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 27 08:29:22.094544 kubelet[2801]: E1027 08:29:22.094486 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6585l" podUID="c11176f9-c15d-4aff-9a2f-9db19f9df938" Oct 27 08:29:22.113328 systemd-networkd[1513]: calib6c7f903b9e: Link UP Oct 27 08:29:22.113538 systemd-networkd[1513]: calib6c7f903b9e: Gained carrier Oct 27 08:29:22.126358 containerd[1635]: 2025-10-27 08:29:22.046 [INFO][4854] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--c64f56595--lp5c4-eth0 calico-apiserver-c64f56595- calico-apiserver 082222f6-7d6e-417e-ac7d-32f9df4dff89 893 0 2025-10-27 08:28:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:c64f56595 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-c64f56595-lp5c4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib6c7f903b9e [] [] }} ContainerID="2ca556d04377670763abe43b30b534eefe1a21d5fc04f6b2cf855e6baee9a665" Namespace="calico-apiserver" Pod="calico-apiserver-c64f56595-lp5c4" WorkloadEndpoint="localhost-k8s-calico--apiserver--c64f56595--lp5c4-" Oct 27 08:29:22.126358 containerd[1635]: 2025-10-27 08:29:22.046 [INFO][4854] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2ca556d04377670763abe43b30b534eefe1a21d5fc04f6b2cf855e6baee9a665" Namespace="calico-apiserver" Pod="calico-apiserver-c64f56595-lp5c4" WorkloadEndpoint="localhost-k8s-calico--apiserver--c64f56595--lp5c4-eth0" Oct 27 08:29:22.126358 containerd[1635]: 2025-10-27 08:29:22.075 [INFO][4868] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2ca556d04377670763abe43b30b534eefe1a21d5fc04f6b2cf855e6baee9a665" HandleID="k8s-pod-network.2ca556d04377670763abe43b30b534eefe1a21d5fc04f6b2cf855e6baee9a665" Workload="localhost-k8s-calico--apiserver--c64f56595--lp5c4-eth0" Oct 27 08:29:22.126358 containerd[1635]: 2025-10-27 08:29:22.076 [INFO][4868] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2ca556d04377670763abe43b30b534eefe1a21d5fc04f6b2cf855e6baee9a665" HandleID="k8s-pod-network.2ca556d04377670763abe43b30b534eefe1a21d5fc04f6b2cf855e6baee9a665" Workload="localhost-k8s-calico--apiserver--c64f56595--lp5c4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004ec00), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-c64f56595-lp5c4", "timestamp":"2025-10-27 08:29:22.075879614 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Oct 27 08:29:22.126358 containerd[1635]: 2025-10-27 08:29:22.076 [INFO][4868] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 27 08:29:22.126358 containerd[1635]: 2025-10-27 08:29:22.076 [INFO][4868] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 27 08:29:22.126358 containerd[1635]: 2025-10-27 08:29:22.076 [INFO][4868] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 27 08:29:22.126358 containerd[1635]: 2025-10-27 08:29:22.082 [INFO][4868] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2ca556d04377670763abe43b30b534eefe1a21d5fc04f6b2cf855e6baee9a665" host="localhost" Oct 27 08:29:22.126358 containerd[1635]: 2025-10-27 08:29:22.086 [INFO][4868] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 27 08:29:22.126358 containerd[1635]: 2025-10-27 08:29:22.090 [INFO][4868] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 27 08:29:22.126358 containerd[1635]: 2025-10-27 08:29:22.091 [INFO][4868] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 27 08:29:22.126358 containerd[1635]: 2025-10-27 08:29:22.094 [INFO][4868] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 27 08:29:22.126358 containerd[1635]: 2025-10-27 08:29:22.094 [INFO][4868] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2ca556d04377670763abe43b30b534eefe1a21d5fc04f6b2cf855e6baee9a665" host="localhost" Oct 27 08:29:22.126358 containerd[1635]: 2025-10-27 08:29:22.096 [INFO][4868] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2ca556d04377670763abe43b30b534eefe1a21d5fc04f6b2cf855e6baee9a665 Oct 27 08:29:22.126358 containerd[1635]: 2025-10-27 08:29:22.100 [INFO][4868] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 
handle="k8s-pod-network.2ca556d04377670763abe43b30b534eefe1a21d5fc04f6b2cf855e6baee9a665" host="localhost" Oct 27 08:29:22.126358 containerd[1635]: 2025-10-27 08:29:22.106 [INFO][4868] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.2ca556d04377670763abe43b30b534eefe1a21d5fc04f6b2cf855e6baee9a665" host="localhost" Oct 27 08:29:22.126358 containerd[1635]: 2025-10-27 08:29:22.106 [INFO][4868] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.2ca556d04377670763abe43b30b534eefe1a21d5fc04f6b2cf855e6baee9a665" host="localhost" Oct 27 08:29:22.126358 containerd[1635]: 2025-10-27 08:29:22.106 [INFO][4868] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 27 08:29:22.126358 containerd[1635]: 2025-10-27 08:29:22.106 [INFO][4868] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="2ca556d04377670763abe43b30b534eefe1a21d5fc04f6b2cf855e6baee9a665" HandleID="k8s-pod-network.2ca556d04377670763abe43b30b534eefe1a21d5fc04f6b2cf855e6baee9a665" Workload="localhost-k8s-calico--apiserver--c64f56595--lp5c4-eth0" Oct 27 08:29:22.127018 containerd[1635]: 2025-10-27 08:29:22.110 [INFO][4854] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2ca556d04377670763abe43b30b534eefe1a21d5fc04f6b2cf855e6baee9a665" Namespace="calico-apiserver" Pod="calico-apiserver-c64f56595-lp5c4" WorkloadEndpoint="localhost-k8s-calico--apiserver--c64f56595--lp5c4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c64f56595--lp5c4-eth0", GenerateName:"calico-apiserver-c64f56595-", Namespace:"calico-apiserver", SelfLink:"", UID:"082222f6-7d6e-417e-ac7d-32f9df4dff89", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 28, 52, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c64f56595", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-c64f56595-lp5c4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib6c7f903b9e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:29:22.127018 containerd[1635]: 2025-10-27 08:29:22.110 [INFO][4854] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="2ca556d04377670763abe43b30b534eefe1a21d5fc04f6b2cf855e6baee9a665" Namespace="calico-apiserver" Pod="calico-apiserver-c64f56595-lp5c4" WorkloadEndpoint="localhost-k8s-calico--apiserver--c64f56595--lp5c4-eth0" Oct 27 08:29:22.127018 containerd[1635]: 2025-10-27 08:29:22.110 [INFO][4854] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib6c7f903b9e ContainerID="2ca556d04377670763abe43b30b534eefe1a21d5fc04f6b2cf855e6baee9a665" Namespace="calico-apiserver" Pod="calico-apiserver-c64f56595-lp5c4" WorkloadEndpoint="localhost-k8s-calico--apiserver--c64f56595--lp5c4-eth0" Oct 27 08:29:22.127018 containerd[1635]: 2025-10-27 08:29:22.113 [INFO][4854] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2ca556d04377670763abe43b30b534eefe1a21d5fc04f6b2cf855e6baee9a665" 
Namespace="calico-apiserver" Pod="calico-apiserver-c64f56595-lp5c4" WorkloadEndpoint="localhost-k8s-calico--apiserver--c64f56595--lp5c4-eth0" Oct 27 08:29:22.127018 containerd[1635]: 2025-10-27 08:29:22.115 [INFO][4854] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2ca556d04377670763abe43b30b534eefe1a21d5fc04f6b2cf855e6baee9a665" Namespace="calico-apiserver" Pod="calico-apiserver-c64f56595-lp5c4" WorkloadEndpoint="localhost-k8s-calico--apiserver--c64f56595--lp5c4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c64f56595--lp5c4-eth0", GenerateName:"calico-apiserver-c64f56595-", Namespace:"calico-apiserver", SelfLink:"", UID:"082222f6-7d6e-417e-ac7d-32f9df4dff89", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 28, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c64f56595", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2ca556d04377670763abe43b30b534eefe1a21d5fc04f6b2cf855e6baee9a665", Pod:"calico-apiserver-c64f56595-lp5c4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib6c7f903b9e", MAC:"8e:f8:18:f6:47:9a", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:29:22.127018 containerd[1635]: 2025-10-27 08:29:22.123 [INFO][4854] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2ca556d04377670763abe43b30b534eefe1a21d5fc04f6b2cf855e6baee9a665" Namespace="calico-apiserver" Pod="calico-apiserver-c64f56595-lp5c4" WorkloadEndpoint="localhost-k8s-calico--apiserver--c64f56595--lp5c4-eth0" Oct 27 08:29:22.148184 containerd[1635]: time="2025-10-27T08:29:22.148128443Z" level=info msg="connecting to shim 2ca556d04377670763abe43b30b534eefe1a21d5fc04f6b2cf855e6baee9a665" address="unix:///run/containerd/s/90d272432da4a3abfd1a6b25dba69fedb6698bab58d0ccd2430a309b6f049e87" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:29:22.164304 kubelet[2801]: E1027 08:29:22.164241 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:29:22.164670 kubelet[2801]: E1027 08:29:22.164412 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:29:22.165710 kubelet[2801]: E1027 08:29:22.165233 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vqr2m" podUID="8a6b3613-c6a1-49d5-80fe-fe16a86ca9d2" Oct 27 08:29:22.167400 kubelet[2801]: E1027 08:29:22.167367 2801 pod_workers.go:1301] "Error syncing pod, 
skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6585l" podUID="c11176f9-c15d-4aff-9a2f-9db19f9df938" Oct 27 08:29:22.170126 systemd[1]: Started cri-containerd-2ca556d04377670763abe43b30b534eefe1a21d5fc04f6b2cf855e6baee9a665.scope - libcontainer container 2ca556d04377670763abe43b30b534eefe1a21d5fc04f6b2cf855e6baee9a665. 
Oct 27 08:29:22.191791 systemd-resolved[1299]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 27 08:29:22.209393 kubelet[2801]: I1027 08:29:22.208908 2801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-4sbmc" podStartSLOduration=42.20888867 podStartE2EDuration="42.20888867s" podCreationTimestamp="2025-10-27 08:28:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 08:29:22.208379767 +0000 UTC m=+46.305258851" watchObservedRunningTime="2025-10-27 08:29:22.20888867 +0000 UTC m=+46.305767754" Oct 27 08:29:22.239510 containerd[1635]: time="2025-10-27T08:29:22.239445221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c64f56595-lp5c4,Uid:082222f6-7d6e-417e-ac7d-32f9df4dff89,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"2ca556d04377670763abe43b30b534eefe1a21d5fc04f6b2cf855e6baee9a665\"" Oct 27 08:29:22.240799 containerd[1635]: time="2025-10-27T08:29:22.240757858Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 27 08:29:22.586107 containerd[1635]: time="2025-10-27T08:29:22.586050974Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:29:22.587490 containerd[1635]: time="2025-10-27T08:29:22.587432377Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 27 08:29:22.587490 containerd[1635]: time="2025-10-27T08:29:22.587467075Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 27 08:29:22.587694 kubelet[2801]: E1027 08:29:22.587647 
2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:29:22.587748 kubelet[2801]: E1027 08:29:22.587702 2801 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:29:22.587960 kubelet[2801]: E1027 08:29:22.587903 2801 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7pf8r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-c64f56595-lp5c4_calico-apiserver(082222f6-7d6e-417e-ac7d-32f9df4dff89): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 27 08:29:22.589459 kubelet[2801]: E1027 08:29:22.589410 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c64f56595-lp5c4" podUID="082222f6-7d6e-417e-ac7d-32f9df4dff89" Oct 27 08:29:22.896127 systemd-networkd[1513]: cali6902f345245: Gained IPv6LL Oct 27 08:29:23.167730 kubelet[2801]: E1027 08:29:23.166883 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:29:23.167730 kubelet[2801]: E1027 08:29:23.166902 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:29:23.167730 kubelet[2801]: E1027 08:29:23.167203 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c64f56595-lp5c4" podUID="082222f6-7d6e-417e-ac7d-32f9df4dff89" Oct 27 08:29:23.281245 
systemd-networkd[1513]: calib6c7f903b9e: Gained IPv6LL Oct 27 08:29:24.168394 kubelet[2801]: E1027 08:29:24.168355 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:29:24.169668 kubelet[2801]: E1027 08:29:24.169599 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c64f56595-lp5c4" podUID="082222f6-7d6e-417e-ac7d-32f9df4dff89" Oct 27 08:29:29.312107 systemd[1]: Started sshd@7-10.0.0.103:22-10.0.0.1:39662.service - OpenSSH per-connection server daemon (10.0.0.1:39662). Oct 27 08:29:29.400376 sshd[4954]: Accepted publickey for core from 10.0.0.1 port 39662 ssh2: RSA SHA256:qPirkUcjN75oY8dUHO+4QhJKykg4rAWrvzikFQdbBAc Oct 27 08:29:29.402726 sshd-session[4954]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:29:29.408038 systemd-logind[1609]: New session 8 of user core. Oct 27 08:29:29.416239 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 27 08:29:29.560253 sshd[4958]: Connection closed by 10.0.0.1 port 39662 Oct 27 08:29:29.560562 sshd-session[4954]: pam_unix(sshd:session): session closed for user core Oct 27 08:29:29.565805 systemd[1]: sshd@7-10.0.0.103:22-10.0.0.1:39662.service: Deactivated successfully. Oct 27 08:29:29.568659 systemd[1]: session-8.scope: Deactivated successfully. Oct 27 08:29:29.569540 systemd-logind[1609]: Session 8 logged out. Waiting for processes to exit. 
Oct 27 08:29:29.570917 systemd-logind[1609]: Removed session 8. Oct 27 08:29:31.022087 containerd[1635]: time="2025-10-27T08:29:31.022020982Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 27 08:29:31.472491 containerd[1635]: time="2025-10-27T08:29:31.472322492Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:29:31.473540 containerd[1635]: time="2025-10-27T08:29:31.473484479Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 27 08:29:31.473630 containerd[1635]: time="2025-10-27T08:29:31.473579935Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 27 08:29:31.473764 kubelet[2801]: E1027 08:29:31.473703 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 27 08:29:31.474166 kubelet[2801]: E1027 08:29:31.473762 2801 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 27 08:29:31.474166 kubelet[2801]: E1027 08:29:31.473894 2801 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:59d5126a211343c59ed5040c1e1811de,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mp9kg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-cd8dbc6d4-qftjw_calico-system(b8851923-61d5-4c1d-bf80-827889da605a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 27 08:29:31.476974 containerd[1635]: time="2025-10-27T08:29:31.476729773Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 27 
08:29:31.849523 containerd[1635]: time="2025-10-27T08:29:31.849451752Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:29:31.850744 containerd[1635]: time="2025-10-27T08:29:31.850691932Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 27 08:29:31.850744 containerd[1635]: time="2025-10-27T08:29:31.850730467Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 27 08:29:31.851026 kubelet[2801]: E1027 08:29:31.850931 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 27 08:29:31.851115 kubelet[2801]: E1027 08:29:31.851022 2801 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 27 08:29:31.851241 kubelet[2801]: E1027 08:29:31.851181 2801 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mp9kg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-cd8dbc6d4-qftjw_calico-system(b8851923-61d5-4c1d-bf80-827889da605a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 27 08:29:31.852420 kubelet[2801]: E1027 08:29:31.852354 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-cd8dbc6d4-qftjw" podUID="b8851923-61d5-4c1d-bf80-827889da605a" Oct 27 08:29:32.012458 containerd[1635]: time="2025-10-27T08:29:32.012404421Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 27 08:29:32.405570 containerd[1635]: time="2025-10-27T08:29:32.405490429Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:29:32.406737 containerd[1635]: time="2025-10-27T08:29:32.406694827Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 27 08:29:32.406809 containerd[1635]: time="2025-10-27T08:29:32.406758120Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active 
requests=0, bytes read=85" Oct 27 08:29:32.406973 kubelet[2801]: E1027 08:29:32.406906 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 27 08:29:32.407034 kubelet[2801]: E1027 08:29:32.406983 2801 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 27 08:29:32.407190 kubelet[2801]: E1027 08:29:32.407134 2801 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8n9pp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-9b5b6f478-9qcvc_calico-system(58e52a6b-4ff7-410b-ae51-43b90491a215): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 27 08:29:32.408340 kubelet[2801]: E1027 08:29:32.408293 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-9b5b6f478-9qcvc" podUID="58e52a6b-4ff7-410b-ae51-43b90491a215" Oct 27 08:29:34.010986 containerd[1635]: time="2025-10-27T08:29:34.010690780Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 27 08:29:34.431501 containerd[1635]: 
time="2025-10-27T08:29:34.431332853Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:29:34.446422 containerd[1635]: time="2025-10-27T08:29:34.446305772Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 27 08:29:34.446600 containerd[1635]: time="2025-10-27T08:29:34.446343505Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 27 08:29:34.446706 kubelet[2801]: E1027 08:29:34.446624 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 27 08:29:34.447122 kubelet[2801]: E1027 08:29:34.446707 2801 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 27 08:29:34.447122 kubelet[2801]: E1027 08:29:34.446846 2801 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-prh88,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-6585l_calico-system(c11176f9-c15d-4aff-9a2f-9db19f9df938): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 27 08:29:34.449140 containerd[1635]: time="2025-10-27T08:29:34.449096242Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 27 08:29:34.580590 systemd[1]: Started sshd@8-10.0.0.103:22-10.0.0.1:46440.service - OpenSSH per-connection server daemon (10.0.0.1:46440). Oct 27 08:29:34.641380 sshd[4975]: Accepted publickey for core from 10.0.0.1 port 46440 ssh2: RSA SHA256:qPirkUcjN75oY8dUHO+4QhJKykg4rAWrvzikFQdbBAc Oct 27 08:29:34.642926 sshd-session[4975]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:29:34.647721 systemd-logind[1609]: New session 9 of user core. Oct 27 08:29:34.660165 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 27 08:29:34.824924 containerd[1635]: time="2025-10-27T08:29:34.824840820Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:29:34.993440 sshd[4978]: Connection closed by 10.0.0.1 port 46440 Oct 27 08:29:34.993997 containerd[1635]: time="2025-10-27T08:29:34.993463757Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 27 08:29:34.993997 containerd[1635]: time="2025-10-27T08:29:34.993476853Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 27 08:29:34.994101 kubelet[2801]: E1027 08:29:34.993743 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 27 08:29:34.994101 kubelet[2801]: E1027 08:29:34.993799 2801 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 27 08:29:34.994046 sshd-session[4975]: pam_unix(sshd:session): session closed for user core Oct 27 08:29:34.995224 kubelet[2801]: E1027 08:29:34.994896 2801 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-prh88,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-6585l_calico-system(c11176f9-c15d-4aff-9a2f-9db19f9df938): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 27 08:29:34.996545 kubelet[2801]: E1027 08:29:34.996444 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6585l" podUID="c11176f9-c15d-4aff-9a2f-9db19f9df938" Oct 27 08:29:35.002418 systemd[1]: sshd@8-10.0.0.103:22-10.0.0.1:46440.service: Deactivated successfully. Oct 27 08:29:35.004465 systemd[1]: session-9.scope: Deactivated successfully. Oct 27 08:29:35.005592 systemd-logind[1609]: Session 9 logged out. Waiting for processes to exit. Oct 27 08:29:35.007281 systemd-logind[1609]: Removed session 9. 
Oct 27 08:29:35.021333 containerd[1635]: time="2025-10-27T08:29:35.021098894Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 27 08:29:35.418124 containerd[1635]: time="2025-10-27T08:29:35.418050652Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:29:35.419184 containerd[1635]: time="2025-10-27T08:29:35.419151092Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 27 08:29:35.419308 containerd[1635]: time="2025-10-27T08:29:35.419244944Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 27 08:29:35.419465 kubelet[2801]: E1027 08:29:35.419414 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:29:35.419529 kubelet[2801]: E1027 08:29:35.419471 2801 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:29:35.419790 containerd[1635]: time="2025-10-27T08:29:35.419765767Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 27 08:29:35.419888 kubelet[2801]: E1027 08:29:35.419825 2801 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vkzjz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-c64f56595-lxn6t_calico-apiserver(9f02b196-86ab-47f3-85f7-4a69adbdcd03): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 27 08:29:35.421185 kubelet[2801]: E1027 08:29:35.421153 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c64f56595-lxn6t" podUID="9f02b196-86ab-47f3-85f7-4a69adbdcd03" Oct 27 08:29:35.927421 containerd[1635]: time="2025-10-27T08:29:35.927331947Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:29:35.928787 containerd[1635]: 
time="2025-10-27T08:29:35.928687403Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 27 08:29:35.928851 containerd[1635]: time="2025-10-27T08:29:35.928776857Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 27 08:29:35.929176 kubelet[2801]: E1027 08:29:35.928984 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 27 08:29:35.929571 kubelet[2801]: E1027 08:29:35.929176 2801 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 27 08:29:35.929571 kubelet[2801]: E1027 08:29:35.929325 2801 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qm2tw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-vqr2m_calico-system(8a6b3613-c6a1-49d5-80fe-fe16a86ca9d2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 27 08:29:35.930809 kubelet[2801]: E1027 08:29:35.930757 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vqr2m" podUID="8a6b3613-c6a1-49d5-80fe-fe16a86ca9d2" Oct 27 08:29:37.009910 containerd[1635]: time="2025-10-27T08:29:37.009854372Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 27 08:29:37.476353 containerd[1635]: time="2025-10-27T08:29:37.476201404Z" level=info msg="fetch failed after status: 404 
Not Found" host=ghcr.io Oct 27 08:29:37.639175 containerd[1635]: time="2025-10-27T08:29:37.639093590Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 27 08:29:37.639175 containerd[1635]: time="2025-10-27T08:29:37.639136263Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 27 08:29:37.662280 kubelet[2801]: E1027 08:29:37.662157 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:29:37.662280 kubelet[2801]: E1027 08:29:37.662247 2801 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:29:37.663866 kubelet[2801]: E1027 08:29:37.662387 2801 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7pf8r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-c64f56595-lp5c4_calico-apiserver(082222f6-7d6e-417e-ac7d-32f9df4dff89): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 27 08:29:37.664120 kubelet[2801]: E1027 08:29:37.664088 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c64f56595-lp5c4" podUID="082222f6-7d6e-417e-ac7d-32f9df4dff89" Oct 27 08:29:40.010953 systemd[1]: Started sshd@9-10.0.0.103:22-10.0.0.1:46448.service - OpenSSH per-connection server daemon (10.0.0.1:46448). Oct 27 08:29:40.075903 sshd[5000]: Accepted publickey for core from 10.0.0.1 port 46448 ssh2: RSA SHA256:qPirkUcjN75oY8dUHO+4QhJKykg4rAWrvzikFQdbBAc Oct 27 08:29:40.077555 sshd-session[5000]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:29:40.082628 systemd-logind[1609]: New session 10 of user core. Oct 27 08:29:40.091076 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 27 08:29:40.212365 sshd[5003]: Connection closed by 10.0.0.1 port 46448 Oct 27 08:29:40.212701 sshd-session[5000]: pam_unix(sshd:session): session closed for user core Oct 27 08:29:40.217754 systemd[1]: sshd@9-10.0.0.103:22-10.0.0.1:46448.service: Deactivated successfully. Oct 27 08:29:40.220015 systemd[1]: session-10.scope: Deactivated successfully. Oct 27 08:29:40.220808 systemd-logind[1609]: Session 10 logged out. Waiting for processes to exit. Oct 27 08:29:40.222190 systemd-logind[1609]: Removed session 10. 
Oct 27 08:29:44.010225 kubelet[2801]: E1027 08:29:44.010166 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-cd8dbc6d4-qftjw" podUID="b8851923-61d5-4c1d-bf80-827889da605a" Oct 27 08:29:45.010719 kubelet[2801]: E1027 08:29:45.010180 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-9b5b6f478-9qcvc" podUID="58e52a6b-4ff7-410b-ae51-43b90491a215" Oct 27 08:29:45.238218 systemd[1]: Started sshd@10-10.0.0.103:22-10.0.0.1:56990.service - OpenSSH per-connection server daemon (10.0.0.1:56990). 
Oct 27 08:29:45.293229 sshd[5021]: Accepted publickey for core from 10.0.0.1 port 56990 ssh2: RSA SHA256:qPirkUcjN75oY8dUHO+4QhJKykg4rAWrvzikFQdbBAc Oct 27 08:29:45.294629 sshd-session[5021]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:29:45.299807 systemd-logind[1609]: New session 11 of user core. Oct 27 08:29:45.309096 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 27 08:29:45.422638 sshd[5024]: Connection closed by 10.0.0.1 port 56990 Oct 27 08:29:45.423063 sshd-session[5021]: pam_unix(sshd:session): session closed for user core Oct 27 08:29:45.433466 systemd[1]: sshd@10-10.0.0.103:22-10.0.0.1:56990.service: Deactivated successfully. Oct 27 08:29:45.435728 systemd[1]: session-11.scope: Deactivated successfully. Oct 27 08:29:45.436592 systemd-logind[1609]: Session 11 logged out. Waiting for processes to exit. Oct 27 08:29:45.439974 systemd[1]: Started sshd@11-10.0.0.103:22-10.0.0.1:56996.service - OpenSSH per-connection server daemon (10.0.0.1:56996). Oct 27 08:29:45.440769 systemd-logind[1609]: Removed session 11. Oct 27 08:29:45.499051 sshd[5038]: Accepted publickey for core from 10.0.0.1 port 56996 ssh2: RSA SHA256:qPirkUcjN75oY8dUHO+4QhJKykg4rAWrvzikFQdbBAc Oct 27 08:29:45.500856 sshd-session[5038]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:29:45.506267 systemd-logind[1609]: New session 12 of user core. Oct 27 08:29:45.513143 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 27 08:29:45.783427 sshd[5042]: Connection closed by 10.0.0.1 port 56996 Oct 27 08:29:45.783822 sshd-session[5038]: pam_unix(sshd:session): session closed for user core Oct 27 08:29:45.794708 systemd[1]: sshd@11-10.0.0.103:22-10.0.0.1:56996.service: Deactivated successfully. Oct 27 08:29:45.797413 systemd[1]: session-12.scope: Deactivated successfully. Oct 27 08:29:45.798356 systemd-logind[1609]: Session 12 logged out. Waiting for processes to exit. 
Oct 27 08:29:45.802323 systemd[1]: Started sshd@12-10.0.0.103:22-10.0.0.1:57008.service - OpenSSH per-connection server daemon (10.0.0.1:57008). Oct 27 08:29:45.803006 systemd-logind[1609]: Removed session 12. Oct 27 08:29:45.879599 sshd[5053]: Accepted publickey for core from 10.0.0.1 port 57008 ssh2: RSA SHA256:qPirkUcjN75oY8dUHO+4QhJKykg4rAWrvzikFQdbBAc Oct 27 08:29:45.881450 sshd-session[5053]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:29:45.886141 systemd-logind[1609]: New session 13 of user core. Oct 27 08:29:45.898234 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 27 08:29:46.035307 sshd[5056]: Connection closed by 10.0.0.1 port 57008 Oct 27 08:29:46.037290 sshd-session[5053]: pam_unix(sshd:session): session closed for user core Oct 27 08:29:46.042573 systemd[1]: sshd@12-10.0.0.103:22-10.0.0.1:57008.service: Deactivated successfully. Oct 27 08:29:46.045763 systemd[1]: session-13.scope: Deactivated successfully. Oct 27 08:29:46.047364 systemd-logind[1609]: Session 13 logged out. Waiting for processes to exit. Oct 27 08:29:46.049843 systemd-logind[1609]: Removed session 13. 
Oct 27 08:29:48.010730 kubelet[2801]: E1027 08:29:48.010183 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:29:48.012112 kubelet[2801]: E1027 08:29:48.011612 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c64f56595-lxn6t" podUID="9f02b196-86ab-47f3-85f7-4a69adbdcd03" Oct 27 08:29:48.013324 kubelet[2801]: E1027 08:29:48.013248 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6585l" podUID="c11176f9-c15d-4aff-9a2f-9db19f9df938" Oct 27 08:29:49.010194 kubelet[2801]: E1027 
08:29:49.010136 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vqr2m" podUID="8a6b3613-c6a1-49d5-80fe-fe16a86ca9d2" Oct 27 08:29:49.242255 containerd[1635]: time="2025-10-27T08:29:49.242189385Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bb760f021b8f77d4360969eac8505b2f367d9f8d4f56571b03f7ffaaa3330ab0\" id:\"c4fea905b122a9051da53817ad9ce0dd2af6864d7ac79e98d94431dc41393135\" pid:5082 exited_at:{seconds:1761553789 nanos:241727795}" Oct 27 08:29:49.245474 kubelet[2801]: E1027 08:29:49.245420 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:29:51.010234 kubelet[2801]: E1027 08:29:51.010152 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c64f56595-lp5c4" podUID="082222f6-7d6e-417e-ac7d-32f9df4dff89" Oct 27 08:29:51.051811 systemd[1]: Started sshd@13-10.0.0.103:22-10.0.0.1:57010.service - OpenSSH per-connection server daemon (10.0.0.1:57010). 
Oct 27 08:29:51.121842 sshd[5101]: Accepted publickey for core from 10.0.0.1 port 57010 ssh2: RSA SHA256:qPirkUcjN75oY8dUHO+4QhJKykg4rAWrvzikFQdbBAc Oct 27 08:29:51.124058 sshd-session[5101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:29:51.128835 systemd-logind[1609]: New session 14 of user core. Oct 27 08:29:51.139113 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 27 08:29:51.268381 sshd[5104]: Connection closed by 10.0.0.1 port 57010 Oct 27 08:29:51.268574 sshd-session[5101]: pam_unix(sshd:session): session closed for user core Oct 27 08:29:51.272406 systemd[1]: sshd@13-10.0.0.103:22-10.0.0.1:57010.service: Deactivated successfully. Oct 27 08:29:51.274913 systemd[1]: session-14.scope: Deactivated successfully. Oct 27 08:29:51.277023 systemd-logind[1609]: Session 14 logged out. Waiting for processes to exit. Oct 27 08:29:51.278455 systemd-logind[1609]: Removed session 14. Oct 27 08:29:56.286531 systemd[1]: Started sshd@14-10.0.0.103:22-10.0.0.1:35690.service - OpenSSH per-connection server daemon (10.0.0.1:35690). Oct 27 08:29:56.340742 sshd[5118]: Accepted publickey for core from 10.0.0.1 port 35690 ssh2: RSA SHA256:qPirkUcjN75oY8dUHO+4QhJKykg4rAWrvzikFQdbBAc Oct 27 08:29:56.342521 sshd-session[5118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:29:56.347205 systemd-logind[1609]: New session 15 of user core. Oct 27 08:29:56.360100 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 27 08:29:56.476577 sshd[5122]: Connection closed by 10.0.0.1 port 35690 Oct 27 08:29:56.476873 sshd-session[5118]: pam_unix(sshd:session): session closed for user core Oct 27 08:29:56.482586 systemd[1]: sshd@14-10.0.0.103:22-10.0.0.1:35690.service: Deactivated successfully. Oct 27 08:29:56.484657 systemd[1]: session-15.scope: Deactivated successfully. Oct 27 08:29:56.485752 systemd-logind[1609]: Session 15 logged out. Waiting for processes to exit. 
Oct 27 08:29:56.487298 systemd-logind[1609]: Removed session 15. Oct 27 08:29:57.010033 containerd[1635]: time="2025-10-27T08:29:57.009929685Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 27 08:29:57.386414 containerd[1635]: time="2025-10-27T08:29:57.386241437Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:29:57.390086 containerd[1635]: time="2025-10-27T08:29:57.390012401Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 27 08:29:57.390144 containerd[1635]: time="2025-10-27T08:29:57.390084229Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 27 08:29:57.390371 kubelet[2801]: E1027 08:29:57.390309 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 27 08:29:57.390756 kubelet[2801]: E1027 08:29:57.390378 2801 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 27 08:29:57.390756 kubelet[2801]: E1027 08:29:57.390498 2801 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:59d5126a211343c59ed5040c1e1811de,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mp9kg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-cd8dbc6d4-qftjw_calico-system(b8851923-61d5-4c1d-bf80-827889da605a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 27 08:29:57.392425 containerd[1635]: time="2025-10-27T08:29:57.392386225Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 27 
08:29:57.778228 containerd[1635]: time="2025-10-27T08:29:57.778162654Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:29:57.779528 containerd[1635]: time="2025-10-27T08:29:57.779469391Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 27 08:29:57.779702 containerd[1635]: time="2025-10-27T08:29:57.779599461Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 27 08:29:57.779808 kubelet[2801]: E1027 08:29:57.779747 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 27 08:29:57.779871 kubelet[2801]: E1027 08:29:57.779810 2801 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 27 08:29:57.780048 kubelet[2801]: E1027 08:29:57.779974 2801 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mp9kg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-cd8dbc6d4-qftjw_calico-system(b8851923-61d5-4c1d-bf80-827889da605a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 27 08:29:57.781229 kubelet[2801]: E1027 08:29:57.781192 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-cd8dbc6d4-qftjw" podUID="b8851923-61d5-4c1d-bf80-827889da605a" Oct 27 08:29:59.010278 containerd[1635]: time="2025-10-27T08:29:59.010207256Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 27 08:29:59.487661 containerd[1635]: time="2025-10-27T08:29:59.487512629Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:29:59.620325 containerd[1635]: time="2025-10-27T08:29:59.620250998Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 27 08:29:59.620325 containerd[1635]: time="2025-10-27T08:29:59.620284321Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 27 08:29:59.620554 kubelet[2801]: E1027 08:29:59.620504 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 27 08:29:59.620981 kubelet[2801]: E1027 08:29:59.620567 2801 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 27 08:29:59.620981 kubelet[2801]: E1027 08:29:59.620705 2801 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8n9pp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-9b5b6f478-9qcvc_calico-system(58e52a6b-4ff7-410b-ae51-43b90491a215): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 27 08:29:59.621913 kubelet[2801]: E1027 08:29:59.621866 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-9b5b6f478-9qcvc" podUID="58e52a6b-4ff7-410b-ae51-43b90491a215" Oct 27 08:30:00.011258 containerd[1635]: time="2025-10-27T08:30:00.011178193Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 27 08:30:00.412708 containerd[1635]: 
time="2025-10-27T08:30:00.412537733Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:30:00.437470 containerd[1635]: time="2025-10-27T08:30:00.437419961Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 27 08:30:00.437559 containerd[1635]: time="2025-10-27T08:30:00.437456190Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 27 08:30:00.437769 kubelet[2801]: E1027 08:30:00.437708 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 27 08:30:00.437832 kubelet[2801]: E1027 08:30:00.437776 2801 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 27 08:30:00.438048 kubelet[2801]: E1027 08:30:00.437979 2801 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qm2tw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-vqr2m_calico-system(8a6b3613-c6a1-49d5-80fe-fe16a86ca9d2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 27 08:30:00.439203 kubelet[2801]: E1027 08:30:00.439159 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vqr2m" podUID="8a6b3613-c6a1-49d5-80fe-fe16a86ca9d2" Oct 27 08:30:01.009888 containerd[1635]: time="2025-10-27T08:30:01.009842898Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 27 08:30:01.399727 containerd[1635]: time="2025-10-27T08:30:01.399589161Z" level=info msg="fetch failed after status: 404 
Not Found" host=ghcr.io Oct 27 08:30:01.449587 containerd[1635]: time="2025-10-27T08:30:01.449524980Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 27 08:30:01.449713 containerd[1635]: time="2025-10-27T08:30:01.449600074Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 27 08:30:01.449796 kubelet[2801]: E1027 08:30:01.449752 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:30:01.450123 kubelet[2801]: E1027 08:30:01.449807 2801 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:30:01.450123 kubelet[2801]: E1027 08:30:01.450081 2801 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vkzjz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-c64f56595-lxn6t_calico-apiserver(9f02b196-86ab-47f3-85f7-4a69adbdcd03): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 27 08:30:01.450248 containerd[1635]: time="2025-10-27T08:30:01.450161529Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 27 08:30:01.451722 kubelet[2801]: E1027 08:30:01.451656 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c64f56595-lxn6t" podUID="9f02b196-86ab-47f3-85f7-4a69adbdcd03" Oct 27 08:30:01.490220 systemd[1]: Started sshd@15-10.0.0.103:22-10.0.0.1:35694.service - OpenSSH per-connection server daemon (10.0.0.1:35694). Oct 27 08:30:01.578924 sshd[5142]: Accepted publickey for core from 10.0.0.1 port 35694 ssh2: RSA SHA256:qPirkUcjN75oY8dUHO+4QhJKykg4rAWrvzikFQdbBAc Oct 27 08:30:01.580723 sshd-session[5142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:30:01.587023 systemd-logind[1609]: New session 16 of user core. Oct 27 08:30:01.592202 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 27 08:30:01.718859 sshd[5145]: Connection closed by 10.0.0.1 port 35694 Oct 27 08:30:01.719201 sshd-session[5142]: pam_unix(sshd:session): session closed for user core Oct 27 08:30:01.726042 systemd[1]: sshd@15-10.0.0.103:22-10.0.0.1:35694.service: Deactivated successfully. Oct 27 08:30:01.728292 systemd[1]: session-16.scope: Deactivated successfully. Oct 27 08:30:01.729334 systemd-logind[1609]: Session 16 logged out. Waiting for processes to exit. 
Oct 27 08:30:01.730736 systemd-logind[1609]: Removed session 16. Oct 27 08:30:01.882018 containerd[1635]: time="2025-10-27T08:30:01.881923629Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:30:02.111638 containerd[1635]: time="2025-10-27T08:30:02.111586975Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 27 08:30:02.111793 containerd[1635]: time="2025-10-27T08:30:02.111655927Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 27 08:30:02.111825 kubelet[2801]: E1027 08:30:02.111783 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 27 08:30:02.112023 kubelet[2801]: E1027 08:30:02.111830 2801 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 27 08:30:02.112023 kubelet[2801]: E1027 08:30:02.111993 2801 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-prh88,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-6585l_calico-system(c11176f9-c15d-4aff-9a2f-9db19f9df938): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 27 08:30:02.114721 containerd[1635]: time="2025-10-27T08:30:02.114674066Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 27 08:30:02.647180 containerd[1635]: time="2025-10-27T08:30:02.647121025Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:30:02.648472 containerd[1635]: time="2025-10-27T08:30:02.648436524Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 27 08:30:02.648532 containerd[1635]: time="2025-10-27T08:30:02.648513161Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 27 08:30:02.648693 kubelet[2801]: E1027 08:30:02.648650 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 27 08:30:02.648980 kubelet[2801]: E1027 08:30:02.648706 2801 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 27 08:30:02.648980 kubelet[2801]: E1027 
08:30:02.648842 2801 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-prh88,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-6585l_calico-system(c11176f9-c15d-4aff-9a2f-9db19f9df938): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 27 08:30:02.651075 kubelet[2801]: E1027 08:30:02.651012 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6585l" podUID="c11176f9-c15d-4aff-9a2f-9db19f9df938" Oct 27 08:30:04.010472 containerd[1635]: time="2025-10-27T08:30:04.010190641Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 27 08:30:04.766470 containerd[1635]: time="2025-10-27T08:30:04.766407699Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:30:04.767619 containerd[1635]: time="2025-10-27T08:30:04.767582588Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 27 08:30:04.767691 
containerd[1635]: time="2025-10-27T08:30:04.767663382Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 27 08:30:04.767875 kubelet[2801]: E1027 08:30:04.767822 2801 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:30:04.768238 kubelet[2801]: E1027 08:30:04.767882 2801 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:30:04.768238 kubelet[2801]: E1027 08:30:04.768080 2801 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7pf8r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-c64f56595-lp5c4_calico-apiserver(082222f6-7d6e-417e-ac7d-32f9df4dff89): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 27 08:30:04.769296 kubelet[2801]: E1027 08:30:04.769269 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c64f56595-lp5c4" podUID="082222f6-7d6e-417e-ac7d-32f9df4dff89" Oct 27 08:30:06.009326 kubelet[2801]: E1027 08:30:06.009275 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:30:06.732100 systemd[1]: Started sshd@16-10.0.0.103:22-10.0.0.1:58892.service - OpenSSH per-connection server daemon (10.0.0.1:58892). Oct 27 08:30:06.794125 sshd[5161]: Accepted publickey for core from 10.0.0.1 port 58892 ssh2: RSA SHA256:qPirkUcjN75oY8dUHO+4QhJKykg4rAWrvzikFQdbBAc Oct 27 08:30:06.795585 sshd-session[5161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:30:06.799989 systemd-logind[1609]: New session 17 of user core. Oct 27 08:30:06.811099 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 27 08:30:06.937113 sshd[5164]: Connection closed by 10.0.0.1 port 58892 Oct 27 08:30:06.937606 sshd-session[5161]: pam_unix(sshd:session): session closed for user core Oct 27 08:30:06.946915 systemd[1]: sshd@16-10.0.0.103:22-10.0.0.1:58892.service: Deactivated successfully. Oct 27 08:30:06.949235 systemd[1]: session-17.scope: Deactivated successfully. 
Oct 27 08:30:06.950235 systemd-logind[1609]: Session 17 logged out. Waiting for processes to exit. Oct 27 08:30:06.953556 systemd[1]: Started sshd@17-10.0.0.103:22-10.0.0.1:58900.service - OpenSSH per-connection server daemon (10.0.0.1:58900). Oct 27 08:30:06.954478 systemd-logind[1609]: Removed session 17. Oct 27 08:30:07.024650 sshd[5177]: Accepted publickey for core from 10.0.0.1 port 58900 ssh2: RSA SHA256:qPirkUcjN75oY8dUHO+4QhJKykg4rAWrvzikFQdbBAc Oct 27 08:30:07.026176 sshd-session[5177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:30:07.030907 systemd-logind[1609]: New session 18 of user core. Oct 27 08:30:07.038114 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 27 08:30:07.296331 sshd[5180]: Connection closed by 10.0.0.1 port 58900 Oct 27 08:30:07.296852 sshd-session[5177]: pam_unix(sshd:session): session closed for user core Oct 27 08:30:07.309174 systemd[1]: sshd@17-10.0.0.103:22-10.0.0.1:58900.service: Deactivated successfully. Oct 27 08:30:07.311281 systemd[1]: session-18.scope: Deactivated successfully. Oct 27 08:30:07.312235 systemd-logind[1609]: Session 18 logged out. Waiting for processes to exit. Oct 27 08:30:07.315246 systemd[1]: Started sshd@18-10.0.0.103:22-10.0.0.1:58906.service - OpenSSH per-connection server daemon (10.0.0.1:58906). Oct 27 08:30:07.316236 systemd-logind[1609]: Removed session 18. Oct 27 08:30:07.368741 sshd[5191]: Accepted publickey for core from 10.0.0.1 port 58906 ssh2: RSA SHA256:qPirkUcjN75oY8dUHO+4QhJKykg4rAWrvzikFQdbBAc Oct 27 08:30:07.370416 sshd-session[5191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:30:07.375081 systemd-logind[1609]: New session 19 of user core. Oct 27 08:30:07.394057 systemd[1]: Started session-19.scope - Session 19 of User core. 
Oct 27 08:30:07.885826 sshd[5194]: Connection closed by 10.0.0.1 port 58906 Oct 27 08:30:07.886345 sshd-session[5191]: pam_unix(sshd:session): session closed for user core Oct 27 08:30:07.898105 systemd[1]: sshd@18-10.0.0.103:22-10.0.0.1:58906.service: Deactivated successfully. Oct 27 08:30:07.904493 systemd[1]: session-19.scope: Deactivated successfully. Oct 27 08:30:07.905563 systemd-logind[1609]: Session 19 logged out. Waiting for processes to exit. Oct 27 08:30:07.909507 systemd[1]: Started sshd@19-10.0.0.103:22-10.0.0.1:58910.service - OpenSSH per-connection server daemon (10.0.0.1:58910). Oct 27 08:30:07.913066 systemd-logind[1609]: Removed session 19. Oct 27 08:30:07.958665 sshd[5213]: Accepted publickey for core from 10.0.0.1 port 58910 ssh2: RSA SHA256:qPirkUcjN75oY8dUHO+4QhJKykg4rAWrvzikFQdbBAc Oct 27 08:30:07.960350 sshd-session[5213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:30:07.964862 systemd-logind[1609]: New session 20 of user core. Oct 27 08:30:07.973087 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 27 08:30:08.008613 kubelet[2801]: E1027 08:30:08.008514 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:30:08.191027 sshd[5216]: Connection closed by 10.0.0.1 port 58910 Oct 27 08:30:08.192575 sshd-session[5213]: pam_unix(sshd:session): session closed for user core Oct 27 08:30:08.203154 systemd[1]: sshd@19-10.0.0.103:22-10.0.0.1:58910.service: Deactivated successfully. Oct 27 08:30:08.205566 systemd[1]: session-20.scope: Deactivated successfully. Oct 27 08:30:08.207034 systemd-logind[1609]: Session 20 logged out. Waiting for processes to exit. Oct 27 08:30:08.209259 systemd[1]: Started sshd@20-10.0.0.103:22-10.0.0.1:58920.service - OpenSSH per-connection server daemon (10.0.0.1:58920). 
Oct 27 08:30:08.210101 systemd-logind[1609]: Removed session 20. Oct 27 08:30:08.269069 sshd[5228]: Accepted publickey for core from 10.0.0.1 port 58920 ssh2: RSA SHA256:qPirkUcjN75oY8dUHO+4QhJKykg4rAWrvzikFQdbBAc Oct 27 08:30:08.271757 sshd-session[5228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:30:08.278355 systemd-logind[1609]: New session 21 of user core. Oct 27 08:30:08.290217 systemd[1]: Started session-21.scope - Session 21 of User core. Oct 27 08:30:08.573854 sshd[5231]: Connection closed by 10.0.0.1 port 58920 Oct 27 08:30:08.574229 sshd-session[5228]: pam_unix(sshd:session): session closed for user core Oct 27 08:30:08.579583 systemd[1]: sshd@20-10.0.0.103:22-10.0.0.1:58920.service: Deactivated successfully. Oct 27 08:30:08.581574 systemd[1]: session-21.scope: Deactivated successfully. Oct 27 08:30:08.582475 systemd-logind[1609]: Session 21 logged out. Waiting for processes to exit. Oct 27 08:30:08.583707 systemd-logind[1609]: Removed session 21. 
Oct 27 08:30:11.010343 kubelet[2801]: E1027 08:30:11.010257 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-cd8dbc6d4-qftjw" podUID="b8851923-61d5-4c1d-bf80-827889da605a" Oct 27 08:30:13.009423 kubelet[2801]: E1027 08:30:13.009300 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:30:13.010202 kubelet[2801]: E1027 08:30:13.010146 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vqr2m" podUID="8a6b3613-c6a1-49d5-80fe-fe16a86ca9d2" Oct 27 08:30:13.588163 systemd[1]: Started sshd@21-10.0.0.103:22-10.0.0.1:57052.service - OpenSSH 
per-connection server daemon (10.0.0.1:57052). Oct 27 08:30:13.647653 sshd[5248]: Accepted publickey for core from 10.0.0.1 port 57052 ssh2: RSA SHA256:qPirkUcjN75oY8dUHO+4QhJKykg4rAWrvzikFQdbBAc Oct 27 08:30:13.649227 sshd-session[5248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:30:13.653499 systemd-logind[1609]: New session 22 of user core. Oct 27 08:30:13.663062 systemd[1]: Started session-22.scope - Session 22 of User core. Oct 27 08:30:13.776574 sshd[5251]: Connection closed by 10.0.0.1 port 57052 Oct 27 08:30:13.776988 sshd-session[5248]: pam_unix(sshd:session): session closed for user core Oct 27 08:30:13.782294 systemd[1]: sshd@21-10.0.0.103:22-10.0.0.1:57052.service: Deactivated successfully. Oct 27 08:30:13.784508 systemd[1]: session-22.scope: Deactivated successfully. Oct 27 08:30:13.785507 systemd-logind[1609]: Session 22 logged out. Waiting for processes to exit. Oct 27 08:30:13.787560 systemd-logind[1609]: Removed session 22. Oct 27 08:30:14.010298 kubelet[2801]: E1027 08:30:14.009899 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c64f56595-lxn6t" podUID="9f02b196-86ab-47f3-85f7-4a69adbdcd03" Oct 27 08:30:14.010298 kubelet[2801]: E1027 08:30:14.010238 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack 
image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-9b5b6f478-9qcvc" podUID="58e52a6b-4ff7-410b-ae51-43b90491a215" Oct 27 08:30:17.009368 kubelet[2801]: E1027 08:30:17.009262 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6585l" podUID="c11176f9-c15d-4aff-9a2f-9db19f9df938" Oct 27 08:30:18.790395 systemd[1]: Started sshd@22-10.0.0.103:22-10.0.0.1:57064.service - OpenSSH per-connection server daemon (10.0.0.1:57064). Oct 27 08:30:18.845397 sshd[5265]: Accepted publickey for core from 10.0.0.1 port 57064 ssh2: RSA SHA256:qPirkUcjN75oY8dUHO+4QhJKykg4rAWrvzikFQdbBAc Oct 27 08:30:18.847144 sshd-session[5265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:30:18.851433 systemd-logind[1609]: New session 23 of user core. Oct 27 08:30:18.861086 systemd[1]: Started session-23.scope - Session 23 of User core. 
Oct 27 08:30:18.965724 sshd[5268]: Connection closed by 10.0.0.1 port 57064 Oct 27 08:30:18.966067 sshd-session[5265]: pam_unix(sshd:session): session closed for user core Oct 27 08:30:18.970577 systemd[1]: sshd@22-10.0.0.103:22-10.0.0.1:57064.service: Deactivated successfully. Oct 27 08:30:18.972900 systemd[1]: session-23.scope: Deactivated successfully. Oct 27 08:30:18.973959 systemd-logind[1609]: Session 23 logged out. Waiting for processes to exit. Oct 27 08:30:18.975473 systemd-logind[1609]: Removed session 23. Oct 27 08:30:19.009306 kubelet[2801]: E1027 08:30:19.009258 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c64f56595-lp5c4" podUID="082222f6-7d6e-417e-ac7d-32f9df4dff89" Oct 27 08:30:19.223670 containerd[1635]: time="2025-10-27T08:30:19.223531400Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bb760f021b8f77d4360969eac8505b2f367d9f8d4f56571b03f7ffaaa3330ab0\" id:\"36814432bd8bec639cf154e26f86517b6e27fda79dc7cf662d7b04d808094dcf\" pid:5292 exited_at:{seconds:1761553819 nanos:223165673}" Oct 27 08:30:23.980418 systemd[1]: Started sshd@23-10.0.0.103:22-10.0.0.1:44486.service - OpenSSH per-connection server daemon (10.0.0.1:44486). 
Oct 27 08:30:24.011270 kubelet[2801]: E1027 08:30:24.011191 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-cd8dbc6d4-qftjw" podUID="b8851923-61d5-4c1d-bf80-827889da605a" Oct 27 08:30:24.058611 sshd[5307]: Accepted publickey for core from 10.0.0.1 port 44486 ssh2: RSA SHA256:qPirkUcjN75oY8dUHO+4QhJKykg4rAWrvzikFQdbBAc Oct 27 08:30:24.060732 sshd-session[5307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:30:24.066542 systemd-logind[1609]: New session 24 of user core. Oct 27 08:30:24.078276 systemd[1]: Started session-24.scope - Session 24 of User core. Oct 27 08:30:24.250564 sshd[5310]: Connection closed by 10.0.0.1 port 44486 Oct 27 08:30:24.250930 sshd-session[5307]: pam_unix(sshd:session): session closed for user core Oct 27 08:30:24.259690 systemd[1]: sshd@23-10.0.0.103:22-10.0.0.1:44486.service: Deactivated successfully. Oct 27 08:30:24.261860 systemd[1]: session-24.scope: Deactivated successfully. Oct 27 08:30:24.262859 systemd-logind[1609]: Session 24 logged out. Waiting for processes to exit. 
Oct 27 08:30:24.264836 systemd-logind[1609]: Removed session 24. Oct 27 08:30:25.009241 kubelet[2801]: E1027 08:30:25.008921 2801 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:30:25.009960 kubelet[2801]: E1027 08:30:25.009863 2801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-9b5b6f478-9qcvc" podUID="58e52a6b-4ff7-410b-ae51-43b90491a215"