Oct 29 00:43:26.579101 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Oct 28 22:31:02 -00 2025
Oct 29 00:43:26.579129 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=54ef1c344b2a47697b32f3227bd37f41d37acb1889c1eaea33b22ce408b7b3ae
Oct 29 00:43:26.579138 kernel: BIOS-provided physical RAM map:
Oct 29 00:43:26.579145 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000002ffff] usable
Oct 29 00:43:26.579152 kernel: BIOS-e820: [mem 0x0000000000030000-0x000000000004ffff] reserved
Oct 29 00:43:26.579161 kernel: BIOS-e820: [mem 0x0000000000050000-0x000000000009efff] usable
Oct 29 00:43:26.579169 kernel: BIOS-e820: [mem 0x000000000009f000-0x000000000009ffff] reserved
Oct 29 00:43:26.579176 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009b8ecfff] usable
Oct 29 00:43:26.579187 kernel: BIOS-e820: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
Oct 29 00:43:26.579194 kernel: BIOS-e820: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
Oct 29 00:43:26.579201 kernel: BIOS-e820: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
Oct 29 00:43:26.579208 kernel: BIOS-e820: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
Oct 29 00:43:26.579215 kernel: BIOS-e820: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
Oct 29 00:43:26.579225 kernel: BIOS-e820: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
Oct 29 00:43:26.579233 kernel: BIOS-e820: [mem 0x000000009bfb7000-0x000000009bffffff] usable
Oct 29 00:43:26.579241 kernel: BIOS-e820: [mem 0x000000009c000000-0x000000009cffffff] reserved
Oct 29 00:43:26.579251 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Oct 29 00:43:26.579261 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 29 00:43:26.579268 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Oct 29 00:43:26.579275 kernel: NX (Execute Disable) protection: active
Oct 29 00:43:26.579283 kernel: APIC: Static calls initialized
Oct 29 00:43:26.579290 kernel: e820: update [mem 0x9a13d018-0x9a146c57] usable ==> usable
Oct 29 00:43:26.579298 kernel: e820: update [mem 0x9a100018-0x9a13ce57] usable ==> usable
Oct 29 00:43:26.579305 kernel: extended physical RAM map:
Oct 29 00:43:26.579313 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000002ffff] usable
Oct 29 00:43:26.579320 kernel: reserve setup_data: [mem 0x0000000000030000-0x000000000004ffff] reserved
Oct 29 00:43:26.579327 kernel: reserve setup_data: [mem 0x0000000000050000-0x000000000009efff] usable
Oct 29 00:43:26.579335 kernel: reserve setup_data: [mem 0x000000000009f000-0x000000000009ffff] reserved
Oct 29 00:43:26.579344 kernel: reserve setup_data: [mem 0x0000000000100000-0x000000009a100017] usable
Oct 29 00:43:26.579352 kernel: reserve setup_data: [mem 0x000000009a100018-0x000000009a13ce57] usable
Oct 29 00:43:26.579359 kernel: reserve setup_data: [mem 0x000000009a13ce58-0x000000009a13d017] usable
Oct 29 00:43:26.579366 kernel: reserve setup_data: [mem 0x000000009a13d018-0x000000009a146c57] usable
Oct 29 00:43:26.579374 kernel: reserve setup_data: [mem 0x000000009a146c58-0x000000009b8ecfff] usable
Oct 29 00:43:26.579381 kernel: reserve setup_data: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
Oct 29 00:43:26.579389 kernel: reserve setup_data: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
Oct 29 00:43:26.579396 kernel: reserve setup_data: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
Oct 29 00:43:26.579403 kernel: reserve setup_data: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
Oct 29 00:43:26.579411 kernel: reserve setup_data: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
Oct 29 00:43:26.579420 kernel: reserve setup_data: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
Oct 29 00:43:26.579428 kernel: reserve setup_data: [mem 0x000000009bfb7000-0x000000009bffffff] usable
Oct 29 00:43:26.579439 kernel: reserve setup_data: [mem 0x000000009c000000-0x000000009cffffff] reserved
Oct 29 00:43:26.579447 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Oct 29 00:43:26.579454 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 29 00:43:26.579465 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Oct 29 00:43:26.579472 kernel: efi: EFI v2.7 by EDK II
Oct 29 00:43:26.579480 kernel: efi: SMBIOS=0x9b9d5000 ACPI=0x9bb7e000 ACPI 2.0=0x9bb7e014 MEMATTR=0x9a1af018 RNG=0x9bb73018
Oct 29 00:43:26.579488 kernel: random: crng init done
Oct 29 00:43:26.579496 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
Oct 29 00:43:26.579504 kernel: secureboot: Secure boot enabled
Oct 29 00:43:26.579519 kernel: SMBIOS 2.8 present.
Oct 29 00:43:26.579527 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Oct 29 00:43:26.579535 kernel: DMI: Memory slots populated: 1/1
Oct 29 00:43:26.579545 kernel: Hypervisor detected: KVM
Oct 29 00:43:26.579553 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000
Oct 29 00:43:26.579561 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 29 00:43:26.579569 kernel: kvm-clock: using sched offset of 6137334701 cycles
Oct 29 00:43:26.579577 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 29 00:43:26.579585 kernel: tsc: Detected 2794.748 MHz processor
Oct 29 00:43:26.579594 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 29 00:43:26.579602 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 29 00:43:26.579610 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000
Oct 29 00:43:26.579623 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Oct 29 00:43:26.579633 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Oct 29 00:43:26.579644 kernel: Using GB pages for direct mapping
Oct 29 00:43:26.579652 kernel: ACPI: Early table checksum verification disabled
Oct 29 00:43:26.579660 kernel: ACPI: RSDP 0x000000009BB7E014 000024 (v02 BOCHS )
Oct 29 00:43:26.579668 kernel: ACPI: XSDT 0x000000009BB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Oct 29 00:43:26.579676 kernel: ACPI: FACP 0x000000009BB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Oct 29 00:43:26.579687 kernel: ACPI: DSDT 0x000000009BB7A000 002237 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 29 00:43:26.579695 kernel: ACPI: FACS 0x000000009BBDD000 000040
Oct 29 00:43:26.579703 kernel: ACPI: APIC 0x000000009BB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 29 00:43:26.579711 kernel: ACPI: HPET 0x000000009BB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 29 00:43:26.579719 kernel: ACPI: MCFG 0x000000009BB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 29 00:43:26.579728 kernel: ACPI: WAET 0x000000009BB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 29 00:43:26.579736 kernel: ACPI: BGRT 0x000000009BB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Oct 29 00:43:26.579746 kernel: ACPI: Reserving FACP table memory at [mem 0x9bb79000-0x9bb790f3]
Oct 29 00:43:26.579754 kernel: ACPI: Reserving DSDT table memory at [mem 0x9bb7a000-0x9bb7c236]
Oct 29 00:43:26.579762 kernel: ACPI: Reserving FACS table memory at [mem 0x9bbdd000-0x9bbdd03f]
Oct 29 00:43:26.579770 kernel: ACPI: Reserving APIC table memory at [mem 0x9bb78000-0x9bb7808f]
Oct 29 00:43:26.579778 kernel: ACPI: Reserving HPET table memory at [mem 0x9bb77000-0x9bb77037]
Oct 29 00:43:26.579786 kernel: ACPI: Reserving MCFG table memory at [mem 0x9bb76000-0x9bb7603b]
Oct 29 00:43:26.579794 kernel: ACPI: Reserving WAET table memory at [mem 0x9bb75000-0x9bb75027]
Oct 29 00:43:26.579802 kernel: ACPI: Reserving BGRT table memory at [mem 0x9bb74000-0x9bb74037]
Oct 29 00:43:26.579812 kernel: No NUMA configuration found
Oct 29 00:43:26.579820 kernel: Faking a node at [mem 0x0000000000000000-0x000000009bffffff]
Oct 29 00:43:26.579829 kernel: NODE_DATA(0) allocated [mem 0x9bf57dc0-0x9bf5efff]
Oct 29 00:43:26.579837 kernel: Zone ranges:
Oct 29 00:43:26.579845 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Oct 29 00:43:26.579853 kernel: DMA32 [mem 0x0000000001000000-0x000000009bffffff]
Oct 29 00:43:26.579861 kernel: Normal empty
Oct 29 00:43:26.579885 kernel: Device empty
Oct 29 00:43:26.579893 kernel: Movable zone start for each node
Oct 29 00:43:26.579901 kernel: Early memory node ranges
Oct 29 00:43:26.579909 kernel: node 0: [mem 0x0000000000001000-0x000000000002ffff]
Oct 29 00:43:26.579917 kernel: node 0: [mem 0x0000000000050000-0x000000000009efff]
Oct 29 00:43:26.579925 kernel: node 0: [mem 0x0000000000100000-0x000000009b8ecfff]
Oct 29 00:43:26.579933 kernel: node 0: [mem 0x000000009bbff000-0x000000009bfb0fff]
Oct 29 00:43:26.579941 kernel: node 0: [mem 0x000000009bfb7000-0x000000009bffffff]
Oct 29 00:43:26.579952 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009bffffff]
Oct 29 00:43:26.579960 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 29 00:43:26.579968 kernel: On node 0, zone DMA: 32 pages in unavailable ranges
Oct 29 00:43:26.579976 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct 29 00:43:26.579984 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Oct 29 00:43:26.579992 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Oct 29 00:43:26.580000 kernel: On node 0, zone DMA32: 16384 pages in unavailable ranges
Oct 29 00:43:26.580010 kernel: ACPI: PM-Timer IO Port: 0x608
Oct 29 00:43:26.580019 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 29 00:43:26.580027 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 29 00:43:26.580036 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 29 00:43:26.580048 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 29 00:43:26.580056 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 29 00:43:26.580064 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 29 00:43:26.580075 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 29 00:43:26.580083 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 29 00:43:26.580091 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Oct 29 00:43:26.580099 kernel: TSC deadline timer available
Oct 29 00:43:26.580107 kernel: CPU topo: Max. logical packages: 1
Oct 29 00:43:26.580115 kernel: CPU topo: Max. logical dies: 1
Oct 29 00:43:26.580132 kernel: CPU topo: Max. dies per package: 1
Oct 29 00:43:26.580141 kernel: CPU topo: Max. threads per core: 1
Oct 29 00:43:26.580149 kernel: CPU topo: Num. cores per package: 4
Oct 29 00:43:26.580159 kernel: CPU topo: Num. threads per package: 4
Oct 29 00:43:26.580170 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Oct 29 00:43:26.580178 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct 29 00:43:26.580187 kernel: kvm-guest: KVM setup pv remote TLB flush
Oct 29 00:43:26.580195 kernel: kvm-guest: setup PV sched yield
Oct 29 00:43:26.580205 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Oct 29 00:43:26.580214 kernel: Booting paravirtualized kernel on KVM
Oct 29 00:43:26.580222 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 29 00:43:26.580231 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Oct 29 00:43:26.580239 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Oct 29 00:43:26.580247 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Oct 29 00:43:26.580255 kernel: pcpu-alloc: [0] 0 1 2 3
Oct 29 00:43:26.580265 kernel: kvm-guest: PV spinlocks enabled
Oct 29 00:43:26.580274 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Oct 29 00:43:26.580283 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=54ef1c344b2a47697b32f3227bd37f41d37acb1889c1eaea33b22ce408b7b3ae
Oct 29 00:43:26.580292 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 29 00:43:26.580301 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 29 00:43:26.580309 kernel: Fallback order for Node 0: 0
Oct 29 00:43:26.580320 kernel: Built 1 zonelists, mobility grouping on. Total pages: 638054
Oct 29 00:43:26.580328 kernel: Policy zone: DMA32
Oct 29 00:43:26.580336 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 29 00:43:26.580345 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Oct 29 00:43:26.580353 kernel: ftrace: allocating 40092 entries in 157 pages
Oct 29 00:43:26.580361 kernel: ftrace: allocated 157 pages with 5 groups
Oct 29 00:43:26.580370 kernel: Dynamic Preempt: voluntary
Oct 29 00:43:26.580380 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 29 00:43:26.580389 kernel: rcu: RCU event tracing is enabled.
Oct 29 00:43:26.580397 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Oct 29 00:43:26.580406 kernel: Trampoline variant of Tasks RCU enabled.
Oct 29 00:43:26.580414 kernel: Rude variant of Tasks RCU enabled.
Oct 29 00:43:26.580423 kernel: Tracing variant of Tasks RCU enabled.
Oct 29 00:43:26.580431 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 29 00:43:26.580439 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Oct 29 00:43:26.580450 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 29 00:43:26.580459 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 29 00:43:26.580470 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 29 00:43:26.580478 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Oct 29 00:43:26.580487 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 29 00:43:26.580495 kernel: Console: colour dummy device 80x25
Oct 29 00:43:26.580506 kernel: printk: legacy console [ttyS0] enabled
Oct 29 00:43:26.580521 kernel: ACPI: Core revision 20240827
Oct 29 00:43:26.580529 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Oct 29 00:43:26.580538 kernel: APIC: Switch to symmetric I/O mode setup
Oct 29 00:43:26.580546 kernel: x2apic enabled
Oct 29 00:43:26.580555 kernel: APIC: Switched APIC routing to: physical x2apic
Oct 29 00:43:26.580563 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Oct 29 00:43:26.580572 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Oct 29 00:43:26.580582 kernel: kvm-guest: setup PV IPIs
Oct 29 00:43:26.580590 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Oct 29 00:43:26.580599 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Oct 29 00:43:26.580608 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Oct 29 00:43:26.580616 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct 29 00:43:26.580624 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct 29 00:43:26.580633 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct 29 00:43:26.580643 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 29 00:43:26.580654 kernel: Spectre V2 : Mitigation: Retpolines
Oct 29 00:43:26.580662 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Oct 29 00:43:26.580670 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Oct 29 00:43:26.580679 kernel: active return thunk: retbleed_return_thunk
Oct 29 00:43:26.580687 kernel: RETBleed: Mitigation: untrained return thunk
Oct 29 00:43:26.580695 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 29 00:43:26.580706 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 29 00:43:26.580715 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Oct 29 00:43:26.580724 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Oct 29 00:43:26.580732 kernel: active return thunk: srso_return_thunk
Oct 29 00:43:26.580741 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Oct 29 00:43:26.580749 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 29 00:43:26.580759 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 29 00:43:26.580768 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 29 00:43:26.580776 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Oct 29 00:43:26.580784 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Oct 29 00:43:26.580793 kernel: Freeing SMP alternatives memory: 32K
Oct 29 00:43:26.580801 kernel: pid_max: default: 32768 minimum: 301
Oct 29 00:43:26.580809 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Oct 29 00:43:26.580820 kernel: landlock: Up and running.
Oct 29 00:43:26.580828 kernel: SELinux: Initializing.
Oct 29 00:43:26.580836 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 29 00:43:26.580845 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 29 00:43:26.580853 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Oct 29 00:43:26.580861 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct 29 00:43:26.580884 kernel: ... version:                0
Oct 29 00:43:26.580897 kernel: ... bit width:              48
Oct 29 00:43:26.580906 kernel: ... generic registers:      6
Oct 29 00:43:26.580914 kernel: ... value mask:             0000ffffffffffff
Oct 29 00:43:26.580922 kernel: ... max period:             00007fffffffffff
Oct 29 00:43:26.580931 kernel: ... fixed-purpose events:   0
Oct 29 00:43:26.580939 kernel: ... event mask:             000000000000003f
Oct 29 00:43:26.580947 kernel: signal: max sigframe size: 1776
Oct 29 00:43:26.580955 kernel: rcu: Hierarchical SRCU implementation.
Oct 29 00:43:26.580966 kernel: rcu: Max phase no-delay instances is 400.
Oct 29 00:43:26.580974 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Oct 29 00:43:26.580983 kernel: smp: Bringing up secondary CPUs ...
Oct 29 00:43:26.580991 kernel: smpboot: x86: Booting SMP configuration:
Oct 29 00:43:26.580999 kernel: .... node #0, CPUs: #1 #2 #3
Oct 29 00:43:26.581007 kernel: smp: Brought up 1 node, 4 CPUs
Oct 29 00:43:26.581015 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Oct 29 00:43:26.581027 kernel: Memory: 2431744K/2552216K available (14336K kernel code, 2443K rwdata, 26064K rodata, 15964K init, 2080K bss, 114536K reserved, 0K cma-reserved)
Oct 29 00:43:26.581035 kernel: devtmpfs: initialized
Oct 29 00:43:26.581043 kernel: x86/mm: Memory block size: 128MB
Oct 29 00:43:26.581051 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bb7f000-0x9bbfefff] (524288 bytes)
Oct 29 00:43:26.581060 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bfb5000-0x9bfb6fff] (8192 bytes)
Oct 29 00:43:26.581068 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 29 00:43:26.581077 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Oct 29 00:43:26.581087 kernel: pinctrl core: initialized pinctrl subsystem
Oct 29 00:43:26.581095 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 29 00:43:26.581104 kernel: audit: initializing netlink subsys (disabled)
Oct 29 00:43:26.581112 kernel: audit: type=2000 audit(1761698604.268:1): state=initialized audit_enabled=0 res=1
Oct 29 00:43:26.581120 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 29 00:43:26.581129 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 29 00:43:26.581139 kernel: cpuidle: using governor menu
Oct 29 00:43:26.581169 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 29 00:43:26.581178 kernel: dca service started, version 1.12.1
Oct 29 00:43:26.581189 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Oct 29 00:43:26.581200 kernel: PCI: Using configuration type 1 for base access
Oct 29 00:43:26.581209 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 29 00:43:26.581217 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 29 00:43:26.581225 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct 29 00:43:26.581236 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 29 00:43:26.581245 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 29 00:43:26.581253 kernel: ACPI: Added _OSI(Module Device)
Oct 29 00:43:26.581261 kernel: ACPI: Added _OSI(Processor Device)
Oct 29 00:43:26.581270 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 29 00:43:26.581278 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 29 00:43:26.581286 kernel: ACPI: Interpreter enabled
Oct 29 00:43:26.581297 kernel: ACPI: PM: (supports S0 S5)
Oct 29 00:43:26.581305 kernel: ACPI: Using IOAPIC for interrupt routing
Oct 29 00:43:26.581314 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 29 00:43:26.581322 kernel: PCI: Using E820 reservations for host bridge windows
Oct 29 00:43:26.581331 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Oct 29 00:43:26.581339 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 29 00:43:26.581605 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 29 00:43:26.581793 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Oct 29 00:43:26.582000 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Oct 29 00:43:26.582013 kernel: PCI host bridge to bus 0000:00
Oct 29 00:43:26.582191 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Oct 29 00:43:26.582353 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Oct 29 00:43:26.582539 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 29 00:43:26.582703 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Oct 29 00:43:26.582863 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Oct 29 00:43:26.583047 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Oct 29 00:43:26.583207 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 29 00:43:26.587193 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Oct 29 00:43:26.587401 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Oct 29 00:43:26.587588 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Oct 29 00:43:26.587773 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Oct 29 00:43:26.587966 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Oct 29 00:43:26.588163 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 29 00:43:26.588384 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Oct 29 00:43:26.588580 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Oct 29 00:43:26.588755 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Oct 29 00:43:26.588945 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Oct 29 00:43:26.589131 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Oct 29 00:43:26.589306 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Oct 29 00:43:26.589522 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Oct 29 00:43:26.589704 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Oct 29 00:43:26.591690 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct 29 00:43:26.591900 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Oct 29 00:43:26.592076 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Oct 29 00:43:26.592256 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Oct 29 00:43:26.592437 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Oct 29 00:43:26.592701 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Oct 29 00:43:26.593103 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Oct 29 00:43:26.593489 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Oct 29 00:43:26.593706 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Oct 29 00:43:26.593926 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Oct 29 00:43:26.594111 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Oct 29 00:43:26.594284 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Oct 29 00:43:26.594296 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 29 00:43:26.594305 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 29 00:43:26.594313 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 29 00:43:26.594326 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 29 00:43:26.594340 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Oct 29 00:43:26.594348 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Oct 29 00:43:26.594356 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Oct 29 00:43:26.594365 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Oct 29 00:43:26.594373 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Oct 29 00:43:26.594382 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Oct 29 00:43:26.594390 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Oct 29 00:43:26.594401 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Oct 29 00:43:26.594409 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Oct 29 00:43:26.594418 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Oct 29 00:43:26.594426 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Oct 29 00:43:26.594435 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Oct 29 00:43:26.594443 kernel: iommu: Default domain type: Translated
Oct 29 00:43:26.594452 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 29 00:43:26.594462 kernel: efivars: Registered efivars operations
Oct 29 00:43:26.594471 kernel: PCI: Using ACPI for IRQ routing
Oct 29 00:43:26.594479 kernel: PCI: pci_cache_line_size set to 64 bytes
Oct 29 00:43:26.594488 kernel: e820: reserve RAM buffer [mem 0x0009f000-0x0009ffff]
Oct 29 00:43:26.594496 kernel: e820: reserve RAM buffer [mem 0x9a100018-0x9bffffff]
Oct 29 00:43:26.594504 kernel: e820: reserve RAM buffer [mem 0x9a13d018-0x9bffffff]
Oct 29 00:43:26.594521 kernel: e820: reserve RAM buffer [mem 0x9b8ed000-0x9bffffff]
Oct 29 00:43:26.594532 kernel: e820: reserve RAM buffer [mem 0x9bfb1000-0x9bffffff]
Oct 29 00:43:26.594707 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Oct 29 00:43:26.594921 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Oct 29 00:43:26.595100 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 29 00:43:26.595111 kernel: vgaarb: loaded
Oct 29 00:43:26.595119 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Oct 29 00:43:26.595128 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Oct 29 00:43:26.595140 kernel: clocksource: Switched to clocksource kvm-clock
Oct 29 00:43:26.595149 kernel: VFS: Disk quotas dquot_6.6.0
Oct 29 00:43:26.595157 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 29 00:43:26.595166 kernel: pnp: PnP ACPI init
Oct 29 00:43:26.595352 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Oct 29 00:43:26.595364 kernel: pnp: PnP ACPI: found 6 devices
Oct 29 00:43:26.595376 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 29 00:43:26.595384 kernel: NET: Registered PF_INET protocol family
Oct 29 00:43:26.595393 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 29 00:43:26.595402 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Oct 29 00:43:26.595410 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 29 00:43:26.595419 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 29 00:43:26.595427 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Oct 29 00:43:26.595438 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Oct 29 00:43:26.595446 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 29 00:43:26.595455 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 29 00:43:26.595463 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 29 00:43:26.595471 kernel: NET: Registered PF_XDP protocol family
Oct 29 00:43:26.595654 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Oct 29 00:43:26.595829 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Oct 29 00:43:26.596015 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Oct 29 00:43:26.596211 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Oct 29 00:43:26.596415 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 29 00:43:26.596586 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Oct 29 00:43:26.596748 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Oct 29 00:43:26.596922 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Oct 29 00:43:26.596940 kernel: PCI: CLS 0 bytes, default 64
Oct 29 00:43:26.596949 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Oct 29 00:43:26.596957 kernel: Initialise system trusted keyrings
Oct 29 00:43:26.596966 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Oct 29 00:43:26.596974 kernel: Key type asymmetric registered
Oct 29 00:43:26.596983 kernel: Asymmetric key parser 'x509' registered
Oct 29 00:43:26.597009 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Oct 29 00:43:26.597022 kernel: io scheduler mq-deadline registered
Oct 29 00:43:26.597031 kernel: io scheduler kyber registered
Oct 29 00:43:26.597040 kernel: io scheduler bfq registered
Oct 29 00:43:26.597049 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Oct 29 00:43:26.597058 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Oct 29 00:43:26.597067 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Oct 29 00:43:26.597076 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Oct 29 00:43:26.597085 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 29 00:43:26.597096 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 29 00:43:26.597105 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 29 00:43:26.597113 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 29 00:43:26.597122 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 29 00:43:26.597131 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Oct 29 00:43:26.597319 kernel: rtc_cmos 00:04: RTC can wake from S4
Oct 29 00:43:26.597494 kernel: rtc_cmos 00:04: registered as rtc0
Oct 29 00:43:26.597674 kernel: rtc_cmos 00:04: setting system clock to 2025-10-29T00:43:24 UTC (1761698604)
Oct 29 00:43:26.597840 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Oct 29 00:43:26.597851 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Oct 29 00:43:26.597861 kernel: efifb: probing for efifb
Oct 29 00:43:26.597943 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Oct 29 00:43:26.597988 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Oct 29 00:43:26.598010 kernel: efifb: scrolling: redraw
Oct 29 00:43:26.598023 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Oct 29 00:43:26.598037 kernel: Console: switching to colour frame buffer device 160x50
Oct 29 00:43:26.598053 kernel: fb0: EFI VGA frame buffer device
Oct 29 00:43:26.598070 kernel: pstore: Using crash dump compression: deflate
Oct 29 00:43:26.598086 kernel: pstore: Registered efi_pstore as persistent store backend
Oct 29 00:43:26.598100 kernel: NET: Registered PF_INET6 protocol family
Oct 29 00:43:26.598113 kernel: Segment Routing with IPv6
Oct 29 00:43:26.598126 kernel: In-situ OAM (IOAM) with IPv6
Oct 29 00:43:26.598139 kernel: NET: Registered PF_PACKET protocol family
Oct 29 00:43:26.598152 kernel: Key type dns_resolver registered
Oct 29 00:43:26.598166 kernel: IPI shorthand broadcast: enabled
Oct 29 00:43:26.598182 kernel: sched_clock: Marking stable (1308003059, 261625945)->(1710565581, -140936577)
Oct 29 00:43:26.598195 kernel: registered taskstats version 1
Oct 29 00:43:26.598208 kernel: Loading compiled-in X.509 certificates
Oct 29 00:43:26.598221 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: 4eb70affb0e364bb9bcbea2a9416e57c31aed070'
Oct 29 00:43:26.598234 kernel: Demotion targets for Node 0: null
Oct 29 00:43:26.598248 kernel: Key type .fscrypt registered
Oct 29 00:43:26.598260 kernel: Key type fscrypt-provisioning registered
Oct 29 00:43:26.598276 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 29 00:43:26.598289 kernel: ima: Allocated hash algorithm: sha1
Oct 29 00:43:26.598302 kernel: ima: No architecture policies found
Oct 29 00:43:26.598316 kernel: clk: Disabling unused clocks
Oct 29 00:43:26.598329 kernel: Freeing unused kernel image (initmem) memory: 15964K
Oct 29 00:43:26.598342 kernel: Write protecting the kernel read-only data: 40960k
Oct 29 00:43:26.598355 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Oct 29 00:43:26.598372 kernel: Run /init as init process
Oct 29 00:43:26.598384 kernel: with arguments:
Oct 29 00:43:26.598398 kernel: /init
Oct 29 00:43:26.598410 kernel: with environment:
Oct 29 00:43:26.598422 kernel: HOME=/
Oct 29 00:43:26.598436 kernel: TERM=linux
Oct 29 00:43:26.598449 kernel: SCSI subsystem initialized
Oct 29 00:43:26.598465 kernel: libata version 3.00 loaded.
Oct 29 00:43:26.598773 kernel: ahci 0000:00:1f.2: version 3.0
Oct 29 00:43:26.598795 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Oct 29 00:43:26.599055 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Oct 29 00:43:26.599271 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Oct 29 00:43:26.599492 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Oct 29 00:43:26.599859 kernel: scsi host0: ahci
Oct 29 00:43:26.600248 kernel: scsi host1: ahci
Oct 29 00:43:26.600650 kernel: scsi host2: ahci
Oct 29 00:43:26.600940 kernel: scsi host3: ahci
Oct 29 00:43:26.601184 kernel: scsi host4: ahci
Oct 29 00:43:26.601420 kernel: scsi host5: ahci
Oct 29 00:43:26.601438 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 26 lpm-pol 1
Oct 29 00:43:26.601451 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 26 lpm-pol 1
Oct 29 00:43:26.601463 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 26 lpm-pol 1
Oct 29 00:43:26.601475 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 26 lpm-pol 1
Oct 29 00:43:26.601488 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 26 lpm-pol 1
Oct 29 00:43:26.601501 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 26 lpm-pol 1
Oct 29 00:43:26.601530 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Oct 29 00:43:26.601541 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Oct 29 00:43:26.601553 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Oct 29 00:43:26.601565 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Oct 29 00:43:26.601576 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Oct 29 00:43:26.601588 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Oct 29 00:43:26.601600 kernel: ata3.00: LPM support broken, forcing max_power
Oct 29 00:43:26.601616 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Oct 29 00:43:26.601628 kernel: ata3.00: applying bridge limits
Oct 29 00:43:26.601641 kernel: ata3.00: LPM support broken, forcing max_power
Oct 29 00:43:26.601653 kernel: ata3.00: configured for UDMA/100
Oct 29 00:43:26.601943 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Oct 29 00:43:26.602180 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Oct 29 00:43:26.602392 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB)
Oct 29 00:43:26.602407 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Oct 29 00:43:26.602417 kernel: GPT:16515071 != 27000831
Oct 29 00:43:26.602426 kernel: GPT:Alternate GPT header not at the end of the disk.
Oct 29 00:43:26.602435 kernel: GPT:16515071 != 27000831
Oct 29 00:43:26.602444 kernel: GPT: Use GNU Parted to correct GPT errors.
Oct 29 00:43:26.602453 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 29 00:43:26.602467 kernel: Lockdown: modprobe: unsigned module loading is restricted; see man kernel_lockdown.7
Oct 29 00:43:26.602683 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Oct 29 00:43:26.602698 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct 29 00:43:26.602917 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Oct 29 00:43:26.602935 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 29 00:43:26.602947 kernel: device-mapper: uevent: version 1.0.3
Oct 29 00:43:26.602960 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Oct 29 00:43:26.602978 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Oct 29 00:43:26.602991 kernel: Lockdown: modprobe: unsigned module loading is restricted; see man kernel_lockdown.7
Oct 29 00:43:26.603003 kernel: Lockdown: modprobe: unsigned module loading is restricted; see man kernel_lockdown.7
Oct 29 00:43:26.603015 kernel: raid6: avx2x4 gen() 26604 MB/s
Oct 29 00:43:26.603027 kernel: raid6: avx2x2 gen() 27386 MB/s
Oct 29 00:43:26.603040 kernel: raid6: avx2x1 gen() 23141 MB/s
Oct 29 00:43:26.603052 kernel: raid6: using algorithm avx2x2 gen() 27386 MB/s
Oct 29 00:43:26.603067 kernel: raid6: .... xor() 19523 MB/s, rmw enabled
Oct 29 00:43:26.603080 kernel: raid6: using avx2x2 recovery algorithm
Oct 29 00:43:26.603092 kernel: Lockdown: modprobe: unsigned module loading is restricted; see man kernel_lockdown.7
Oct 29 00:43:26.603105 kernel: Lockdown: modprobe: unsigned module loading is restricted; see man kernel_lockdown.7
Oct 29 00:43:26.603116 kernel: Lockdown: modprobe: unsigned module loading is restricted; see man kernel_lockdown.7
Oct 29 00:43:26.603128 kernel: xor: automatically using best checksumming function avx
Oct 29 00:43:26.603140 kernel: Lockdown: modprobe: unsigned module loading is restricted; see man kernel_lockdown.7
Oct 29 00:43:26.603152 kernel: Btrfs loaded, zoned=no, fsverity=no
Oct 29 00:43:26.603169 kernel: BTRFS: device fsid c0171910-1eb4-4fd7-b94c-9d6b11be282f devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (176)
Oct 29 00:43:26.603184 kernel: BTRFS info (device dm-0): first mount of filesystem c0171910-1eb4-4fd7-b94c-9d6b11be282f
Oct 29 00:43:26.603198 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Oct 29 00:43:26.603214 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Oct 29 00:43:26.603229 kernel: BTRFS info (device dm-0): enabling free space tree
Oct 29 00:43:26.603243 kernel: Lockdown: modprobe: unsigned module loading is restricted; see man kernel_lockdown.7
Oct 29 00:43:26.603255 kernel: loop: module loaded
Oct 29 00:43:26.603273 kernel: loop0: detected capacity change from 0 to 100120
Oct 29 00:43:26.603286 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Oct 29 00:43:26.603301 systemd[1]: Successfully made /usr/ read-only.
Oct 29 00:43:26.603320 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Oct 29 00:43:26.603334 systemd[1]: Detected virtualization kvm.
Oct 29 00:43:26.603347 systemd[1]: Detected architecture x86-64.
Oct 29 00:43:26.603363 systemd[1]: Running in initrd.
Oct 29 00:43:26.603376 systemd[1]: No hostname configured, using default hostname.
Oct 29 00:43:26.603391 systemd[1]: Hostname set to .
Oct 29 00:43:26.603404 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Oct 29 00:43:26.603420 systemd[1]: Queued start job for default target initrd.target.
Oct 29 00:43:26.603434 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Oct 29 00:43:26.603447 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 29 00:43:26.603466 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 29 00:43:26.603481 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Oct 29 00:43:26.603496 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 29 00:43:26.603521 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Oct 29 00:43:26.603536 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Oct 29 00:43:26.603553 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 29 00:43:26.603567 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 29 00:43:26.603581 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Oct 29 00:43:26.603595 systemd[1]: Reached target paths.target - Path Units.
Oct 29 00:43:26.603609 systemd[1]: Reached target slices.target - Slice Units.
Oct 29 00:43:26.603622 systemd[1]: Reached target swap.target - Swaps.
Oct 29 00:43:26.603635 systemd[1]: Reached target timers.target - Timer Units.
Oct 29 00:43:26.603652 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Oct 29 00:43:26.603666 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 29 00:43:26.603679 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 29 00:43:26.603693 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Oct 29 00:43:26.603706 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 29 00:43:26.603719 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 29 00:43:26.603733 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 29 00:43:26.603749 systemd[1]: Reached target sockets.target - Socket Units.
Oct 29 00:43:26.603764 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Oct 29 00:43:26.603777 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Oct 29 00:43:26.603791 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 29 00:43:26.603804 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Oct 29 00:43:26.603819 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Oct 29 00:43:26.603836 systemd[1]: Starting systemd-fsck-usr.service...
Oct 29 00:43:26.603850 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 29 00:43:26.603863 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 29 00:43:26.603892 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 29 00:43:26.603907 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Oct 29 00:43:26.603926 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 29 00:43:26.603939 systemd[1]: Finished systemd-fsck-usr.service.
Oct 29 00:43:26.603953 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 29 00:43:26.604008 systemd-journald[310]: Collecting audit messages is disabled.
Oct 29 00:43:26.604049 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 29 00:43:26.604063 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 29 00:43:26.604077 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 29 00:43:26.604090 kernel: Bridge firewalling registered
Oct 29 00:43:26.604104 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 29 00:43:26.604121 systemd-journald[310]: Journal started
Oct 29 00:43:26.604147 systemd-journald[310]: Runtime Journal (/run/log/journal/3e982396dd734e69bc38bb09f8e9627a) is 5.9M, max 47.9M, 41.9M free.
Oct 29 00:43:26.600710 systemd-modules-load[313]: Inserted module 'br_netfilter'
Oct 29 00:43:26.607977 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 29 00:43:26.610397 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 29 00:43:26.619633 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 29 00:43:26.623982 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 29 00:43:26.627642 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 29 00:43:26.634529 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 29 00:43:26.640005 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 29 00:43:26.642455 systemd-tmpfiles[335]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Oct 29 00:43:26.646641 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 29 00:43:26.649985 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 29 00:43:26.667080 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 29 00:43:26.670858 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Oct 29 00:43:26.698656 dracut-cmdline[354]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=54ef1c344b2a47697b32f3227bd37f41d37acb1889c1eaea33b22ce408b7b3ae
Oct 29 00:43:26.734320 systemd-resolved[343]: Positive Trust Anchors:
Oct 29 00:43:26.734339 systemd-resolved[343]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 29 00:43:26.734346 systemd-resolved[343]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Oct 29 00:43:26.734394 systemd-resolved[343]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 29 00:43:26.762332 systemd-resolved[343]: Defaulting to hostname 'linux'.
Oct 29 00:43:26.763666 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 29 00:43:26.765824 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 29 00:43:26.833920 kernel: Loading iSCSI transport class v2.0-870.
Oct 29 00:43:26.848910 kernel: iscsi: registered transport (tcp)
Oct 29 00:43:26.871905 kernel: iscsi: registered transport (qla4xxx)
Oct 29 00:43:26.871932 kernel: QLogic iSCSI HBA Driver
Oct 29 00:43:26.900889 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 29 00:43:26.921107 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 29 00:43:26.921560 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 29 00:43:26.992951 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Oct 29 00:43:26.997845 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Oct 29 00:43:26.998666 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Oct 29 00:43:27.043798 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Oct 29 00:43:27.046028 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 29 00:43:27.081138 systemd-udevd[591]: Using default interface naming scheme 'v257'.
Oct 29 00:43:27.095722 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 29 00:43:27.101523 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Oct 29 00:43:27.137922 dracut-pre-trigger[652]: rd.md=0: removing MD RAID activation
Oct 29 00:43:27.144031 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 29 00:43:27.150100 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 29 00:43:27.172886 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 29 00:43:27.178214 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 29 00:43:27.205387 systemd-networkd[710]: lo: Link UP
Oct 29 00:43:27.205396 systemd-networkd[710]: lo: Gained carrier
Oct 29 00:43:27.206020 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 29 00:43:27.206828 systemd[1]: Reached target network.target - Network.
Oct 29 00:43:27.288805 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 29 00:43:27.296070 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Oct 29 00:43:27.331413 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Oct 29 00:43:27.354436 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Oct 29 00:43:27.379795 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 29 00:43:27.390886 kernel: cryptd: max_cpu_qlen set to 1000
Oct 29 00:43:27.391722 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Oct 29 00:43:27.398016 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Oct 29 00:43:27.408929 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Oct 29 00:43:27.412110 systemd-networkd[710]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Oct 29 00:43:27.412120 systemd-networkd[710]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 29 00:43:27.413458 systemd-networkd[710]: eth0: Link UP
Oct 29 00:43:27.425571 kernel: AES CTR mode by8 optimization enabled
Oct 29 00:43:27.413717 systemd-networkd[710]: eth0: Gained carrier
Oct 29 00:43:27.413728 systemd-networkd[710]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Oct 29 00:43:27.415539 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 29 00:43:27.415714 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 29 00:43:27.422978 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Oct 29 00:43:27.435765 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 29 00:43:27.438932 systemd-networkd[710]: eth0: DHCPv4 address 10.0.0.95/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 29 00:43:27.469690 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 29 00:43:27.643118 disk-uuid[793]: Primary Header is updated.
Oct 29 00:43:27.643118 disk-uuid[793]: Secondary Entries is updated.
Oct 29 00:43:27.643118 disk-uuid[793]: Secondary Header is updated.
Oct 29 00:43:27.643672 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Oct 29 00:43:27.648792 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 29 00:43:27.650922 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 29 00:43:27.651010 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 29 00:43:27.658696 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Oct 29 00:43:27.691749 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Oct 29 00:43:28.686679 disk-uuid[842]: Warning: The kernel is still using the old partition table.
Oct 29 00:43:28.686679 disk-uuid[842]: The new table will be used at the next reboot or after you
Oct 29 00:43:28.686679 disk-uuid[842]: run partprobe(8) or kpartx(8)
Oct 29 00:43:28.686679 disk-uuid[842]: The operation has completed successfully.
Oct 29 00:43:28.690036 systemd-networkd[710]: eth0: Gained IPv6LL
Oct 29 00:43:28.700263 systemd[1]: disk-uuid.service: Deactivated successfully.
Oct 29 00:43:28.700413 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Oct 29 00:43:28.703715 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Oct 29 00:43:28.742172 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (862)
Oct 29 00:43:28.742221 kernel: BTRFS info (device vda6): first mount of filesystem ba5c42d5-4e97-4410-b3e4-abc54f9b4dae
Oct 29 00:43:28.742233 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 29 00:43:28.747681 kernel: BTRFS info (device vda6): turning on async discard
Oct 29 00:43:28.747748 kernel: BTRFS info (device vda6): enabling free space tree
Oct 29 00:43:28.755886 kernel: BTRFS info (device vda6): last unmount of filesystem ba5c42d5-4e97-4410-b3e4-abc54f9b4dae
Oct 29 00:43:28.756434 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Oct 29 00:43:28.757623 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Oct 29 00:43:28.876084 ignition[881]: Ignition 2.22.0
Oct 29 00:43:28.876103 ignition[881]: Stage: fetch-offline
Oct 29 00:43:28.876159 ignition[881]: no configs at "/usr/lib/ignition/base.d"
Oct 29 00:43:28.876171 ignition[881]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 29 00:43:28.876259 ignition[881]: parsed url from cmdline: ""
Oct 29 00:43:28.876263 ignition[881]: no config URL provided
Oct 29 00:43:28.876268 ignition[881]: reading system config file "/usr/lib/ignition/user.ign"
Oct 29 00:43:28.876281 ignition[881]: no config at "/usr/lib/ignition/user.ign"
Oct 29 00:43:28.876330 ignition[881]: op(1): [started] loading QEMU firmware config module
Oct 29 00:43:28.876335 ignition[881]: op(1): executing: "modprobe" "qemu_fw_cfg"
Oct 29 00:43:28.887002 ignition[881]: op(1): [finished] loading QEMU firmware config module
Oct 29 00:43:28.887030 ignition[881]: QEMU firmware config was not found. Ignoring...
Oct 29 00:43:28.970426 ignition[881]: parsing config with SHA512: f4656536902062ac11b0aba9e9c9d613f2e276a0afbf38975dc50bca60b5252a01e37b47e95a7b808164e9675e9bb51cf16deece6cafdffa6614621d0b94ea09
Oct 29 00:43:28.976150 unknown[881]: fetched base config from "system"
Oct 29 00:43:28.976535 unknown[881]: fetched user config from "qemu"
Oct 29 00:43:28.976942 ignition[881]: fetch-offline: fetch-offline passed
Oct 29 00:43:28.979973 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 29 00:43:28.976998 ignition[881]: Ignition finished successfully
Oct 29 00:43:28.983170 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Oct 29 00:43:28.984202 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Oct 29 00:43:29.025433 ignition[893]: Ignition 2.22.0
Oct 29 00:43:29.025455 ignition[893]: Stage: kargs
Oct 29 00:43:29.025620 ignition[893]: no configs at "/usr/lib/ignition/base.d"
Oct 29 00:43:29.025633 ignition[893]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 29 00:43:29.026548 ignition[893]: kargs: kargs passed
Oct 29 00:43:29.026595 ignition[893]: Ignition finished successfully
Oct 29 00:43:29.032388 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Oct 29 00:43:29.037375 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Oct 29 00:43:29.069996 ignition[901]: Ignition 2.22.0
Oct 29 00:43:29.070011 ignition[901]: Stage: disks
Oct 29 00:43:29.070139 ignition[901]: no configs at "/usr/lib/ignition/base.d"
Oct 29 00:43:29.070150 ignition[901]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 29 00:43:29.070861 ignition[901]: disks: disks passed
Oct 29 00:43:29.075169 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Oct 29 00:43:29.070926 ignition[901]: Ignition finished successfully
Oct 29 00:43:29.075682 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Oct 29 00:43:29.076332 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Oct 29 00:43:29.083190 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 29 00:43:29.085938 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 29 00:43:29.086255 systemd[1]: Reached target basic.target - Basic System.
Oct 29 00:43:29.092108 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Oct 29 00:43:29.139341 systemd-fsck[911]: ROOT: clean, 15/456736 files, 38230/456704 blocks
Oct 29 00:43:29.147281 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Oct 29 00:43:29.148644 systemd[1]: Mounting sysroot.mount - /sysroot...
Oct 29 00:43:29.277903 kernel: EXT4-fs (vda9): mounted filesystem ef53721c-fae5-4ad9-8976-8181c84bc175 r/w with ordered data mode. Quota mode: none.
Oct 29 00:43:29.278717 systemd[1]: Mounted sysroot.mount - /sysroot.
Oct 29 00:43:29.282068 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Oct 29 00:43:29.285513 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 29 00:43:29.288964 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Oct 29 00:43:29.290668 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Oct 29 00:43:29.290713 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Oct 29 00:43:29.290746 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 29 00:43:29.316511 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Oct 29 00:43:29.321469 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Oct 29 00:43:29.330364 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (920)
Oct 29 00:43:29.330388 kernel: BTRFS info (device vda6): first mount of filesystem ba5c42d5-4e97-4410-b3e4-abc54f9b4dae
Oct 29 00:43:29.330400 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 29 00:43:29.330411 kernel: BTRFS info (device vda6): turning on async discard
Oct 29 00:43:29.330425 kernel: BTRFS info (device vda6): enabling free space tree
Oct 29 00:43:29.331465 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 29 00:43:29.400974 initrd-setup-root[944]: cut: /sysroot/etc/passwd: No such file or directory
Oct 29 00:43:29.406656 initrd-setup-root[951]: cut: /sysroot/etc/group: No such file or directory
Oct 29 00:43:29.412111 initrd-setup-root[958]: cut: /sysroot/etc/shadow: No such file or directory
Oct 29 00:43:29.417146 initrd-setup-root[965]: cut: /sysroot/etc/gshadow: No such file or directory
Oct 29 00:43:29.528977 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Oct 29 00:43:29.533145 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Oct 29 00:43:29.537932 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Oct 29 00:43:29.565904 kernel: BTRFS info (device vda6): last unmount of filesystem ba5c42d5-4e97-4410-b3e4-abc54f9b4dae
Oct 29 00:43:29.580054 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Oct 29 00:43:29.600690 ignition[1034]: INFO : Ignition 2.22.0
Oct 29 00:43:29.600690 ignition[1034]: INFO : Stage: mount
Oct 29 00:43:29.603501 ignition[1034]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 29 00:43:29.603501 ignition[1034]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 29 00:43:29.603501 ignition[1034]: INFO : mount: mount passed
Oct 29 00:43:29.603501 ignition[1034]: INFO : Ignition finished successfully
Oct 29 00:43:29.604994 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Oct 29 00:43:29.608092 systemd[1]: Starting ignition-files.service - Ignition (files)...
Oct 29 00:43:29.730880 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Oct 29 00:43:29.733139 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 29 00:43:29.768418 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1046)
Oct 29 00:43:29.768524 kernel: BTRFS info (device vda6): first mount of filesystem ba5c42d5-4e97-4410-b3e4-abc54f9b4dae
Oct 29 00:43:29.768542 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 29 00:43:29.774145 kernel: BTRFS info (device vda6): turning on async discard
Oct 29 00:43:29.774223 kernel: BTRFS info (device vda6): enabling free space tree
Oct 29 00:43:29.775798 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 29 00:43:29.820597 ignition[1063]: INFO : Ignition 2.22.0
Oct 29 00:43:29.820597 ignition[1063]: INFO : Stage: files
Oct 29 00:43:29.823593 ignition[1063]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 29 00:43:29.823593 ignition[1063]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 29 00:43:29.823593 ignition[1063]: DEBUG : files: compiled without relabeling support, skipping
Oct 29 00:43:29.829475 ignition[1063]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 29 00:43:29.829475 ignition[1063]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 29 00:43:29.834041 ignition[1063]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 29 00:43:29.836600 ignition[1063]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 29 00:43:29.836600 ignition[1063]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 29 00:43:29.835527 unknown[1063]: wrote ssh authorized keys file for user: core
Oct 29 00:43:29.842906 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Oct 29 00:43:29.842906 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Oct 29 00:43:29.891406 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Oct 29 00:43:29.943754 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Oct 29 00:43:29.943754 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Oct 29 00:43:29.950437 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Oct 29 00:43:29.950437 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 29 00:43:29.950437 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 29 00:43:29.950437 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 29 00:43:29.950437 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 29 00:43:29.950437 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 29 00:43:29.950437 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 29 00:43:29.950437 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 29 00:43:29.950437 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 29 00:43:29.950437 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Oct 29 00:43:29.979833 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Oct 29 00:43:29.979833 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Oct 29 00:43:29.979833 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1
Oct 29 00:43:30.285131 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Oct 29 00:43:30.697387 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Oct 29 00:43:30.697387 ignition[1063]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Oct 29 00:43:30.703613 ignition[1063]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 29 00:43:30.755756 ignition[1063]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 29 00:43:30.755756 ignition[1063]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Oct 29 00:43:30.755756 ignition[1063]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Oct 29 00:43:30.763793 ignition[1063]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 29 00:43:30.763793 ignition[1063]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 29 00:43:30.763793
ignition[1063]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Oct 29 00:43:30.763793 ignition[1063]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Oct 29 00:43:30.783349 ignition[1063]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Oct 29 00:43:30.791028 ignition[1063]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Oct 29 00:43:30.793667 ignition[1063]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Oct 29 00:43:30.793667 ignition[1063]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Oct 29 00:43:30.793667 ignition[1063]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Oct 29 00:43:30.793667 ignition[1063]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 29 00:43:30.793667 ignition[1063]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 29 00:43:30.793667 ignition[1063]: INFO : files: files passed Oct 29 00:43:30.793667 ignition[1063]: INFO : Ignition finished successfully Oct 29 00:43:30.802358 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 29 00:43:30.809359 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 29 00:43:30.815759 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Oct 29 00:43:30.829309 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 29 00:43:30.829464 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Oct 29 00:43:30.836291 initrd-setup-root-after-ignition[1092]: grep: /sysroot/oem/oem-release: No such file or directory
Oct 29 00:43:30.841487 initrd-setup-root-after-ignition[1094]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 29 00:43:30.841487 initrd-setup-root-after-ignition[1094]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 29 00:43:30.849111 initrd-setup-root-after-ignition[1098]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 29 00:43:30.844590 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 29 00:43:30.846816 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 29 00:43:30.851726 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 29 00:43:30.895504 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 29 00:43:30.895651 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 29 00:43:30.897434 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 29 00:43:30.901064 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 29 00:43:30.904642 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 29 00:43:30.905666 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 29 00:43:30.950838 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 29 00:43:30.952436 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 29 00:43:30.984451 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Oct 29 00:43:30.984604 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 29 00:43:30.990102 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 29 00:43:30.990295 systemd[1]: Stopped target timers.target - Timer Units.
Oct 29 00:43:30.993925 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 29 00:43:30.994077 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 29 00:43:31.001772 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 29 00:43:31.001958 systemd[1]: Stopped target basic.target - Basic System.
Oct 29 00:43:31.005035 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 29 00:43:31.005578 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 29 00:43:31.011269 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Oct 29 00:43:31.016279 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Oct 29 00:43:31.018050 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Oct 29 00:43:31.018595 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 29 00:43:31.024444 systemd[1]: Stopped target sysinit.target - System Initialization.
Oct 29 00:43:31.025247 systemd[1]: Stopped target local-fs.target - Local File Systems.
Oct 29 00:43:31.031176 systemd[1]: Stopped target swap.target - Swaps.
Oct 29 00:43:31.034322 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 29 00:43:31.034472 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Oct 29 00:43:31.039847 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Oct 29 00:43:31.041568 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 29 00:43:31.044813 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Oct 29 00:43:31.048414 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 29 00:43:31.049990 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 29 00:43:31.050107 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 29 00:43:31.056995 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 29 00:43:31.057123 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 29 00:43:31.060704 systemd[1]: Stopped target paths.target - Path Units.
Oct 29 00:43:31.062327 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 29 00:43:31.068985 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 29 00:43:31.073420 systemd[1]: Stopped target slices.target - Slice Units.
Oct 29 00:43:31.073595 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 29 00:43:31.076367 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 29 00:43:31.076485 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 29 00:43:31.076941 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 29 00:43:31.077049 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 29 00:43:31.082132 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 29 00:43:31.082266 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 29 00:43:31.085664 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 29 00:43:31.085784 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 29 00:43:31.094921 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 29 00:43:31.101077 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 29 00:43:31.106992 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 29 00:43:31.109250 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 29 00:43:31.114370 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 29 00:43:31.116367 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 29 00:43:31.120944 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 29 00:43:31.123114 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 29 00:43:31.127615 ignition[1119]: INFO : Ignition 2.22.0
Oct 29 00:43:31.127615 ignition[1119]: INFO : Stage: umount
Oct 29 00:43:31.130349 ignition[1119]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 29 00:43:31.130349 ignition[1119]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 29 00:43:31.130349 ignition[1119]: INFO : umount: umount passed
Oct 29 00:43:31.130349 ignition[1119]: INFO : Ignition finished successfully
Oct 29 00:43:31.131848 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 29 00:43:31.131996 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 29 00:43:31.136231 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 29 00:43:31.145533 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 29 00:43:31.145654 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 29 00:43:31.149072 systemd[1]: Stopped target network.target - Network.
Oct 29 00:43:31.151042 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 29 00:43:31.151102 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 29 00:43:31.153751 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 29 00:43:31.153811 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 29 00:43:31.156712 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 29 00:43:31.156771 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 29 00:43:31.157253 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 29 00:43:31.157299 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 29 00:43:31.165827 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 29 00:43:31.167341 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 29 00:43:31.180098 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 29 00:43:31.180258 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 29 00:43:31.189520 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 29 00:43:31.189673 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 29 00:43:31.197580 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Oct 29 00:43:31.197766 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 29 00:43:31.197831 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 29 00:43:31.206283 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 29 00:43:31.206388 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 29 00:43:31.206457 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 29 00:43:31.209355 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 29 00:43:31.209419 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 29 00:43:31.209884 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 29 00:43:31.209933 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 29 00:43:31.216303 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 29 00:43:31.223017 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 29 00:43:31.227002 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 29 00:43:31.231667 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 29 00:43:31.231755 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 29 00:43:31.244980 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 29 00:43:31.245173 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 29 00:43:31.246837 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 29 00:43:31.246915 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 29 00:43:31.250499 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 29 00:43:31.250538 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 29 00:43:31.255177 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 29 00:43:31.255237 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 29 00:43:31.261524 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 29 00:43:31.261581 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 29 00:43:31.269134 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 29 00:43:31.269202 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 29 00:43:31.275356 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 29 00:43:31.275444 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Oct 29 00:43:31.275500 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Oct 29 00:43:31.279203 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 29 00:43:31.279255 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 29 00:43:31.284387 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Oct 29 00:43:31.284444 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 29 00:43:31.286791 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 29 00:43:31.286844 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 29 00:43:31.292127 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 29 00:43:31.292180 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 29 00:43:31.294852 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 29 00:43:31.304007 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 29 00:43:31.312040 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 29 00:43:31.312178 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 29 00:43:31.317623 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 29 00:43:31.318581 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 29 00:43:31.331751 systemd[1]: Switching root.
Oct 29 00:43:31.377653 systemd-journald[310]: Journal stopped
Oct 29 00:43:32.954199 systemd-journald[310]: Received SIGTERM from PID 1 (systemd).
Oct 29 00:43:32.954275 kernel: SELinux: policy capability network_peer_controls=1
Oct 29 00:43:32.954293 kernel: SELinux: policy capability open_perms=1
Oct 29 00:43:32.954311 kernel: SELinux: policy capability extended_socket_class=1
Oct 29 00:43:32.954341 kernel: SELinux: policy capability always_check_network=0
Oct 29 00:43:32.954353 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 29 00:43:32.954366 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 29 00:43:32.954378 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 29 00:43:32.954391 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 29 00:43:32.954410 kernel: SELinux: policy capability userspace_initial_context=0
Oct 29 00:43:32.954422 kernel: audit: type=1403 audit(1761698612.008:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 29 00:43:32.954435 systemd[1]: Successfully loaded SELinux policy in 68.882ms.
Oct 29 00:43:32.954455 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.178ms.
Oct 29 00:43:32.954469 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Oct 29 00:43:32.954483 systemd[1]: Detected virtualization kvm.
Oct 29 00:43:32.954495 systemd[1]: Detected architecture x86-64.
Oct 29 00:43:32.954511 systemd[1]: Detected first boot.
Oct 29 00:43:32.954525 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Oct 29 00:43:32.954538 kernel: Guest personality initialized and is inactive
Oct 29 00:43:32.954550 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Oct 29 00:43:32.954562 kernel: Initialized host personality
Oct 29 00:43:32.954580 zram_generator::config[1166]: No configuration found.
Oct 29 00:43:32.954595 kernel: NET: Registered PF_VSOCK protocol family
Oct 29 00:43:32.954607 systemd[1]: Populated /etc with preset unit settings.
Oct 29 00:43:32.954620 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 29 00:43:32.954634 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Oct 29 00:43:32.954648 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 29 00:43:32.954661 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 29 00:43:32.954678 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 29 00:43:32.954693 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 29 00:43:32.954706 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 29 00:43:32.954844 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 29 00:43:32.954865 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 29 00:43:32.954897 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 29 00:43:32.954911 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 29 00:43:32.954925 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 29 00:43:32.954948 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 29 00:43:32.954962 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 29 00:43:32.954979 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 29 00:43:32.954996 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 29 00:43:32.955012 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 29 00:43:32.955028 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Oct 29 00:43:32.955048 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 29 00:43:32.955064 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 29 00:43:32.955081 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Oct 29 00:43:32.955097 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Oct 29 00:43:32.955113 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Oct 29 00:43:32.955127 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 29 00:43:32.955140 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 29 00:43:32.955155 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 29 00:43:32.955168 systemd[1]: Reached target slices.target - Slice Units.
Oct 29 00:43:32.955181 systemd[1]: Reached target swap.target - Swaps.
Oct 29 00:43:32.955194 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 29 00:43:32.955206 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 29 00:43:32.955222 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Oct 29 00:43:32.955235 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 29 00:43:32.955247 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 29 00:43:32.955263 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 29 00:43:32.955277 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 29 00:43:32.955290 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 29 00:43:32.955303 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 29 00:43:32.955316 systemd[1]: Mounting media.mount - External Media Directory...
Oct 29 00:43:32.955338 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 29 00:43:32.955352 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 29 00:43:32.955370 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 29 00:43:32.955387 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 29 00:43:32.955405 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 29 00:43:32.955423 systemd[1]: Reached target machines.target - Containers.
Oct 29 00:43:32.955441 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 29 00:43:32.955458 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 29 00:43:32.955478 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 29 00:43:32.955491 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 29 00:43:32.955504 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 29 00:43:32.955517 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 29 00:43:32.955530 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 29 00:43:32.955544 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 29 00:43:32.955557 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 29 00:43:32.955573 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 29 00:43:32.955587 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 29 00:43:32.955600 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Oct 29 00:43:32.955613 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Oct 29 00:43:32.955626 systemd[1]: Stopped systemd-fsck-usr.service.
Oct 29 00:43:32.955640 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 29 00:43:32.955653 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 29 00:43:32.955669 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 29 00:43:32.955682 kernel: fuse: init (API version 7.41)
Oct 29 00:43:32.955697 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 29 00:43:32.955710 kernel: ACPI: bus type drm_connector registered
Oct 29 00:43:32.955726 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 29 00:43:32.955739 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Oct 29 00:43:32.955752 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 29 00:43:32.955765 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 29 00:43:32.955778 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 29 00:43:32.955818 systemd-journald[1244]: Collecting audit messages is disabled.
Oct 29 00:43:32.955849 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 29 00:43:32.955862 systemd-journald[1244]: Journal started Oct 29 00:43:32.955903 systemd-journald[1244]: Runtime Journal (/run/log/journal/3e982396dd734e69bc38bb09f8e9627a) is 5.9M, max 47.9M, 41.9M free. Oct 29 00:43:32.589567 systemd[1]: Queued start job for default target multi-user.target. Oct 29 00:43:32.606066 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Oct 29 00:43:32.606675 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 29 00:43:32.958899 systemd[1]: Started systemd-journald.service - Journal Service. Oct 29 00:43:32.962150 systemd[1]: Mounted media.mount - External Media Directory. Oct 29 00:43:32.963953 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Oct 29 00:43:32.966022 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Oct 29 00:43:32.968032 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Oct 29 00:43:32.970064 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Oct 29 00:43:32.972446 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 29 00:43:32.974758 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 29 00:43:32.975004 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Oct 29 00:43:32.977264 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 29 00:43:32.977495 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 29 00:43:32.979689 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 29 00:43:32.979945 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 29 00:43:32.982069 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 29 00:43:32.982291 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 29 00:43:32.984631 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
Oct 29 00:43:32.984930 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Oct 29 00:43:32.987219 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 29 00:43:32.987467 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 29 00:43:32.989741 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 29 00:43:32.992058 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 29 00:43:32.995613 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Oct 29 00:43:32.998179 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Oct 29 00:43:33.015841 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 29 00:43:33.018254 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Oct 29 00:43:33.021587 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Oct 29 00:43:33.024585 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Oct 29 00:43:33.026416 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 29 00:43:33.026446 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 29 00:43:33.029018 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Oct 29 00:43:33.031381 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 29 00:43:33.035017 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Oct 29 00:43:33.038088 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
Oct 29 00:43:33.040035 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 29 00:43:33.044176 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 29 00:43:33.046264 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 29 00:43:33.050201 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 29 00:43:33.054496 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 29 00:43:33.057373 systemd-journald[1244]: Time spent on flushing to /var/log/journal/3e982396dd734e69bc38bb09f8e9627a is 22.850ms for 1031 entries.
Oct 29 00:43:33.057373 systemd-journald[1244]: System Journal (/var/log/journal/3e982396dd734e69bc38bb09f8e9627a) is 8M, max 163.5M, 155.5M free.
Oct 29 00:43:33.105882 systemd-journald[1244]: Received client request to flush runtime journal.
Oct 29 00:43:33.105956 kernel: loop1: detected capacity change from 0 to 219144
Oct 29 00:43:33.058983 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 29 00:43:33.060263 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 29 00:43:33.061754 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Oct 29 00:43:33.062501 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Oct 29 00:43:33.072751 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 29 00:43:33.074809 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Oct 29 00:43:33.080055 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Oct 29 00:43:33.087849 systemd-tmpfiles[1286]: ACLs are not supported, ignoring.
Oct 29 00:43:33.087863 systemd-tmpfiles[1286]: ACLs are not supported, ignoring.
Oct 29 00:43:33.094028 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 29 00:43:33.099277 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 29 00:43:33.103618 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Oct 29 00:43:33.108803 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Oct 29 00:43:33.111897 kernel: loop2: detected capacity change from 0 to 110976
Oct 29 00:43:33.121273 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Oct 29 00:43:33.140904 kernel: loop3: detected capacity change from 0 to 128048
Oct 29 00:43:33.148235 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Oct 29 00:43:33.152518 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 29 00:43:33.155213 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 29 00:43:33.176728 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Oct 29 00:43:33.183899 kernel: loop4: detected capacity change from 0 to 219144
Oct 29 00:43:33.188456 systemd-tmpfiles[1307]: ACLs are not supported, ignoring.
Oct 29 00:43:33.188822 systemd-tmpfiles[1307]: ACLs are not supported, ignoring.
Oct 29 00:43:33.197036 kernel: loop5: detected capacity change from 0 to 110976
Oct 29 00:43:33.194027 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 29 00:43:33.207893 kernel: loop6: detected capacity change from 0 to 128048
Oct 29 00:43:33.214405 (sd-merge)[1310]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'.
Oct 29 00:43:33.218666 (sd-merge)[1310]: Merged extensions into '/usr'.
Oct 29 00:43:33.223728 systemd[1]: Reload requested from client PID 1285 ('systemd-sysext') (unit systemd-sysext.service)...
Oct 29 00:43:33.223748 systemd[1]: Reloading...
Oct 29 00:43:33.280916 zram_generator::config[1345]: No configuration found.
Oct 29 00:43:33.310573 systemd-resolved[1306]: Positive Trust Anchors:
Oct 29 00:43:33.310592 systemd-resolved[1306]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 29 00:43:33.310600 systemd-resolved[1306]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Oct 29 00:43:33.310632 systemd-resolved[1306]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 29 00:43:33.314688 systemd-resolved[1306]: Defaulting to hostname 'linux'.
Oct 29 00:43:33.484644 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 29 00:43:33.484984 systemd[1]: Reloading finished in 260 ms.
Oct 29 00:43:33.516334 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Oct 29 00:43:33.518707 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 29 00:43:33.521130 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Oct 29 00:43:33.526720 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 29 00:43:33.550836 systemd[1]: Starting ensure-sysext.service...
Oct 29 00:43:33.553637 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 29 00:43:33.571064 systemd[1]: Reload requested from client PID 1380 ('systemctl') (unit ensure-sysext.service)...
Oct 29 00:43:33.571091 systemd[1]: Reloading...
Oct 29 00:43:33.573130 systemd-tmpfiles[1381]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Oct 29 00:43:33.573174 systemd-tmpfiles[1381]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Oct 29 00:43:33.573523 systemd-tmpfiles[1381]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 29 00:43:33.573794 systemd-tmpfiles[1381]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Oct 29 00:43:33.574741 systemd-tmpfiles[1381]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 29 00:43:33.575049 systemd-tmpfiles[1381]: ACLs are not supported, ignoring.
Oct 29 00:43:33.575123 systemd-tmpfiles[1381]: ACLs are not supported, ignoring.
Oct 29 00:43:33.581043 systemd-tmpfiles[1381]: Detected autofs mount point /boot during canonicalization of boot.
Oct 29 00:43:33.581056 systemd-tmpfiles[1381]: Skipping /boot
Oct 29 00:43:33.592111 systemd-tmpfiles[1381]: Detected autofs mount point /boot during canonicalization of boot.
Oct 29 00:43:33.592243 systemd-tmpfiles[1381]: Skipping /boot
Oct 29 00:43:33.632928 zram_generator::config[1411]: No configuration found.
Oct 29 00:43:33.837965 systemd[1]: Reloading finished in 266 ms.
Oct 29 00:43:33.865846 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Oct 29 00:43:33.887511 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 29 00:43:33.899010 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Oct 29 00:43:33.902123 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Oct 29 00:43:33.905299 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Oct 29 00:43:33.911453 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Oct 29 00:43:33.916527 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 29 00:43:33.922944 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Oct 29 00:43:33.927859 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 29 00:43:33.928054 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 29 00:43:33.930215 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 29 00:43:33.934140 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 29 00:43:33.938290 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 29 00:43:33.940372 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 29 00:43:33.940480 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 29 00:43:33.940570 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 29 00:43:33.944037 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 29 00:43:33.944262 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 29 00:43:33.944435 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 29 00:43:33.944520 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 29 00:43:33.944602 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 29 00:43:33.953648 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 29 00:43:33.954934 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 29 00:43:33.956581 systemd-udevd[1454]: Using default interface naming scheme 'v257'.
Oct 29 00:43:33.958112 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Oct 29 00:43:33.961833 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 29 00:43:33.962092 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 29 00:43:33.968725 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 29 00:43:33.969011 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 29 00:43:33.975849 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Oct 29 00:43:33.984228 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 29 00:43:33.984573 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 29 00:43:33.988172 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 29 00:43:33.990610 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 29 00:43:33.990657 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 29 00:43:33.990710 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 29 00:43:33.990761 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 29 00:43:33.990801 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 29 00:43:33.991310 systemd[1]: Finished ensure-sysext.service.
Oct 29 00:43:33.996274 augenrules[1485]: No rules
Oct 29 00:43:34.004092 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Oct 29 00:43:34.006819 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 29 00:43:34.007124 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Oct 29 00:43:34.009451 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 29 00:43:34.009672 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 29 00:43:34.011742 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 29 00:43:34.025084 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 29 00:43:34.035959 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Oct 29 00:43:34.038573 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 29 00:43:34.091676 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Oct 29 00:43:34.120802 systemd-networkd[1502]: lo: Link UP
Oct 29 00:43:34.120813 systemd-networkd[1502]: lo: Gained carrier
Oct 29 00:43:34.123085 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 29 00:43:34.125307 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Oct 29 00:43:34.127525 systemd[1]: Reached target network.target - Network.
Oct 29 00:43:34.129290 systemd[1]: Reached target time-set.target - System Time Set.
Oct 29 00:43:34.135004 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Oct 29 00:43:34.140048 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Oct 29 00:43:34.146274 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 29 00:43:34.150004 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Oct 29 00:43:34.151522 systemd-networkd[1502]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Oct 29 00:43:34.151527 systemd-networkd[1502]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 29 00:43:34.155965 systemd-networkd[1502]: eth0: Link UP
Oct 29 00:43:34.157189 systemd-networkd[1502]: eth0: Gained carrier
Oct 29 00:43:34.157213 systemd-networkd[1502]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Oct 29 00:43:34.171948 systemd-networkd[1502]: eth0: DHCPv4 address 10.0.0.95/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 29 00:43:34.174474 systemd-timesyncd[1490]: Network configuration changed, trying to establish connection.
Oct 29 00:43:35.337908 systemd-resolved[1306]: Clock change detected. Flushing caches.
Oct 29 00:43:35.337995 systemd-timesyncd[1490]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Oct 29 00:43:35.338050 systemd-timesyncd[1490]: Initial clock synchronization to Wed 2025-10-29 00:43:35.337428 UTC.
Oct 29 00:43:35.338144 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Oct 29 00:43:35.341665 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Oct 29 00:43:35.362609 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Oct 29 00:43:35.367598 kernel: ACPI: button: Power Button [PWRF]
Oct 29 00:43:35.398606 kernel: mousedev: PS/2 mouse device common for all mice
Oct 29 00:43:35.404939 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Oct 29 00:43:35.406593 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Oct 29 00:43:35.408592 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Oct 29 00:43:35.502811 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 29 00:43:35.515391 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 29 00:43:35.515809 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 29 00:43:35.525884 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 29 00:43:35.594202 kernel: kvm_amd: TSC scaling supported
Oct 29 00:43:35.594262 kernel: kvm_amd: Nested Virtualization enabled
Oct 29 00:43:35.594302 kernel: kvm_amd: Nested Paging enabled
Oct 29 00:43:35.594316 kernel: kvm_amd: LBR virtualization supported
Oct 29 00:43:35.594329 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Oct 29 00:43:35.594341 kernel: kvm_amd: Virtual GIF supported
Oct 29 00:43:35.641122 ldconfig[1452]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 29 00:43:35.644611 kernel: EDAC MC: Ver: 3.0.0
Oct 29 00:43:35.648115 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Oct 29 00:43:35.650677 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Oct 29 00:43:35.666960 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 29 00:43:35.677484 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Oct 29 00:43:35.679511 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 29 00:43:35.681405 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Oct 29 00:43:35.683481 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Oct 29 00:43:35.685552 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Oct 29 00:43:35.687666 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Oct 29 00:43:35.689553 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Oct 29 00:43:35.691677 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Oct 29 00:43:35.693975 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Oct 29 00:43:35.694017 systemd[1]: Reached target paths.target - Path Units.
Oct 29 00:43:35.695627 systemd[1]: Reached target timers.target - Timer Units.
Oct 29 00:43:35.698329 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Oct 29 00:43:35.702107 systemd[1]: Starting docker.socket - Docker Socket for the API...
Oct 29 00:43:35.705877 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Oct 29 00:43:35.708094 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Oct 29 00:43:35.710191 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Oct 29 00:43:35.716124 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Oct 29 00:43:35.718131 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Oct 29 00:43:35.720736 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Oct 29 00:43:35.723277 systemd[1]: Reached target sockets.target - Socket Units.
Oct 29 00:43:35.724878 systemd[1]: Reached target basic.target - Basic System.
Oct 29 00:43:35.726458 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Oct 29 00:43:35.726492 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Oct 29 00:43:35.727591 systemd[1]: Starting containerd.service - containerd container runtime...
Oct 29 00:43:35.730472 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Oct 29 00:43:35.733076 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Oct 29 00:43:35.736047 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Oct 29 00:43:35.738744 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Oct 29 00:43:35.740420 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Oct 29 00:43:35.741688 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Oct 29 00:43:35.744725 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Oct 29 00:43:35.745972 jq[1567]: false
Oct 29 00:43:35.747460 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Oct 29 00:43:35.751265 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Oct 29 00:43:35.755120 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Oct 29 00:43:35.762065 systemd[1]: Starting systemd-logind.service - User Login Management...
Oct 29 00:43:35.765359 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Refreshing passwd entry cache
Oct 29 00:43:35.765125 oslogin_cache_refresh[1569]: Refreshing passwd entry cache
Oct 29 00:43:35.763763 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Oct 29 00:43:35.764210 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Oct 29 00:43:35.765832 systemd[1]: Starting update-engine.service - Update Engine...
Oct 29 00:43:35.769829 extend-filesystems[1568]: Found /dev/vda6
Oct 29 00:43:35.776530 extend-filesystems[1568]: Found /dev/vda9
Oct 29 00:43:35.769859 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Oct 29 00:43:35.778464 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Failure getting users, quitting
Oct 29 00:43:35.778464 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Oct 29 00:43:35.778012 oslogin_cache_refresh[1569]: Failure getting users, quitting
Oct 29 00:43:35.778030 oslogin_cache_refresh[1569]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Oct 29 00:43:35.779215 extend-filesystems[1568]: Checking size of /dev/vda9
Oct 29 00:43:35.780368 oslogin_cache_refresh[1569]: Refreshing group entry cache
Oct 29 00:43:35.779999 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Oct 29 00:43:35.780716 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Refreshing group entry cache
Oct 29 00:43:35.784279 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Oct 29 00:43:35.784545 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Oct 29 00:43:35.786458 jq[1584]: true
Oct 29 00:43:35.784897 systemd[1]: motdgen.service: Deactivated successfully.
Oct 29 00:43:35.785156 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Oct 29 00:43:35.787245 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Failure getting groups, quitting
Oct 29 00:43:35.787245 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Oct 29 00:43:35.787236 oslogin_cache_refresh[1569]: Failure getting groups, quitting
Oct 29 00:43:35.787248 oslogin_cache_refresh[1569]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Oct 29 00:43:35.788013 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Oct 29 00:43:35.788270 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Oct 29 00:43:35.789948 extend-filesystems[1568]: Resized partition /dev/vda9
Oct 29 00:43:35.790997 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Oct 29 00:43:35.791239 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Oct 29 00:43:35.793127 extend-filesystems[1598]: resize2fs 1.47.3 (8-Jul-2025)
Oct 29 00:43:35.798154 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks
Oct 29 00:43:35.810260 update_engine[1581]: I20251029 00:43:35.810146 1581 main.cc:92] Flatcar Update Engine starting
Oct 29 00:43:35.818288 jq[1600]: true
Oct 29 00:43:35.820377 (ntainerd)[1613]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Oct 29 00:43:35.853717 kernel: EXT4-fs (vda9): resized filesystem to 1784827
Oct 29 00:43:35.839982 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Oct 29 00:43:35.839759 dbus-daemon[1565]: [system] SELinux support is enabled
Oct 29 00:43:35.854125 update_engine[1581]: I20251029 00:43:35.847499 1581 update_check_scheduler.cc:74] Next update check in 7m30s
Oct 29 00:43:35.844233 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Oct 29 00:43:35.844256 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Oct 29 00:43:35.846824 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Oct 29 00:43:35.846845 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Oct 29 00:43:35.855843 extend-filesystems[1598]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Oct 29 00:43:35.855843 extend-filesystems[1598]: old_desc_blocks = 1, new_desc_blocks = 1
Oct 29 00:43:35.855843 extend-filesystems[1598]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long.
Oct 29 00:43:35.871653 extend-filesystems[1568]: Resized filesystem in /dev/vda9
Oct 29 00:43:35.862263 systemd[1]: extend-filesystems.service: Deactivated successfully.
Oct 29 00:43:35.862594 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Oct 29 00:43:35.879565 systemd[1]: Started update-engine.service - Update Engine.
Oct 29 00:43:35.881211 tar[1594]: linux-amd64/LICENSE
Oct 29 00:43:35.881211 tar[1594]: linux-amd64/helm
Oct 29 00:43:35.888680 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Oct 29 00:43:35.898798 systemd-logind[1578]: Watching system buttons on /dev/input/event2 (Power Button)
Oct 29 00:43:35.898828 systemd-logind[1578]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Oct 29 00:43:35.899185 systemd-logind[1578]: New seat seat0.
Oct 29 00:43:35.919756 systemd[1]: Started systemd-logind.service - User Login Management.
Oct 29 00:43:35.924786 bash[1635]: Updated "/home/core/.ssh/authorized_keys"
Oct 29 00:43:35.927142 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Oct 29 00:43:35.932114 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Oct 29 00:43:35.984802 sshd_keygen[1583]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Oct 29 00:43:35.991292 locksmithd[1628]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Oct 29 00:43:36.016197 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Oct 29 00:43:36.021010 systemd[1]: Starting issuegen.service - Generate /run/issue...
Oct 29 00:43:36.038545 systemd[1]: issuegen.service: Deactivated successfully.
Oct 29 00:43:36.038876 systemd[1]: Finished issuegen.service - Generate /run/issue.
Oct 29 00:43:36.042773 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Oct 29 00:43:36.065895 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Oct 29 00:43:36.071118 systemd[1]: Started getty@tty1.service - Getty on tty1.
Oct 29 00:43:36.077090 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Oct 29 00:43:36.079536 systemd[1]: Reached target getty.target - Login Prompts.
Oct 29 00:43:36.090060 containerd[1613]: time="2025-10-29T00:43:36Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Oct 29 00:43:36.091143 containerd[1613]: time="2025-10-29T00:43:36.091100586Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Oct 29 00:43:36.099811 containerd[1613]: time="2025-10-29T00:43:36.099755097Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.797µs"
Oct 29 00:43:36.100607 containerd[1613]: time="2025-10-29T00:43:36.099956324Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Oct 29 00:43:36.100607 containerd[1613]: time="2025-10-29T00:43:36.099994596Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Oct 29 00:43:36.100607 containerd[1613]: time="2025-10-29T00:43:36.100161108Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Oct 29 00:43:36.100607 containerd[1613]: time="2025-10-29T00:43:36.100177759Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Oct 29 00:43:36.100607 containerd[1613]: time="2025-10-29T00:43:36.100208307Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Oct 29 00:43:36.100607 containerd[1613]: time="2025-10-29T00:43:36.100276454Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Oct 29 00:43:36.100607 containerd[1613]: time="2025-10-29T00:43:36.100290841Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Oct 29 00:43:36.100771 containerd[1613]: time="2025-10-29T00:43:36.100611663Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Oct 29 00:43:36.100771 containerd[1613]: time="2025-10-29T00:43:36.100632632Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Oct 29 00:43:36.100771 containerd[1613]: time="2025-10-29T00:43:36.100649644Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Oct 29 00:43:36.100771 containerd[1613]: time="2025-10-29T00:43:36.100659853Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Oct 29 00:43:36.100856 containerd[1613]: time="2025-10-29T00:43:36.100769990Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Oct 29 00:43:36.101122 containerd[1613]: time="2025-10-29T00:43:36.101096272Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Oct 29 00:43:36.101160 containerd[1613]: time="2025-10-29T00:43:36.101137499Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Oct 29 00:43:36.101160 containerd[1613]: time="2025-10-29T00:43:36.101147748Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Oct 29 00:43:36.101232 containerd[1613]: time="2025-10-29T00:43:36.101176422Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Oct 29 00:43:36.101393 containerd[1613]: time="2025-10-29T00:43:36.101372971Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Oct 29 00:43:36.101460 containerd[1613]: time="2025-10-29T00:43:36.101440698Z" level=info msg="metadata content store policy set" policy=shared
Oct 29 00:43:36.106502 containerd[1613]: time="2025-10-29T00:43:36.106468296Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Oct 29 00:43:36.106543 containerd[1613]: time="2025-10-29T00:43:36.106511758Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Oct 29 00:43:36.106543 containerd[1613]: time="2025-10-29T00:43:36.106526175Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Oct 29 00:43:36.106543 containerd[1613]: time="2025-10-29T00:43:36.106536504Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Oct 29 00:43:36.106689 containerd[1613]: time="2025-10-29T00:43:36.106548747Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Oct 29 00:43:36.106689 containerd[1613]: time="2025-10-29T00:43:36.106560068Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Oct 29 00:43:36.106689 containerd[1613]: time="2025-10-29T00:43:36.106585757Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Oct 29 00:43:36.106689 containerd[1613]: time="2025-10-29T00:43:36.106597208Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Oct 29 00:43:36.106689 containerd[1613]: time="2025-10-29T00:43:36.106606766Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Oct 29 00:43:36.106689 containerd[1613]: time="2025-10-29T00:43:36.106616945Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Oct 29 00:43:36.106689 containerd[1613]: time="2025-10-29T00:43:36.106627866Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Oct 29 00:43:36.106689 containerd[1613]: time="2025-10-29T00:43:36.106639247Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Oct 29 00:43:36.106826 containerd[1613]: time="2025-10-29T00:43:36.106749133Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Oct 29 00:43:36.106826 containerd[1613]: time="2025-10-29T00:43:36.106766355Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Oct 29 00:43:36.106826 containerd[1613]: time="2025-10-29T00:43:36.106780792Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Oct 29 00:43:36.106826 containerd[1613]: time="2025-10-29T00:43:36.106791522Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Oct 29 00:43:36.106826 containerd[1613]: time="2025-10-29T00:43:36.106808645Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Oct 29 00:43:36.106826 containerd[1613]: time="2025-10-29T00:43:36.106819956Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Oct 29 00:43:36.106935 containerd[1613]: time="2025-10-29T00:43:36.106831377Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Oct 29 00:43:36.106935
containerd[1613]: time="2025-10-29T00:43:36.106843109Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Oct 29 00:43:36.106935 containerd[1613]: time="2025-10-29T00:43:36.106854671Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Oct 29 00:43:36.106935 containerd[1613]: time="2025-10-29T00:43:36.106865742Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Oct 29 00:43:36.106935 containerd[1613]: time="2025-10-29T00:43:36.106876632Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Oct 29 00:43:36.106935 containerd[1613]: time="2025-10-29T00:43:36.106934180Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Oct 29 00:43:36.107054 containerd[1613]: time="2025-10-29T00:43:36.106952935Z" level=info msg="Start snapshots syncer" Oct 29 00:43:36.107054 containerd[1613]: time="2025-10-29T00:43:36.107011395Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Oct 29 00:43:36.107319 containerd[1613]: time="2025-10-29T00:43:36.107271913Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Oct 29 00:43:36.107532 containerd[1613]: time="2025-10-29T00:43:36.107502045Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Oct 29 00:43:36.107612 containerd[1613]: time="2025-10-29T00:43:36.107594448Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Oct 29 00:43:36.107741 containerd[1613]: time="2025-10-29T00:43:36.107715646Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Oct 29 00:43:36.107741 containerd[1613]: time="2025-10-29T00:43:36.107739761Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Oct 29 00:43:36.107792 containerd[1613]: time="2025-10-29T00:43:36.107749559Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Oct 29 00:43:36.107792 containerd[1613]: time="2025-10-29T00:43:36.107760039Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Oct 29 00:43:36.107792 containerd[1613]: time="2025-10-29T00:43:36.107769867Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Oct 29 00:43:36.107792 containerd[1613]: time="2025-10-29T00:43:36.107779646Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Oct 29 00:43:36.107792 containerd[1613]: time="2025-10-29T00:43:36.107790466Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Oct 29 00:43:36.107876 containerd[1613]: time="2025-10-29T00:43:36.107818158Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Oct 29 00:43:36.107876 containerd[1613]: time="2025-10-29T00:43:36.107830160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Oct 29 00:43:36.107876 containerd[1613]: time="2025-10-29T00:43:36.107840339Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Oct 29 00:43:36.107933 containerd[1613]: time="2025-10-29T00:43:36.107876948Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 29 00:43:36.107933 containerd[1613]: time="2025-10-29T00:43:36.107889862Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 29 00:43:36.107933 containerd[1613]: time="2025-10-29T00:43:36.107898529Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 29 00:43:36.107933 containerd[1613]: time="2025-10-29T00:43:36.107907415Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 29 00:43:36.107933 containerd[1613]: time="2025-10-29T00:43:36.107915400Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Oct 29 00:43:36.107933 containerd[1613]: time="2025-10-29T00:43:36.107924678Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Oct 29 00:43:36.107933 containerd[1613]: time="2025-10-29T00:43:36.107933965Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Oct 29 00:43:36.108063 containerd[1613]: time="2025-10-29T00:43:36.107951207Z" level=info msg="runtime interface created" Oct 29 00:43:36.108063 containerd[1613]: time="2025-10-29T00:43:36.107956898Z" level=info msg="created NRI interface" Oct 29 00:43:36.108063 containerd[1613]: time="2025-10-29T00:43:36.107969462Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Oct 29 00:43:36.108063 containerd[1613]: time="2025-10-29T00:43:36.107988147Z" level=info msg="Connect containerd service" Oct 29 00:43:36.108063 containerd[1613]: time="2025-10-29T00:43:36.108010639Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 29 00:43:36.108874 
containerd[1613]: time="2025-10-29T00:43:36.108836498Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 29 00:43:36.209963 tar[1594]: linux-amd64/README.md Oct 29 00:43:36.231674 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 29 00:43:36.243617 containerd[1613]: time="2025-10-29T00:43:36.243553370Z" level=info msg="Start subscribing containerd event" Oct 29 00:43:36.243706 containerd[1613]: time="2025-10-29T00:43:36.243645162Z" level=info msg="Start recovering state" Oct 29 00:43:36.243824 containerd[1613]: time="2025-10-29T00:43:36.243780285Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 29 00:43:36.243850 containerd[1613]: time="2025-10-29T00:43:36.243801916Z" level=info msg="Start event monitor" Oct 29 00:43:36.243870 containerd[1613]: time="2025-10-29T00:43:36.243860135Z" level=info msg="Start cni network conf syncer for default" Oct 29 00:43:36.243890 containerd[1613]: time="2025-10-29T00:43:36.243871446Z" level=info msg="Start streaming server" Oct 29 00:43:36.243931 containerd[1613]: time="2025-10-29T00:43:36.243859514Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 29 00:43:36.243970 containerd[1613]: time="2025-10-29T00:43:36.243889811Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Oct 29 00:43:36.243970 containerd[1613]: time="2025-10-29T00:43:36.243960804Z" level=info msg="runtime interface starting up..." Oct 29 00:43:36.244018 containerd[1613]: time="2025-10-29T00:43:36.243970292Z" level=info msg="starting plugins..." 
Oct 29 00:43:36.244038 containerd[1613]: time="2025-10-29T00:43:36.244026477Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Oct 29 00:43:36.244204 containerd[1613]: time="2025-10-29T00:43:36.244183732Z" level=info msg="containerd successfully booted in 0.154686s" Oct 29 00:43:36.244343 systemd[1]: Started containerd.service - containerd container runtime. Oct 29 00:43:36.439836 systemd-networkd[1502]: eth0: Gained IPv6LL Oct 29 00:43:36.443379 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 29 00:43:36.446650 systemd[1]: Reached target network-online.target - Network is Online. Oct 29 00:43:36.450657 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Oct 29 00:43:36.454322 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 29 00:43:36.466070 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 29 00:43:36.495347 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 29 00:43:36.495795 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Oct 29 00:43:36.499013 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 29 00:43:36.502760 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 29 00:43:37.178463 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 29 00:43:37.181108 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 29 00:43:37.183264 systemd[1]: Startup finished in 2.944s (kernel) + 5.774s (initrd) + 4.081s (userspace) = 12.800s. 
Oct 29 00:43:37.191044 (kubelet)[1707]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 29 00:43:37.560927 kubelet[1707]: E1029 00:43:37.560849 1707 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 29 00:43:37.565399 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 29 00:43:37.565620 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 29 00:43:37.566021 systemd[1]: kubelet.service: Consumed 943ms CPU time, 257.1M memory peak. Oct 29 00:43:39.009394 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 29 00:43:39.010758 systemd[1]: Started sshd@0-10.0.0.95:22-10.0.0.1:57384.service - OpenSSH per-connection server daemon (10.0.0.1:57384). Oct 29 00:43:39.097393 sshd[1720]: Accepted publickey for core from 10.0.0.1 port 57384 ssh2: RSA SHA256:s8tPwnTXOeMVzisbNqqCPwj2+lnJNXB3KVszA1vES1U Oct 29 00:43:39.099698 sshd-session[1720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:43:39.107149 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 29 00:43:39.108372 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 29 00:43:39.114438 systemd-logind[1578]: New session 1 of user core. Oct 29 00:43:39.134630 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 29 00:43:39.137913 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Oct 29 00:43:39.160041 (systemd)[1725]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 29 00:43:39.162713 systemd-logind[1578]: New session c1 of user core. Oct 29 00:43:39.309893 systemd[1725]: Queued start job for default target default.target. Oct 29 00:43:39.326962 systemd[1725]: Created slice app.slice - User Application Slice. Oct 29 00:43:39.326991 systemd[1725]: Reached target paths.target - Paths. Oct 29 00:43:39.327037 systemd[1725]: Reached target timers.target - Timers. Oct 29 00:43:39.328667 systemd[1725]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 29 00:43:39.340492 systemd[1725]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 29 00:43:39.340725 systemd[1725]: Reached target sockets.target - Sockets. Oct 29 00:43:39.340802 systemd[1725]: Reached target basic.target - Basic System. Oct 29 00:43:39.340878 systemd[1725]: Reached target default.target - Main User Target. Oct 29 00:43:39.340945 systemd[1725]: Startup finished in 171ms. Oct 29 00:43:39.341044 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 29 00:43:39.342662 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 29 00:43:39.410200 systemd[1]: Started sshd@1-10.0.0.95:22-10.0.0.1:57386.service - OpenSSH per-connection server daemon (10.0.0.1:57386). Oct 29 00:43:39.462019 sshd[1736]: Accepted publickey for core from 10.0.0.1 port 57386 ssh2: RSA SHA256:s8tPwnTXOeMVzisbNqqCPwj2+lnJNXB3KVszA1vES1U Oct 29 00:43:39.463290 sshd-session[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:43:39.468447 systemd-logind[1578]: New session 2 of user core. Oct 29 00:43:39.478716 systemd[1]: Started session-2.scope - Session 2 of User core. 
Oct 29 00:43:39.532872 sshd[1739]: Connection closed by 10.0.0.1 port 57386 Oct 29 00:43:39.533208 sshd-session[1736]: pam_unix(sshd:session): session closed for user core Oct 29 00:43:39.546411 systemd[1]: sshd@1-10.0.0.95:22-10.0.0.1:57386.service: Deactivated successfully. Oct 29 00:43:39.548158 systemd[1]: session-2.scope: Deactivated successfully. Oct 29 00:43:39.548938 systemd-logind[1578]: Session 2 logged out. Waiting for processes to exit. Oct 29 00:43:39.551467 systemd[1]: Started sshd@2-10.0.0.95:22-10.0.0.1:57394.service - OpenSSH per-connection server daemon (10.0.0.1:57394). Oct 29 00:43:39.552049 systemd-logind[1578]: Removed session 2. Oct 29 00:43:39.616629 sshd[1745]: Accepted publickey for core from 10.0.0.1 port 57394 ssh2: RSA SHA256:s8tPwnTXOeMVzisbNqqCPwj2+lnJNXB3KVszA1vES1U Oct 29 00:43:39.618030 sshd-session[1745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:43:39.622368 systemd-logind[1578]: New session 3 of user core. Oct 29 00:43:39.635702 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 29 00:43:39.686227 sshd[1749]: Connection closed by 10.0.0.1 port 57394 Oct 29 00:43:39.686622 sshd-session[1745]: pam_unix(sshd:session): session closed for user core Oct 29 00:43:39.700255 systemd[1]: sshd@2-10.0.0.95:22-10.0.0.1:57394.service: Deactivated successfully. Oct 29 00:43:39.702134 systemd[1]: session-3.scope: Deactivated successfully. Oct 29 00:43:39.702828 systemd-logind[1578]: Session 3 logged out. Waiting for processes to exit. Oct 29 00:43:39.705509 systemd[1]: Started sshd@3-10.0.0.95:22-10.0.0.1:57406.service - OpenSSH per-connection server daemon (10.0.0.1:57406). Oct 29 00:43:39.706130 systemd-logind[1578]: Removed session 3. 
Oct 29 00:43:39.762526 sshd[1755]: Accepted publickey for core from 10.0.0.1 port 57406 ssh2: RSA SHA256:s8tPwnTXOeMVzisbNqqCPwj2+lnJNXB3KVszA1vES1U Oct 29 00:43:39.763968 sshd-session[1755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:43:39.768339 systemd-logind[1578]: New session 4 of user core. Oct 29 00:43:39.774697 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 29 00:43:39.827867 sshd[1758]: Connection closed by 10.0.0.1 port 57406 Oct 29 00:43:39.828217 sshd-session[1755]: pam_unix(sshd:session): session closed for user core Oct 29 00:43:39.835883 systemd[1]: sshd@3-10.0.0.95:22-10.0.0.1:57406.service: Deactivated successfully. Oct 29 00:43:39.837687 systemd[1]: session-4.scope: Deactivated successfully. Oct 29 00:43:39.838384 systemd-logind[1578]: Session 4 logged out. Waiting for processes to exit. Oct 29 00:43:39.841073 systemd[1]: Started sshd@4-10.0.0.95:22-10.0.0.1:57408.service - OpenSSH per-connection server daemon (10.0.0.1:57408). Oct 29 00:43:39.841717 systemd-logind[1578]: Removed session 4. Oct 29 00:43:39.898132 sshd[1764]: Accepted publickey for core from 10.0.0.1 port 57408 ssh2: RSA SHA256:s8tPwnTXOeMVzisbNqqCPwj2+lnJNXB3KVszA1vES1U Oct 29 00:43:39.899458 sshd-session[1764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:43:39.903898 systemd-logind[1578]: New session 5 of user core. Oct 29 00:43:39.917702 systemd[1]: Started session-5.scope - Session 5 of User core. 
Oct 29 00:43:39.978831 sudo[1768]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 29 00:43:39.979151 sudo[1768]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 29 00:43:40.002326 sudo[1768]: pam_unix(sudo:session): session closed for user root Oct 29 00:43:40.004125 sshd[1767]: Connection closed by 10.0.0.1 port 57408 Oct 29 00:43:40.004543 sshd-session[1764]: pam_unix(sshd:session): session closed for user core Oct 29 00:43:40.017420 systemd[1]: sshd@4-10.0.0.95:22-10.0.0.1:57408.service: Deactivated successfully. Oct 29 00:43:40.019336 systemd[1]: session-5.scope: Deactivated successfully. Oct 29 00:43:40.020127 systemd-logind[1578]: Session 5 logged out. Waiting for processes to exit. Oct 29 00:43:40.023043 systemd[1]: Started sshd@5-10.0.0.95:22-10.0.0.1:34468.service - OpenSSH per-connection server daemon (10.0.0.1:34468). Oct 29 00:43:40.023615 systemd-logind[1578]: Removed session 5. Oct 29 00:43:40.075221 sshd[1774]: Accepted publickey for core from 10.0.0.1 port 34468 ssh2: RSA SHA256:s8tPwnTXOeMVzisbNqqCPwj2+lnJNXB3KVszA1vES1U Oct 29 00:43:40.076692 sshd-session[1774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:43:40.081113 systemd-logind[1578]: New session 6 of user core. Oct 29 00:43:40.090718 systemd[1]: Started session-6.scope - Session 6 of User core. 
Oct 29 00:43:40.145218 sudo[1780]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 29 00:43:40.145514 sudo[1780]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 29 00:43:40.153166 sudo[1780]: pam_unix(sudo:session): session closed for user root Oct 29 00:43:40.160201 sudo[1779]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Oct 29 00:43:40.160622 sudo[1779]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 29 00:43:40.170404 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 29 00:43:40.222640 augenrules[1802]: No rules Oct 29 00:43:40.224390 systemd[1]: audit-rules.service: Deactivated successfully. Oct 29 00:43:40.224705 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 29 00:43:40.225824 sudo[1779]: pam_unix(sudo:session): session closed for user root Oct 29 00:43:40.227491 sshd[1778]: Connection closed by 10.0.0.1 port 34468 Oct 29 00:43:40.227820 sshd-session[1774]: pam_unix(sshd:session): session closed for user core Oct 29 00:43:40.245308 systemd[1]: sshd@5-10.0.0.95:22-10.0.0.1:34468.service: Deactivated successfully. Oct 29 00:43:40.247087 systemd[1]: session-6.scope: Deactivated successfully. Oct 29 00:43:40.247889 systemd-logind[1578]: Session 6 logged out. Waiting for processes to exit. Oct 29 00:43:40.250679 systemd[1]: Started sshd@6-10.0.0.95:22-10.0.0.1:34482.service - OpenSSH per-connection server daemon (10.0.0.1:34482). Oct 29 00:43:40.251245 systemd-logind[1578]: Removed session 6. Oct 29 00:43:40.316984 sshd[1811]: Accepted publickey for core from 10.0.0.1 port 34482 ssh2: RSA SHA256:s8tPwnTXOeMVzisbNqqCPwj2+lnJNXB3KVszA1vES1U Oct 29 00:43:40.318298 sshd-session[1811]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:43:40.322802 systemd-logind[1578]: New session 7 of user core. 
Oct 29 00:43:40.332714 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 29 00:43:40.386568 sudo[1815]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 29 00:43:40.386910 sudo[1815]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 29 00:43:40.759948 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 29 00:43:40.772881 (dockerd)[1835]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 29 00:43:41.038741 dockerd[1835]: time="2025-10-29T00:43:41.038611636Z" level=info msg="Starting up" Oct 29 00:43:41.039469 dockerd[1835]: time="2025-10-29T00:43:41.039425612Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Oct 29 00:43:41.051691 dockerd[1835]: time="2025-10-29T00:43:41.051653955Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Oct 29 00:43:41.703649 dockerd[1835]: time="2025-10-29T00:43:41.703553659Z" level=info msg="Loading containers: start." Oct 29 00:43:41.714605 kernel: Initializing XFRM netlink socket Oct 29 00:43:41.991938 systemd-networkd[1502]: docker0: Link UP Oct 29 00:43:41.997609 dockerd[1835]: time="2025-10-29T00:43:41.997540620Z" level=info msg="Loading containers: done." Oct 29 00:43:42.011766 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck42582622-merged.mount: Deactivated successfully. 
Oct 29 00:43:42.014590 dockerd[1835]: time="2025-10-29T00:43:42.014527977Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 29 00:43:42.014716 dockerd[1835]: time="2025-10-29T00:43:42.014659544Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Oct 29 00:43:42.014787 dockerd[1835]: time="2025-10-29T00:43:42.014761395Z" level=info msg="Initializing buildkit" Oct 29 00:43:42.044688 dockerd[1835]: time="2025-10-29T00:43:42.044643775Z" level=info msg="Completed buildkit initialization" Oct 29 00:43:42.050767 dockerd[1835]: time="2025-10-29T00:43:42.050723406Z" level=info msg="Daemon has completed initialization" Oct 29 00:43:42.050854 dockerd[1835]: time="2025-10-29T00:43:42.050776045Z" level=info msg="API listen on /run/docker.sock" Oct 29 00:43:42.050955 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 29 00:43:43.502879 containerd[1613]: time="2025-10-29T00:43:43.502835044Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\"" Oct 29 00:43:44.870330 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount349212575.mount: Deactivated successfully. 
Oct 29 00:43:46.237416 containerd[1613]: time="2025-10-29T00:43:46.237341840Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:43:46.238146 containerd[1613]: time="2025-10-29T00:43:46.238099852Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=27065392" Oct 29 00:43:46.242345 containerd[1613]: time="2025-10-29T00:43:46.242304778Z" level=info msg="ImageCreate event name:\"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:43:46.245073 containerd[1613]: time="2025-10-29T00:43:46.245020441Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:43:46.247415 containerd[1613]: time="2025-10-29T00:43:46.247036742Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"27061991\" in 2.744157445s" Oct 29 00:43:46.247415 containerd[1613]: time="2025-10-29T00:43:46.247080845Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\"" Oct 29 00:43:46.247810 containerd[1613]: time="2025-10-29T00:43:46.247756993Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\"" Oct 29 00:43:47.731829 containerd[1613]: time="2025-10-29T00:43:47.731756431Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:43:47.732449 containerd[1613]: time="2025-10-29T00:43:47.732402964Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.1: active requests=0, bytes read=21159757" Oct 29 00:43:47.733635 containerd[1613]: time="2025-10-29T00:43:47.733604016Z" level=info msg="ImageCreate event name:\"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:43:47.736082 containerd[1613]: time="2025-10-29T00:43:47.736051145Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:43:47.737032 containerd[1613]: time="2025-10-29T00:43:47.737005014Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"22820214\" in 1.48920448s" Oct 29 00:43:47.737088 containerd[1613]: time="2025-10-29T00:43:47.737038156Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\"" Oct 29 00:43:47.737616 containerd[1613]: time="2025-10-29T00:43:47.737563121Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\"" Oct 29 00:43:47.816026 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 29 00:43:47.817865 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 29 00:43:48.079868 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 29 00:43:48.084184 (kubelet)[2125]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 29 00:43:48.305864 kubelet[2125]: E1029 00:43:48.305803 2125 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 29 00:43:48.312563 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 29 00:43:48.312798 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 29 00:43:48.313200 systemd[1]: kubelet.service: Consumed 276ms CPU time, 110.6M memory peak.
Oct 29 00:43:49.627232 containerd[1613]: time="2025-10-29T00:43:49.627166321Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 29 00:43:49.627978 containerd[1613]: time="2025-10-29T00:43:49.627940824Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=15725093"
Oct 29 00:43:49.629116 containerd[1613]: time="2025-10-29T00:43:49.629070993Z" level=info msg="ImageCreate event name:\"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 29 00:43:49.633208 containerd[1613]: time="2025-10-29T00:43:49.631836199Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 29 00:43:49.633791 containerd[1613]: time="2025-10-29T00:43:49.633752884Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"17385568\" in 1.896140821s"
Oct 29 00:43:49.634035 containerd[1613]: time="2025-10-29T00:43:49.633871726Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\""
Oct 29 00:43:49.634655 containerd[1613]: time="2025-10-29T00:43:49.634423641Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\""
Oct 29 00:43:50.812685 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4069301364.mount: Deactivated successfully.
Oct 29 00:43:51.218163 containerd[1613]: time="2025-10-29T00:43:51.218002006Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 29 00:43:51.218965 containerd[1613]: time="2025-10-29T00:43:51.218912494Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=25964699"
Oct 29 00:43:51.220259 containerd[1613]: time="2025-10-29T00:43:51.220221318Z" level=info msg="ImageCreate event name:\"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 29 00:43:51.222173 containerd[1613]: time="2025-10-29T00:43:51.222131070Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 29 00:43:51.222676 containerd[1613]: time="2025-10-29T00:43:51.222636387Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"25963718\" in 1.588175125s"
Oct 29 00:43:51.222710 containerd[1613]: time="2025-10-29T00:43:51.222676062Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\""
Oct 29 00:43:51.223212 containerd[1613]: time="2025-10-29T00:43:51.223177211Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Oct 29 00:43:51.715436 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount278755523.mount: Deactivated successfully.
Oct 29 00:43:52.868081 containerd[1613]: time="2025-10-29T00:43:52.868007135Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 29 00:43:52.868839 containerd[1613]: time="2025-10-29T00:43:52.868815491Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007"
Oct 29 00:43:52.870196 containerd[1613]: time="2025-10-29T00:43:52.870139033Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 29 00:43:52.872833 containerd[1613]: time="2025-10-29T00:43:52.872765098Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 29 00:43:52.873986 containerd[1613]: time="2025-10-29T00:43:52.873935663Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.650729857s"
Oct 29 00:43:52.873986 containerd[1613]: time="2025-10-29T00:43:52.873964517Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\""
Oct 29 00:43:52.874710 containerd[1613]: time="2025-10-29T00:43:52.874388572Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Oct 29 00:43:53.654256 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount748111376.mount: Deactivated successfully.
Oct 29 00:43:53.659973 containerd[1613]: time="2025-10-29T00:43:53.659933730Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 29 00:43:53.660679 containerd[1613]: time="2025-10-29T00:43:53.660643932Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218"
Oct 29 00:43:53.661775 containerd[1613]: time="2025-10-29T00:43:53.661749435Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 29 00:43:53.663784 containerd[1613]: time="2025-10-29T00:43:53.663737403Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 29 00:43:53.664305 containerd[1613]: time="2025-10-29T00:43:53.664266485Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 789.84994ms"
Oct 29 00:43:53.664305 containerd[1613]: time="2025-10-29T00:43:53.664301842Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Oct 29 00:43:53.664861 containerd[1613]: time="2025-10-29T00:43:53.664815845Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\""
Oct 29 00:43:56.628910 containerd[1613]: time="2025-10-29T00:43:56.628848064Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 29 00:43:56.629796 containerd[1613]: time="2025-10-29T00:43:56.629764883Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=73514593"
Oct 29 00:43:56.631036 containerd[1613]: time="2025-10-29T00:43:56.630977728Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 29 00:43:56.633809 containerd[1613]: time="2025-10-29T00:43:56.633778550Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 29 00:43:56.634763 containerd[1613]: time="2025-10-29T00:43:56.634706751Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 2.969859026s"
Oct 29 00:43:56.634763 containerd[1613]: time="2025-10-29T00:43:56.634748920Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\""
Oct 29 00:43:58.539204 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Oct 29 00:43:58.541268 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 29 00:43:58.745049 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 29 00:43:58.758835 (kubelet)[2275]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 29 00:43:58.800607 kubelet[2275]: E1029 00:43:58.800441 2275 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 29 00:43:58.804685 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 29 00:43:58.804897 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 29 00:43:58.805293 systemd[1]: kubelet.service: Consumed 212ms CPU time, 110.5M memory peak.
Oct 29 00:44:00.022322 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 29 00:44:00.022494 systemd[1]: kubelet.service: Consumed 212ms CPU time, 110.5M memory peak.
Oct 29 00:44:00.024744 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 29 00:44:00.052309 systemd[1]: Reload requested from client PID 2291 ('systemctl') (unit session-7.scope)...
Oct 29 00:44:00.052323 systemd[1]: Reloading...
Oct 29 00:44:00.134607 zram_generator::config[2334]: No configuration found.
Oct 29 00:44:00.842141 systemd[1]: Reloading finished in 789 ms.
Oct 29 00:44:00.923799 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Oct 29 00:44:00.923925 systemd[1]: kubelet.service: Failed with result 'signal'.
Oct 29 00:44:00.924363 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 29 00:44:00.924427 systemd[1]: kubelet.service: Consumed 163ms CPU time, 98.3M memory peak.
Oct 29 00:44:00.926768 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 29 00:44:01.112530 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 29 00:44:01.129884 (kubelet)[2382]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Oct 29 00:44:01.169812 kubelet[2382]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Oct 29 00:44:01.169812 kubelet[2382]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 29 00:44:01.170268 kubelet[2382]: I1029 00:44:01.169870 2382 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 29 00:44:02.527226 kubelet[2382]: I1029 00:44:02.527171 2382 server.go:529] "Kubelet version" kubeletVersion="v1.34.1"
Oct 29 00:44:02.527226 kubelet[2382]: I1029 00:44:02.527198 2382 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 29 00:44:02.527226 kubelet[2382]: I1029 00:44:02.527240 2382 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Oct 29 00:44:02.527695 kubelet[2382]: I1029 00:44:02.527251 2382 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Oct 29 00:44:02.527695 kubelet[2382]: I1029 00:44:02.527439 2382 server.go:956] "Client rotation is on, will bootstrap in background"
Oct 29 00:44:02.532224 kubelet[2382]: E1029 00:44:02.532182 2382 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.95:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.95:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Oct 29 00:44:02.532499 kubelet[2382]: I1029 00:44:02.532470 2382 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 29 00:44:02.536026 kubelet[2382]: I1029 00:44:02.536006 2382 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Oct 29 00:44:02.541152 kubelet[2382]: I1029 00:44:02.541114 2382 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Oct 29 00:44:02.541887 kubelet[2382]: I1029 00:44:02.541849 2382 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 29 00:44:02.542007 kubelet[2382]: I1029 00:44:02.541874 2382 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Oct 29 00:44:02.542007 kubelet[2382]: I1029 00:44:02.542004 2382 topology_manager.go:138] "Creating topology manager with none policy"
Oct 29 00:44:02.542134 kubelet[2382]: I1029 00:44:02.542012 2382 container_manager_linux.go:306] "Creating device plugin manager"
Oct 29 00:44:02.542134 kubelet[2382]: I1029 00:44:02.542108 2382 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Oct 29 00:44:02.545332 kubelet[2382]: I1029 00:44:02.545289 2382 state_mem.go:36] "Initialized new in-memory state store"
Oct 29 00:44:02.545511 kubelet[2382]: I1029 00:44:02.545485 2382 kubelet.go:475] "Attempting to sync node with API server"
Oct 29 00:44:02.545609 kubelet[2382]: I1029 00:44:02.545585 2382 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 29 00:44:02.545710 kubelet[2382]: I1029 00:44:02.545694 2382 kubelet.go:387] "Adding apiserver pod source"
Oct 29 00:44:02.545772 kubelet[2382]: I1029 00:44:02.545750 2382 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 29 00:44:02.546770 kubelet[2382]: E1029 00:44:02.546731 2382 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.95:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.95:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Oct 29 00:44:02.546832 kubelet[2382]: E1029 00:44:02.546755 2382 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.95:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.95:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Oct 29 00:44:02.551004 kubelet[2382]: I1029 00:44:02.550939 2382 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Oct 29 00:44:02.551760 kubelet[2382]: I1029 00:44:02.551728 2382 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Oct 29 00:44:02.551821 kubelet[2382]: I1029 00:44:02.551768 2382 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Oct 29 00:44:02.551854 kubelet[2382]: W1029 00:44:02.551841 2382 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Oct 29 00:44:02.555974 kubelet[2382]: I1029 00:44:02.555940 2382 server.go:1262] "Started kubelet"
Oct 29 00:44:02.556277 kubelet[2382]: I1029 00:44:02.556241 2382 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 29 00:44:02.556383 kubelet[2382]: I1029 00:44:02.556360 2382 server_v1.go:49] "podresources" method="list" useActivePods=true
Oct 29 00:44:02.556823 kubelet[2382]: I1029 00:44:02.556797 2382 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 29 00:44:02.556977 kubelet[2382]: I1029 00:44:02.556948 2382 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Oct 29 00:44:02.557108 kubelet[2382]: I1029 00:44:02.557091 2382 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 29 00:44:02.558981 kubelet[2382]: I1029 00:44:02.558947 2382 server.go:310] "Adding debug handlers to kubelet server"
Oct 29 00:44:02.559899 kubelet[2382]: I1029 00:44:02.559866 2382 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Oct 29 00:44:02.561915 kubelet[2382]: E1029 00:44:02.561878 2382 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 29 00:44:02.563337 kubelet[2382]: I1029 00:44:02.562014 2382 volume_manager.go:313] "Starting Kubelet Volume Manager"
Oct 29 00:44:02.563337 kubelet[2382]: I1029 00:44:02.562172 2382 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Oct 29 00:44:02.563337 kubelet[2382]: I1029 00:44:02.562228 2382 reconciler.go:29] "Reconciler: start to sync state"
Oct 29 00:44:02.563337 kubelet[2382]: E1029 00:44:02.562610 2382 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.95:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.95:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Oct 29 00:44:02.563337 kubelet[2382]: I1029 00:44:02.563097 2382 factory.go:223] Registration of the systemd container factory successfully
Oct 29 00:44:02.563337 kubelet[2382]: I1029 00:44:02.563175 2382 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Oct 29 00:44:02.564601 kubelet[2382]: E1029 00:44:02.562381 2382 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.95:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.95:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1872cf94ded4c805 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-29 00:44:02.555881477 +0000 UTC m=+1.422483024,LastTimestamp:2025-10-29 00:44:02.555881477 +0000 UTC m=+1.422483024,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Oct 29 00:44:02.564601 kubelet[2382]: E1029 00:44:02.563698 2382 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Oct 29 00:44:02.564601 kubelet[2382]: E1029 00:44:02.564075 2382 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.95:6443: connect: connection refused" interval="200ms"
Oct 29 00:44:02.564938 kubelet[2382]: I1029 00:44:02.564920 2382 factory.go:223] Registration of the containerd container factory successfully
Oct 29 00:44:02.565482 kubelet[2382]: I1029 00:44:02.565444 2382 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Oct 29 00:44:02.584399 kubelet[2382]: I1029 00:44:02.584134 2382 cpu_manager.go:221] "Starting CPU manager" policy="none"
Oct 29 00:44:02.584399 kubelet[2382]: I1029 00:44:02.584152 2382 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Oct 29 00:44:02.584399 kubelet[2382]: I1029 00:44:02.584166 2382 state_mem.go:36] "Initialized new in-memory state store"
Oct 29 00:44:02.586488 kubelet[2382]: I1029 00:44:02.586454 2382 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Oct 29 00:44:02.586488 kubelet[2382]: I1029 00:44:02.586481 2382 status_manager.go:244] "Starting to sync pod status with apiserver"
Oct 29 00:44:02.586588 kubelet[2382]: I1029 00:44:02.586504 2382 kubelet.go:2427] "Starting kubelet main sync loop"
Oct 29 00:44:02.586918 kubelet[2382]: E1029 00:44:02.586786 2382 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Oct 29 00:44:02.587016 kubelet[2382]: E1029 00:44:02.586973 2382 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.95:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.95:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Oct 29 00:44:02.587421 kubelet[2382]: I1029 00:44:02.587398 2382 policy_none.go:49] "None policy: Start"
Oct 29 00:44:02.587653 kubelet[2382]: I1029 00:44:02.587565 2382 memory_manager.go:187] "Starting memorymanager" policy="None"
Oct 29 00:44:02.587777 kubelet[2382]: I1029 00:44:02.587676 2382 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Oct 29 00:44:02.589101 kubelet[2382]: I1029 00:44:02.589082 2382 policy_none.go:47] "Start"
Oct 29 00:44:02.593834 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Oct 29 00:44:02.621133 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Oct 29 00:44:02.624797 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Oct 29 00:44:02.636590 kubelet[2382]: E1029 00:44:02.636544 2382 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Oct 29 00:44:02.636860 kubelet[2382]: I1029 00:44:02.636844 2382 eviction_manager.go:189] "Eviction manager: starting control loop"
Oct 29 00:44:02.636915 kubelet[2382]: I1029 00:44:02.636861 2382 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Oct 29 00:44:02.637126 kubelet[2382]: I1029 00:44:02.637096 2382 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Oct 29 00:44:02.638281 kubelet[2382]: E1029 00:44:02.638208 2382 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Oct 29 00:44:02.638281 kubelet[2382]: E1029 00:44:02.638263 2382 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Oct 29 00:44:02.700194 systemd[1]: Created slice kubepods-burstable-podd44373dc1e26e451a9c2a49f42451751.slice - libcontainer container kubepods-burstable-podd44373dc1e26e451a9c2a49f42451751.slice.
Oct 29 00:44:02.711403 kubelet[2382]: E1029 00:44:02.711356 2382 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Oct 29 00:44:02.714388 systemd[1]: Created slice kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice - libcontainer container kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice.
Oct 29 00:44:02.722821 kubelet[2382]: E1029 00:44:02.722789 2382 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Oct 29 00:44:02.725721 systemd[1]: Created slice kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice - libcontainer container kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice.
Oct 29 00:44:02.727674 kubelet[2382]: E1029 00:44:02.727655 2382 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Oct 29 00:44:02.738739 kubelet[2382]: I1029 00:44:02.738717 2382 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Oct 29 00:44:02.739085 kubelet[2382]: E1029 00:44:02.739054 2382 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.95:6443/api/v1/nodes\": dial tcp 10.0.0.95:6443: connect: connection refused" node="localhost"
Oct 29 00:44:02.764629 kubelet[2382]: E1029 00:44:02.764588 2382 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.95:6443: connect: connection refused" interval="400ms"
Oct 29 00:44:02.863994 kubelet[2382]: I1029 00:44:02.863961 2382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d44373dc1e26e451a9c2a49f42451751-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d44373dc1e26e451a9c2a49f42451751\") " pod="kube-system/kube-apiserver-localhost"
Oct 29 00:44:02.863994 kubelet[2382]: I1029 00:44:02.863992 2382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d44373dc1e26e451a9c2a49f42451751-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d44373dc1e26e451a9c2a49f42451751\") " pod="kube-system/kube-apiserver-localhost"
Oct 29 00:44:02.864149 kubelet[2382]: I1029 00:44:02.864018 2382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost"
Oct 29 00:44:02.864149 kubelet[2382]: I1029 00:44:02.864031 2382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost"
Oct 29 00:44:02.864149 kubelet[2382]: I1029 00:44:02.864045 2382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost"
Oct 29 00:44:02.864149 kubelet[2382]: I1029 00:44:02.864065 2382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost"
Oct 29 00:44:02.864149 kubelet[2382]: I1029 00:44:02.864083 2382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d44373dc1e26e451a9c2a49f42451751-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d44373dc1e26e451a9c2a49f42451751\") " pod="kube-system/kube-apiserver-localhost"
Oct 29 00:44:02.864301 kubelet[2382]: I1029 00:44:02.864098 2382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost"
Oct 29 00:44:02.864301 kubelet[2382]: I1029 00:44:02.864126 2382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost"
Oct 29 00:44:02.941369 kubelet[2382]: I1029 00:44:02.941340 2382 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Oct 29 00:44:02.941766 kubelet[2382]: E1029 00:44:02.941736 2382 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.95:6443/api/v1/nodes\": dial tcp 10.0.0.95:6443: connect: connection refused" node="localhost"
Oct 29 00:44:03.015551 kubelet[2382]: E1029 00:44:03.015476 2382 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 29 00:44:03.016433 containerd[1613]: time="2025-10-29T00:44:03.016383302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d44373dc1e26e451a9c2a49f42451751,Namespace:kube-system,Attempt:0,}"
Oct 29 00:44:03.026108 kubelet[2382]: E1029 00:44:03.026055 2382 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 29 00:44:03.026608 containerd[1613]: time="2025-10-29T00:44:03.026554577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,}"
Oct 29 00:44:03.031335 kubelet[2382]: E1029 00:44:03.031293 2382 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 29 00:44:03.031796 containerd[1613]: time="2025-10-29T00:44:03.031763857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,}"
Oct 29 00:44:03.165879 kubelet[2382]: E1029 00:44:03.165768 2382 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.95:6443: connect: connection refused" interval="800ms"
Oct 29 00:44:03.343776 kubelet[2382]: I1029 00:44:03.343743 2382 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Oct 29 00:44:03.344073 kubelet[2382]: E1029 00:44:03.344046 2382 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.95:6443/api/v1/nodes\": dial tcp 10.0.0.95:6443: connect: connection refused" node="localhost"
Oct 29 00:44:03.542246 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1482084055.mount: Deactivated successfully.
Oct 29 00:44:03.548200 containerd[1613]: time="2025-10-29T00:44:03.548147540Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 29 00:44:03.550172 containerd[1613]: time="2025-10-29T00:44:03.550111764Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Oct 29 00:44:03.553069 containerd[1613]: time="2025-10-29T00:44:03.553027883Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 29 00:44:03.553860 containerd[1613]: time="2025-10-29T00:44:03.553835086Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 29 00:44:03.555629 containerd[1613]: time="2025-10-29T00:44:03.555530506Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Oct 29 00:44:03.556465 containerd[1613]: time="2025-10-29T00:44:03.556425575Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 29 00:44:03.557281 containerd[1613]: time="2025-10-29T00:44:03.557237437Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Oct 29 00:44:03.558202 containerd[1613]: time="2025-10-29T00:44:03.558154607Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 29 
00:44:03.558845 containerd[1613]: time="2025-10-29T00:44:03.558807812Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 539.498533ms" Oct 29 00:44:03.561052 containerd[1613]: time="2025-10-29T00:44:03.561023758Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 527.032344ms" Oct 29 00:44:03.563514 containerd[1613]: time="2025-10-29T00:44:03.563470677Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 534.577564ms" Oct 29 00:44:03.598373 containerd[1613]: time="2025-10-29T00:44:03.598305094Z" level=info msg="connecting to shim 8a04b2c6947c59b2703fe41e81f011d837aa0452060492c58ca2294820fc1c1e" address="unix:///run/containerd/s/41088379b903b8925206080f6f17f8e1874a8cd26ccbedf5dc1c101f4720586a" namespace=k8s.io protocol=ttrpc version=3 Oct 29 00:44:03.599053 containerd[1613]: time="2025-10-29T00:44:03.599013532Z" level=info msg="connecting to shim 8a45a94b665afb9f62722e1717e2acd53f04f67a380ae49a6fbf0161651ee67d" address="unix:///run/containerd/s/105e93b68ee9fd07991a4e7fad9a63d8901e45462d80b3d2dfddbe1b45786c69" namespace=k8s.io protocol=ttrpc version=3 Oct 29 00:44:03.604601 containerd[1613]: time="2025-10-29T00:44:03.604539235Z" level=info msg="connecting to shim 
206a21e74dfb507fc88432ba55113f63ee91b524f36952afe0bd4d1261ba0752" address="unix:///run/containerd/s/7f359eeb24e59ca4b7725375ab50d9c704440c5ed658fc25936d989c21ad9abb" namespace=k8s.io protocol=ttrpc version=3 Oct 29 00:44:03.638763 systemd[1]: Started cri-containerd-206a21e74dfb507fc88432ba55113f63ee91b524f36952afe0bd4d1261ba0752.scope - libcontainer container 206a21e74dfb507fc88432ba55113f63ee91b524f36952afe0bd4d1261ba0752. Oct 29 00:44:03.671595 kubelet[2382]: E1029 00:44:03.671528 2382 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.95:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.95:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 29 00:44:03.726524 systemd[1]: Started cri-containerd-8a04b2c6947c59b2703fe41e81f011d837aa0452060492c58ca2294820fc1c1e.scope - libcontainer container 8a04b2c6947c59b2703fe41e81f011d837aa0452060492c58ca2294820fc1c1e. Oct 29 00:44:03.729753 kubelet[2382]: E1029 00:44:03.729725 2382 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.95:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.95:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 29 00:44:03.732628 systemd[1]: Started cri-containerd-8a45a94b665afb9f62722e1717e2acd53f04f67a380ae49a6fbf0161651ee67d.scope - libcontainer container 8a45a94b665afb9f62722e1717e2acd53f04f67a380ae49a6fbf0161651ee67d. 
Oct 29 00:44:03.736148 kubelet[2382]: E1029 00:44:03.736116 2382 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.95:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.95:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 29 00:44:03.781862 containerd[1613]: time="2025-10-29T00:44:03.781815461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"206a21e74dfb507fc88432ba55113f63ee91b524f36952afe0bd4d1261ba0752\"" Oct 29 00:44:03.782684 kubelet[2382]: E1029 00:44:03.782657 2382 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:44:03.791547 containerd[1613]: time="2025-10-29T00:44:03.791491016Z" level=info msg="CreateContainer within sandbox \"206a21e74dfb507fc88432ba55113f63ee91b524f36952afe0bd4d1261ba0752\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 29 00:44:03.796650 containerd[1613]: time="2025-10-29T00:44:03.796445427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a04b2c6947c59b2703fe41e81f011d837aa0452060492c58ca2294820fc1c1e\"" Oct 29 00:44:03.797146 kubelet[2382]: E1029 00:44:03.797123 2382 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:44:03.799119 containerd[1613]: time="2025-10-29T00:44:03.799081351Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d44373dc1e26e451a9c2a49f42451751,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a45a94b665afb9f62722e1717e2acd53f04f67a380ae49a6fbf0161651ee67d\"" Oct 29 00:44:03.799657 kubelet[2382]: E1029 00:44:03.799636 2382 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:44:03.801234 containerd[1613]: time="2025-10-29T00:44:03.801205775Z" level=info msg="CreateContainer within sandbox \"8a04b2c6947c59b2703fe41e81f011d837aa0452060492c58ca2294820fc1c1e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 29 00:44:03.804434 containerd[1613]: time="2025-10-29T00:44:03.804396308Z" level=info msg="CreateContainer within sandbox \"8a45a94b665afb9f62722e1717e2acd53f04f67a380ae49a6fbf0161651ee67d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 29 00:44:03.807258 containerd[1613]: time="2025-10-29T00:44:03.807213261Z" level=info msg="Container 3aa30ef5e0fbb2590b945643d1d2a403e981baf4b15619cccb405e855f9274cc: CDI devices from CRI Config.CDIDevices: []" Oct 29 00:44:03.811683 containerd[1613]: time="2025-10-29T00:44:03.811649741Z" level=info msg="Container 144b3213871beeac6700bafd5046899e1fa87606873662186b051290168ea954: CDI devices from CRI Config.CDIDevices: []" Oct 29 00:44:03.818462 containerd[1613]: time="2025-10-29T00:44:03.818424276Z" level=info msg="CreateContainer within sandbox \"206a21e74dfb507fc88432ba55113f63ee91b524f36952afe0bd4d1261ba0752\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3aa30ef5e0fbb2590b945643d1d2a403e981baf4b15619cccb405e855f9274cc\"" Oct 29 00:44:03.818952 containerd[1613]: time="2025-10-29T00:44:03.818908373Z" level=info msg="StartContainer for \"3aa30ef5e0fbb2590b945643d1d2a403e981baf4b15619cccb405e855f9274cc\"" Oct 29 00:44:03.819921 containerd[1613]: 
time="2025-10-29T00:44:03.819891928Z" level=info msg="connecting to shim 3aa30ef5e0fbb2590b945643d1d2a403e981baf4b15619cccb405e855f9274cc" address="unix:///run/containerd/s/7f359eeb24e59ca4b7725375ab50d9c704440c5ed658fc25936d989c21ad9abb" protocol=ttrpc version=3 Oct 29 00:44:03.821225 containerd[1613]: time="2025-10-29T00:44:03.821200161Z" level=info msg="Container 6365469566d907f830ae5b90910734f0ce5123cb49cef9f1038f27f1e7818583: CDI devices from CRI Config.CDIDevices: []" Oct 29 00:44:03.823848 containerd[1613]: time="2025-10-29T00:44:03.823814715Z" level=info msg="CreateContainer within sandbox \"8a04b2c6947c59b2703fe41e81f011d837aa0452060492c58ca2294820fc1c1e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"144b3213871beeac6700bafd5046899e1fa87606873662186b051290168ea954\"" Oct 29 00:44:03.824235 containerd[1613]: time="2025-10-29T00:44:03.824209816Z" level=info msg="StartContainer for \"144b3213871beeac6700bafd5046899e1fa87606873662186b051290168ea954\"" Oct 29 00:44:03.825278 containerd[1613]: time="2025-10-29T00:44:03.825258252Z" level=info msg="connecting to shim 144b3213871beeac6700bafd5046899e1fa87606873662186b051290168ea954" address="unix:///run/containerd/s/41088379b903b8925206080f6f17f8e1874a8cd26ccbedf5dc1c101f4720586a" protocol=ttrpc version=3 Oct 29 00:44:03.827726 containerd[1613]: time="2025-10-29T00:44:03.827688219Z" level=info msg="CreateContainer within sandbox \"8a45a94b665afb9f62722e1717e2acd53f04f67a380ae49a6fbf0161651ee67d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6365469566d907f830ae5b90910734f0ce5123cb49cef9f1038f27f1e7818583\"" Oct 29 00:44:03.828171 containerd[1613]: time="2025-10-29T00:44:03.828144845Z" level=info msg="StartContainer for \"6365469566d907f830ae5b90910734f0ce5123cb49cef9f1038f27f1e7818583\"" Oct 29 00:44:03.829661 containerd[1613]: time="2025-10-29T00:44:03.829602829Z" level=info msg="connecting to shim 
6365469566d907f830ae5b90910734f0ce5123cb49cef9f1038f27f1e7818583" address="unix:///run/containerd/s/105e93b68ee9fd07991a4e7fad9a63d8901e45462d80b3d2dfddbe1b45786c69" protocol=ttrpc version=3 Oct 29 00:44:03.845767 systemd[1]: Started cri-containerd-3aa30ef5e0fbb2590b945643d1d2a403e981baf4b15619cccb405e855f9274cc.scope - libcontainer container 3aa30ef5e0fbb2590b945643d1d2a403e981baf4b15619cccb405e855f9274cc. Oct 29 00:44:03.850832 systemd[1]: Started cri-containerd-144b3213871beeac6700bafd5046899e1fa87606873662186b051290168ea954.scope - libcontainer container 144b3213871beeac6700bafd5046899e1fa87606873662186b051290168ea954. Oct 29 00:44:03.852666 systemd[1]: Started cri-containerd-6365469566d907f830ae5b90910734f0ce5123cb49cef9f1038f27f1e7818583.scope - libcontainer container 6365469566d907f830ae5b90910734f0ce5123cb49cef9f1038f27f1e7818583. Oct 29 00:44:03.909956 kubelet[2382]: E1029 00:44:03.909897 2382 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.95:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.95:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 29 00:44:03.913247 containerd[1613]: time="2025-10-29T00:44:03.913197589Z" level=info msg="StartContainer for \"3aa30ef5e0fbb2590b945643d1d2a403e981baf4b15619cccb405e855f9274cc\" returns successfully" Oct 29 00:44:03.931620 containerd[1613]: time="2025-10-29T00:44:03.931295990Z" level=info msg="StartContainer for \"144b3213871beeac6700bafd5046899e1fa87606873662186b051290168ea954\" returns successfully" Oct 29 00:44:03.934070 containerd[1613]: time="2025-10-29T00:44:03.934013687Z" level=info msg="StartContainer for \"6365469566d907f830ae5b90910734f0ce5123cb49cef9f1038f27f1e7818583\" returns successfully" Oct 29 00:44:03.966549 kubelet[2382]: E1029 00:44:03.966471 2382 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.95:6443: connect: connection refused" interval="1.6s" Oct 29 00:44:04.146380 kubelet[2382]: I1029 00:44:04.146238 2382 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 29 00:44:04.597845 kubelet[2382]: E1029 00:44:04.597800 2382 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 00:44:04.598671 kubelet[2382]: E1029 00:44:04.598617 2382 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:44:04.600140 kubelet[2382]: E1029 00:44:04.600125 2382 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 00:44:04.601593 kubelet[2382]: E1029 00:44:04.600311 2382 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:44:04.604196 kubelet[2382]: E1029 00:44:04.604170 2382 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 00:44:04.604506 kubelet[2382]: E1029 00:44:04.604492 2382 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:44:05.523167 kubelet[2382]: I1029 00:44:05.523100 2382 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 29 00:44:05.548287 kubelet[2382]: I1029 00:44:05.548239 2382 apiserver.go:52] "Watching apiserver" Oct 29 00:44:05.562449 kubelet[2382]: I1029 00:44:05.562399 2382 
desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 29 00:44:05.564613 kubelet[2382]: I1029 00:44:05.564561 2382 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 29 00:44:05.568237 kubelet[2382]: E1029 00:44:05.568201 2382 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Oct 29 00:44:05.568237 kubelet[2382]: I1029 00:44:05.568230 2382 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 29 00:44:05.569747 kubelet[2382]: E1029 00:44:05.569539 2382 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Oct 29 00:44:05.569747 kubelet[2382]: I1029 00:44:05.569561 2382 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 29 00:44:05.570779 kubelet[2382]: E1029 00:44:05.570749 2382 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Oct 29 00:44:05.604517 kubelet[2382]: I1029 00:44:05.604488 2382 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 29 00:44:05.604717 kubelet[2382]: I1029 00:44:05.604698 2382 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 29 00:44:05.605659 kubelet[2382]: E1029 00:44:05.605634 2382 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-scheduler-localhost" Oct 29 00:44:05.605801 kubelet[2382]: E1029 00:44:05.605782 2382 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:44:05.605892 kubelet[2382]: E1029 00:44:05.605870 2382 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Oct 29 00:44:05.606039 kubelet[2382]: E1029 00:44:05.606020 2382 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:44:06.605451 kubelet[2382]: I1029 00:44:06.605411 2382 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 29 00:44:06.610038 kubelet[2382]: E1029 00:44:06.610005 2382 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:44:07.562795 systemd[1]: Reload requested from client PID 2672 ('systemctl') (unit session-7.scope)... Oct 29 00:44:07.562810 systemd[1]: Reloading... Oct 29 00:44:07.607702 kubelet[2382]: E1029 00:44:07.607669 2382 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:44:07.656612 zram_generator::config[2719]: No configuration found. Oct 29 00:44:07.895161 systemd[1]: Reloading finished in 331 ms. 
Oct 29 00:44:07.925535 kubelet[2382]: I1029 00:44:07.925424 2382 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 29 00:44:07.925640 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 29 00:44:07.952165 systemd[1]: kubelet.service: Deactivated successfully. Oct 29 00:44:07.952493 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 29 00:44:07.952545 systemd[1]: kubelet.service: Consumed 1.026s CPU time, 123.7M memory peak. Oct 29 00:44:07.954420 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 29 00:44:08.173445 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 29 00:44:08.182840 (kubelet)[2761]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 29 00:44:08.226047 kubelet[2761]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 29 00:44:08.226047 kubelet[2761]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 29 00:44:08.226435 kubelet[2761]: I1029 00:44:08.226096 2761 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 29 00:44:08.234131 kubelet[2761]: I1029 00:44:08.233515 2761 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Oct 29 00:44:08.234131 kubelet[2761]: I1029 00:44:08.233541 2761 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 29 00:44:08.234131 kubelet[2761]: I1029 00:44:08.233566 2761 watchdog_linux.go:95] "Systemd watchdog is not enabled" Oct 29 00:44:08.234131 kubelet[2761]: I1029 00:44:08.233595 2761 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 29 00:44:08.234131 kubelet[2761]: I1029 00:44:08.233866 2761 server.go:956] "Client rotation is on, will bootstrap in background" Oct 29 00:44:08.235870 kubelet[2761]: I1029 00:44:08.235848 2761 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Oct 29 00:44:08.237907 kubelet[2761]: I1029 00:44:08.237859 2761 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 29 00:44:08.246615 kubelet[2761]: I1029 00:44:08.246592 2761 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 29 00:44:08.252652 kubelet[2761]: I1029 00:44:08.252624 2761 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Oct 29 00:44:08.252939 kubelet[2761]: I1029 00:44:08.252899 2761 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 29 00:44:08.253086 kubelet[2761]: I1029 00:44:08.252927 2761 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 29 00:44:08.253159 kubelet[2761]: I1029 00:44:08.253090 2761 topology_manager.go:138] "Creating topology manager with none policy" Oct 29 00:44:08.253159 
kubelet[2761]: I1029 00:44:08.253100 2761 container_manager_linux.go:306] "Creating device plugin manager" Oct 29 00:44:08.253159 kubelet[2761]: I1029 00:44:08.253126 2761 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Oct 29 00:44:08.253969 kubelet[2761]: I1029 00:44:08.253948 2761 state_mem.go:36] "Initialized new in-memory state store" Oct 29 00:44:08.254184 kubelet[2761]: I1029 00:44:08.254152 2761 kubelet.go:475] "Attempting to sync node with API server" Oct 29 00:44:08.254184 kubelet[2761]: I1029 00:44:08.254172 2761 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 29 00:44:08.254247 kubelet[2761]: I1029 00:44:08.254203 2761 kubelet.go:387] "Adding apiserver pod source" Oct 29 00:44:08.254280 kubelet[2761]: I1029 00:44:08.254254 2761 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 29 00:44:08.256077 kubelet[2761]: I1029 00:44:08.255897 2761 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 29 00:44:08.257006 kubelet[2761]: I1029 00:44:08.256963 2761 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 29 00:44:08.257116 kubelet[2761]: I1029 00:44:08.257102 2761 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Oct 29 00:44:08.266165 kubelet[2761]: I1029 00:44:08.266046 2761 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 29 00:44:08.266269 kubelet[2761]: I1029 00:44:08.266172 2761 server_v1.go:49] "podresources" method="list" useActivePods=true Oct 29 00:44:08.266419 kubelet[2761]: I1029 00:44:08.266405 2761 server.go:1262] "Started kubelet" Oct 29 00:44:08.266682 kubelet[2761]: I1029 00:44:08.266424 2761 server.go:180] 
"Starting to listen" address="0.0.0.0" port=10250 Oct 29 00:44:08.268446 kubelet[2761]: I1029 00:44:08.267787 2761 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 29 00:44:08.268446 kubelet[2761]: I1029 00:44:08.268401 2761 server.go:310] "Adding debug handlers to kubelet server" Oct 29 00:44:08.269374 kubelet[2761]: I1029 00:44:08.269356 2761 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 29 00:44:08.271661 kubelet[2761]: I1029 00:44:08.271407 2761 volume_manager.go:313] "Starting Kubelet Volume Manager" Oct 29 00:44:08.271661 kubelet[2761]: I1029 00:44:08.271513 2761 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 29 00:44:08.271800 kubelet[2761]: I1029 00:44:08.271769 2761 reconciler.go:29] "Reconciler: start to sync state" Oct 29 00:44:08.272277 kubelet[2761]: I1029 00:44:08.272256 2761 factory.go:223] Registration of the systemd container factory successfully Oct 29 00:44:08.273611 kubelet[2761]: I1029 00:44:08.272338 2761 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 29 00:44:08.273611 kubelet[2761]: I1029 00:44:08.273253 2761 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 29 00:44:08.275482 kubelet[2761]: E1029 00:44:08.275444 2761 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 29 00:44:08.276631 kubelet[2761]: I1029 00:44:08.276605 2761 factory.go:223] Registration of the containerd container factory successfully Oct 29 00:44:08.282663 kubelet[2761]: I1029 00:44:08.282557 2761 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Oct 29 00:44:08.283871 kubelet[2761]: I1029 00:44:08.283843 2761 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Oct 29 00:44:08.283871 kubelet[2761]: I1029 00:44:08.283860 2761 status_manager.go:244] "Starting to sync pod status with apiserver" Oct 29 00:44:08.283934 kubelet[2761]: I1029 00:44:08.283879 2761 kubelet.go:2427] "Starting kubelet main sync loop" Oct 29 00:44:08.283934 kubelet[2761]: E1029 00:44:08.283918 2761 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 29 00:44:08.315037 kubelet[2761]: I1029 00:44:08.314006 2761 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 29 00:44:08.315037 kubelet[2761]: I1029 00:44:08.314026 2761 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 29 00:44:08.315037 kubelet[2761]: I1029 00:44:08.314044 2761 state_mem.go:36] "Initialized new in-memory state store" Oct 29 00:44:08.315037 kubelet[2761]: I1029 00:44:08.314181 2761 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 29 00:44:08.315037 kubelet[2761]: I1029 00:44:08.314191 2761 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 29 00:44:08.315037 kubelet[2761]: I1029 00:44:08.314209 2761 policy_none.go:49] "None policy: Start" Oct 29 00:44:08.315037 kubelet[2761]: I1029 00:44:08.314223 2761 memory_manager.go:187] "Starting memorymanager" policy="None" Oct 29 00:44:08.315037 kubelet[2761]: I1029 00:44:08.314239 2761 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Oct 29 00:44:08.315037 kubelet[2761]: I1029 00:44:08.314336 2761 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Oct 29 00:44:08.315037 kubelet[2761]: I1029 00:44:08.314344 2761 policy_none.go:47] "Start" Oct 29 00:44:08.319106 kubelet[2761]: E1029 00:44:08.319076 2761 manager.go:513] "Failed to read data from 
checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 29 00:44:08.319308 kubelet[2761]: I1029 00:44:08.319285 2761 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 29 00:44:08.319366 kubelet[2761]: I1029 00:44:08.319314 2761 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 29 00:44:08.319602 kubelet[2761]: I1029 00:44:08.319587 2761 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 29 00:44:08.320187 kubelet[2761]: E1029 00:44:08.320171 2761 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 29 00:44:08.385185 kubelet[2761]: I1029 00:44:08.385105 2761 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 29 00:44:08.385559 kubelet[2761]: I1029 00:44:08.385537 2761 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 29 00:44:08.385653 kubelet[2761]: I1029 00:44:08.385565 2761 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 29 00:44:08.391375 kubelet[2761]: E1029 00:44:08.391322 2761 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Oct 29 00:44:08.426972 kubelet[2761]: I1029 00:44:08.425782 2761 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 29 00:44:08.431921 kubelet[2761]: I1029 00:44:08.431869 2761 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Oct 29 00:44:08.431992 kubelet[2761]: I1029 00:44:08.431947 2761 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 29 00:44:08.473251 kubelet[2761]: I1029 00:44:08.473203 2761 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d44373dc1e26e451a9c2a49f42451751-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d44373dc1e26e451a9c2a49f42451751\") " pod="kube-system/kube-apiserver-localhost" Oct 29 00:44:08.473251 kubelet[2761]: I1029 00:44:08.473241 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d44373dc1e26e451a9c2a49f42451751-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d44373dc1e26e451a9c2a49f42451751\") " pod="kube-system/kube-apiserver-localhost" Oct 29 00:44:08.473251 kubelet[2761]: I1029 00:44:08.473273 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 00:44:08.473454 kubelet[2761]: I1029 00:44:08.473288 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 00:44:08.473454 kubelet[2761]: I1029 00:44:08.473302 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 00:44:08.473454 kubelet[2761]: I1029 00:44:08.473316 2761 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d44373dc1e26e451a9c2a49f42451751-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d44373dc1e26e451a9c2a49f42451751\") " pod="kube-system/kube-apiserver-localhost" Oct 29 00:44:08.473454 kubelet[2761]: I1029 00:44:08.473406 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 00:44:08.473454 kubelet[2761]: I1029 00:44:08.473445 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 00:44:08.473618 kubelet[2761]: I1029 00:44:08.473466 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Oct 29 00:44:08.692192 kubelet[2761]: E1029 00:44:08.691783 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:44:08.692192 kubelet[2761]: E1029 00:44:08.691823 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:44:08.692192 kubelet[2761]: E1029 00:44:08.691997 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:44:09.255979 kubelet[2761]: I1029 00:44:09.255927 2761 apiserver.go:52] "Watching apiserver" Oct 29 00:44:09.273470 kubelet[2761]: I1029 00:44:09.272101 2761 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 29 00:44:09.298549 kubelet[2761]: I1029 00:44:09.298504 2761 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 29 00:44:09.298926 kubelet[2761]: I1029 00:44:09.298893 2761 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 29 00:44:09.299199 kubelet[2761]: I1029 00:44:09.299177 2761 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 29 00:44:09.300108 kubelet[2761]: I1029 00:44:09.300056 2761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.300048058 podStartE2EDuration="1.300048058s" podCreationTimestamp="2025-10-29 00:44:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 00:44:09.297779305 +0000 UTC m=+1.111011924" watchObservedRunningTime="2025-10-29 00:44:09.300048058 +0000 UTC m=+1.113280677" Oct 29 00:44:09.309614 kubelet[2761]: E1029 00:44:09.309421 2761 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 29 00:44:09.309614 kubelet[2761]: E1029 00:44:09.309458 2761 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" 
pod="kube-system/kube-scheduler-localhost" Oct 29 00:44:09.309753 kubelet[2761]: E1029 00:44:09.309694 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:44:09.309893 kubelet[2761]: E1029 00:44:09.309860 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:44:09.310304 kubelet[2761]: E1029 00:44:09.310262 2761 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Oct 29 00:44:09.310504 kubelet[2761]: E1029 00:44:09.310481 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:44:09.318598 kubelet[2761]: I1029 00:44:09.318251 2761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.318234139 podStartE2EDuration="1.318234139s" podCreationTimestamp="2025-10-29 00:44:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 00:44:09.318020357 +0000 UTC m=+1.131252976" watchObservedRunningTime="2025-10-29 00:44:09.318234139 +0000 UTC m=+1.131466748" Oct 29 00:44:09.318598 kubelet[2761]: I1029 00:44:09.318502 2761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.318487679 podStartE2EDuration="3.318487679s" podCreationTimestamp="2025-10-29 00:44:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 00:44:09.310216059 +0000 
UTC m=+1.123448678" watchObservedRunningTime="2025-10-29 00:44:09.318487679 +0000 UTC m=+1.131720298" Oct 29 00:44:10.300608 kubelet[2761]: E1029 00:44:10.300371 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:44:10.300608 kubelet[2761]: E1029 00:44:10.300379 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:44:10.300608 kubelet[2761]: E1029 00:44:10.300482 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:44:11.301709 kubelet[2761]: E1029 00:44:11.301658 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:44:11.302156 kubelet[2761]: E1029 00:44:11.301978 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:44:13.471358 kubelet[2761]: I1029 00:44:13.471310 2761 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 29 00:44:13.471874 containerd[1613]: time="2025-10-29T00:44:13.471703271Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Oct 29 00:44:13.472164 kubelet[2761]: I1029 00:44:13.471981 2761 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 29 00:44:13.791934 kubelet[2761]: E1029 00:44:13.791894 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:44:14.306499 kubelet[2761]: E1029 00:44:14.306445 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:44:14.335761 systemd[1]: Created slice kubepods-besteffort-podfdb0508b_9a13_4a2d_9436_dbcc711367cc.slice - libcontainer container kubepods-besteffort-podfdb0508b_9a13_4a2d_9436_dbcc711367cc.slice. Oct 29 00:44:14.410245 kubelet[2761]: I1029 00:44:14.410210 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fdb0508b-9a13-4a2d-9436-dbcc711367cc-kube-proxy\") pod \"kube-proxy-c9tzm\" (UID: \"fdb0508b-9a13-4a2d-9436-dbcc711367cc\") " pod="kube-system/kube-proxy-c9tzm" Oct 29 00:44:14.410540 kubelet[2761]: I1029 00:44:14.410443 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fdb0508b-9a13-4a2d-9436-dbcc711367cc-xtables-lock\") pod \"kube-proxy-c9tzm\" (UID: \"fdb0508b-9a13-4a2d-9436-dbcc711367cc\") " pod="kube-system/kube-proxy-c9tzm" Oct 29 00:44:14.410540 kubelet[2761]: I1029 00:44:14.410467 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fdb0508b-9a13-4a2d-9436-dbcc711367cc-lib-modules\") pod \"kube-proxy-c9tzm\" (UID: \"fdb0508b-9a13-4a2d-9436-dbcc711367cc\") " pod="kube-system/kube-proxy-c9tzm" Oct 29 00:44:14.410540 
kubelet[2761]: I1029 00:44:14.410487 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpwlj\" (UniqueName: \"kubernetes.io/projected/fdb0508b-9a13-4a2d-9436-dbcc711367cc-kube-api-access-bpwlj\") pod \"kube-proxy-c9tzm\" (UID: \"fdb0508b-9a13-4a2d-9436-dbcc711367cc\") " pod="kube-system/kube-proxy-c9tzm" Oct 29 00:44:14.647319 kubelet[2761]: E1029 00:44:14.647156 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:44:14.648098 containerd[1613]: time="2025-10-29T00:44:14.647957475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c9tzm,Uid:fdb0508b-9a13-4a2d-9436-dbcc711367cc,Namespace:kube-system,Attempt:0,}" Oct 29 00:44:14.684835 systemd[1]: Created slice kubepods-besteffort-pod0aef5743_329f_447d_aec5_14b05798246c.slice - libcontainer container kubepods-besteffort-pod0aef5743_329f_447d_aec5_14b05798246c.slice. 
Oct 29 00:44:14.689608 containerd[1613]: time="2025-10-29T00:44:14.689527014Z" level=info msg="connecting to shim a345ffa21681d00fc2fbef34a357a63f485dbb39b636d2c30adfd71a0d8f15c1" address="unix:///run/containerd/s/2303f65aaffa3db665f59ba15763a2ca18fb09053cf11b82a41b391e1c78bddd" namespace=k8s.io protocol=ttrpc version=3 Oct 29 00:44:14.711922 kubelet[2761]: I1029 00:44:14.711856 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0aef5743-329f-447d-aec5-14b05798246c-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-tq7pm\" (UID: \"0aef5743-329f-447d-aec5-14b05798246c\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-tq7pm" Oct 29 00:44:14.712144 kubelet[2761]: I1029 00:44:14.712127 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srswt\" (UniqueName: \"kubernetes.io/projected/0aef5743-329f-447d-aec5-14b05798246c-kube-api-access-srswt\") pod \"tigera-operator-65cdcdfd6d-tq7pm\" (UID: \"0aef5743-329f-447d-aec5-14b05798246c\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-tq7pm" Oct 29 00:44:14.735820 systemd[1]: Started cri-containerd-a345ffa21681d00fc2fbef34a357a63f485dbb39b636d2c30adfd71a0d8f15c1.scope - libcontainer container a345ffa21681d00fc2fbef34a357a63f485dbb39b636d2c30adfd71a0d8f15c1. 
Oct 29 00:44:14.760519 containerd[1613]: time="2025-10-29T00:44:14.760483551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c9tzm,Uid:fdb0508b-9a13-4a2d-9436-dbcc711367cc,Namespace:kube-system,Attempt:0,} returns sandbox id \"a345ffa21681d00fc2fbef34a357a63f485dbb39b636d2c30adfd71a0d8f15c1\"" Oct 29 00:44:14.761309 kubelet[2761]: E1029 00:44:14.761284 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:44:14.774137 containerd[1613]: time="2025-10-29T00:44:14.774094454Z" level=info msg="CreateContainer within sandbox \"a345ffa21681d00fc2fbef34a357a63f485dbb39b636d2c30adfd71a0d8f15c1\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 29 00:44:14.784550 containerd[1613]: time="2025-10-29T00:44:14.784425755Z" level=info msg="Container c7bb4b0332584d8651fd19db6badebd175dc054d2cba25c573fd434463cb1377: CDI devices from CRI Config.CDIDevices: []" Oct 29 00:44:14.787967 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount109666115.mount: Deactivated successfully. 
Oct 29 00:44:14.793623 containerd[1613]: time="2025-10-29T00:44:14.793591175Z" level=info msg="CreateContainer within sandbox \"a345ffa21681d00fc2fbef34a357a63f485dbb39b636d2c30adfd71a0d8f15c1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c7bb4b0332584d8651fd19db6badebd175dc054d2cba25c573fd434463cb1377\"" Oct 29 00:44:14.796205 containerd[1613]: time="2025-10-29T00:44:14.796166221Z" level=info msg="StartContainer for \"c7bb4b0332584d8651fd19db6badebd175dc054d2cba25c573fd434463cb1377\"" Oct 29 00:44:14.798107 containerd[1613]: time="2025-10-29T00:44:14.798070343Z" level=info msg="connecting to shim c7bb4b0332584d8651fd19db6badebd175dc054d2cba25c573fd434463cb1377" address="unix:///run/containerd/s/2303f65aaffa3db665f59ba15763a2ca18fb09053cf11b82a41b391e1c78bddd" protocol=ttrpc version=3 Oct 29 00:44:14.821731 systemd[1]: Started cri-containerd-c7bb4b0332584d8651fd19db6badebd175dc054d2cba25c573fd434463cb1377.scope - libcontainer container c7bb4b0332584d8651fd19db6badebd175dc054d2cba25c573fd434463cb1377. 
Oct 29 00:44:14.870147 containerd[1613]: time="2025-10-29T00:44:14.870091496Z" level=info msg="StartContainer for \"c7bb4b0332584d8651fd19db6badebd175dc054d2cba25c573fd434463cb1377\" returns successfully" Oct 29 00:44:14.991892 containerd[1613]: time="2025-10-29T00:44:14.991777833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-tq7pm,Uid:0aef5743-329f-447d-aec5-14b05798246c,Namespace:tigera-operator,Attempt:0,}" Oct 29 00:44:15.031370 containerd[1613]: time="2025-10-29T00:44:15.031308232Z" level=info msg="connecting to shim 63c374c38468fdb18d92e6103a7547687e6e692fca480e762a70eef99b74e25f" address="unix:///run/containerd/s/4fae495b03a76d97274855b8ca54ab8e6a6e9052391f413300f3275a3e16d455" namespace=k8s.io protocol=ttrpc version=3 Oct 29 00:44:15.057712 systemd[1]: Started cri-containerd-63c374c38468fdb18d92e6103a7547687e6e692fca480e762a70eef99b74e25f.scope - libcontainer container 63c374c38468fdb18d92e6103a7547687e6e692fca480e762a70eef99b74e25f. Oct 29 00:44:15.123876 containerd[1613]: time="2025-10-29T00:44:15.123833017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-tq7pm,Uid:0aef5743-329f-447d-aec5-14b05798246c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"63c374c38468fdb18d92e6103a7547687e6e692fca480e762a70eef99b74e25f\"" Oct 29 00:44:15.126455 containerd[1613]: time="2025-10-29T00:44:15.125872913Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Oct 29 00:44:15.311222 kubelet[2761]: E1029 00:44:15.311095 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:44:15.390382 kubelet[2761]: E1029 00:44:15.390343 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:44:15.404290 kubelet[2761]: I1029 
00:44:15.404230 2761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-c9tzm" podStartSLOduration=1.404210073 podStartE2EDuration="1.404210073s" podCreationTimestamp="2025-10-29 00:44:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 00:44:15.321130813 +0000 UTC m=+7.134363433" watchObservedRunningTime="2025-10-29 00:44:15.404210073 +0000 UTC m=+7.217442692" Oct 29 00:44:16.312807 kubelet[2761]: E1029 00:44:16.312774 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:44:17.314528 kubelet[2761]: E1029 00:44:17.314483 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:44:18.202682 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3981660200.mount: Deactivated successfully. 
Oct 29 00:44:19.125064 containerd[1613]: time="2025-10-29T00:44:19.124994601Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:44:19.125858 containerd[1613]: time="2025-10-29T00:44:19.125826524Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Oct 29 00:44:19.127223 containerd[1613]: time="2025-10-29T00:44:19.127178786Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:44:19.129317 containerd[1613]: time="2025-10-29T00:44:19.129271828Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:44:19.129806 containerd[1613]: time="2025-10-29T00:44:19.129774134Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 4.003843339s" Oct 29 00:44:19.129806 containerd[1613]: time="2025-10-29T00:44:19.129802518Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Oct 29 00:44:19.133827 containerd[1613]: time="2025-10-29T00:44:19.133787769Z" level=info msg="CreateContainer within sandbox \"63c374c38468fdb18d92e6103a7547687e6e692fca480e762a70eef99b74e25f\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 29 00:44:19.142829 containerd[1613]: time="2025-10-29T00:44:19.142788162Z" level=info msg="Container 
c24db6ef686bd561037c1d78fbea98fc026b95787e3f55c707628c27b78fa2e6: CDI devices from CRI Config.CDIDevices: []" Oct 29 00:44:19.146341 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4211725775.mount: Deactivated successfully. Oct 29 00:44:19.149689 containerd[1613]: time="2025-10-29T00:44:19.149651028Z" level=info msg="CreateContainer within sandbox \"63c374c38468fdb18d92e6103a7547687e6e692fca480e762a70eef99b74e25f\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"c24db6ef686bd561037c1d78fbea98fc026b95787e3f55c707628c27b78fa2e6\"" Oct 29 00:44:19.150115 containerd[1613]: time="2025-10-29T00:44:19.150070626Z" level=info msg="StartContainer for \"c24db6ef686bd561037c1d78fbea98fc026b95787e3f55c707628c27b78fa2e6\"" Oct 29 00:44:19.150992 containerd[1613]: time="2025-10-29T00:44:19.150953625Z" level=info msg="connecting to shim c24db6ef686bd561037c1d78fbea98fc026b95787e3f55c707628c27b78fa2e6" address="unix:///run/containerd/s/4fae495b03a76d97274855b8ca54ab8e6a6e9052391f413300f3275a3e16d455" protocol=ttrpc version=3 Oct 29 00:44:19.187715 systemd[1]: Started cri-containerd-c24db6ef686bd561037c1d78fbea98fc026b95787e3f55c707628c27b78fa2e6.scope - libcontainer container c24db6ef686bd561037c1d78fbea98fc026b95787e3f55c707628c27b78fa2e6. 
Oct 29 00:44:19.308336 containerd[1613]: time="2025-10-29T00:44:19.308287736Z" level=info msg="StartContainer for \"c24db6ef686bd561037c1d78fbea98fc026b95787e3f55c707628c27b78fa2e6\" returns successfully" Oct 29 00:44:19.715107 kubelet[2761]: E1029 00:44:19.715061 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:44:19.723954 kubelet[2761]: I1029 00:44:19.723780 2761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-tq7pm" podStartSLOduration=1.718883547 podStartE2EDuration="5.723740647s" podCreationTimestamp="2025-10-29 00:44:14 +0000 UTC" firstStartedPulling="2025-10-29 00:44:15.125608138 +0000 UTC m=+6.938840757" lastFinishedPulling="2025-10-29 00:44:19.130465238 +0000 UTC m=+10.943697857" observedRunningTime="2025-10-29 00:44:19.326257381 +0000 UTC m=+11.139489990" watchObservedRunningTime="2025-10-29 00:44:19.723740647 +0000 UTC m=+11.536973267" Oct 29 00:44:21.385524 update_engine[1581]: I20251029 00:44:21.384794 1581 update_attempter.cc:509] Updating boot flags... Oct 29 00:44:24.665630 sudo[1815]: pam_unix(sudo:session): session closed for user root Oct 29 00:44:24.667710 sshd[1814]: Connection closed by 10.0.0.1 port 34482 Oct 29 00:44:24.668528 sshd-session[1811]: pam_unix(sshd:session): session closed for user core Oct 29 00:44:24.678776 systemd[1]: sshd@6-10.0.0.95:22-10.0.0.1:34482.service: Deactivated successfully. Oct 29 00:44:24.684963 systemd[1]: session-7.scope: Deactivated successfully. Oct 29 00:44:24.685480 systemd[1]: session-7.scope: Consumed 6.471s CPU time, 218.9M memory peak. Oct 29 00:44:24.688619 systemd-logind[1578]: Session 7 logged out. Waiting for processes to exit. Oct 29 00:44:24.690789 systemd-logind[1578]: Removed session 7. 
Oct 29 00:44:28.876166 systemd[1]: Created slice kubepods-besteffort-pod2eb1f44e_7dc6_4071_a037_21ae35c355c0.slice - libcontainer container kubepods-besteffort-pod2eb1f44e_7dc6_4071_a037_21ae35c355c0.slice. Oct 29 00:44:28.913429 kubelet[2761]: I1029 00:44:28.913365 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/2eb1f44e-7dc6-4071-a037-21ae35c355c0-typha-certs\") pod \"calico-typha-748f4f67d5-mr4n7\" (UID: \"2eb1f44e-7dc6-4071-a037-21ae35c355c0\") " pod="calico-system/calico-typha-748f4f67d5-mr4n7" Oct 29 00:44:28.913429 kubelet[2761]: I1029 00:44:28.913426 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2eb1f44e-7dc6-4071-a037-21ae35c355c0-tigera-ca-bundle\") pod \"calico-typha-748f4f67d5-mr4n7\" (UID: \"2eb1f44e-7dc6-4071-a037-21ae35c355c0\") " pod="calico-system/calico-typha-748f4f67d5-mr4n7" Oct 29 00:44:28.913429 kubelet[2761]: I1029 00:44:28.913445 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knv5p\" (UniqueName: \"kubernetes.io/projected/2eb1f44e-7dc6-4071-a037-21ae35c355c0-kube-api-access-knv5p\") pod \"calico-typha-748f4f67d5-mr4n7\" (UID: \"2eb1f44e-7dc6-4071-a037-21ae35c355c0\") " pod="calico-system/calico-typha-748f4f67d5-mr4n7" Oct 29 00:44:28.976215 systemd[1]: Created slice kubepods-besteffort-pod397eb4cf_280e_4fc5_abe8_3d7c5eb8d84c.slice - libcontainer container kubepods-besteffort-pod397eb4cf_280e_4fc5_abe8_3d7c5eb8d84c.slice. 
Oct 29 00:44:29.014087 kubelet[2761]: I1029 00:44:29.014033 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/397eb4cf-280e-4fc5-abe8-3d7c5eb8d84c-policysync\") pod \"calico-node-b8529\" (UID: \"397eb4cf-280e-4fc5-abe8-3d7c5eb8d84c\") " pod="calico-system/calico-node-b8529" Oct 29 00:44:29.014087 kubelet[2761]: I1029 00:44:29.014081 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/397eb4cf-280e-4fc5-abe8-3d7c5eb8d84c-tigera-ca-bundle\") pod \"calico-node-b8529\" (UID: \"397eb4cf-280e-4fc5-abe8-3d7c5eb8d84c\") " pod="calico-system/calico-node-b8529" Oct 29 00:44:29.014272 kubelet[2761]: I1029 00:44:29.014211 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/397eb4cf-280e-4fc5-abe8-3d7c5eb8d84c-cni-log-dir\") pod \"calico-node-b8529\" (UID: \"397eb4cf-280e-4fc5-abe8-3d7c5eb8d84c\") " pod="calico-system/calico-node-b8529" Oct 29 00:44:29.014272 kubelet[2761]: I1029 00:44:29.014232 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/397eb4cf-280e-4fc5-abe8-3d7c5eb8d84c-lib-modules\") pod \"calico-node-b8529\" (UID: \"397eb4cf-280e-4fc5-abe8-3d7c5eb8d84c\") " pod="calico-system/calico-node-b8529" Oct 29 00:44:29.014272 kubelet[2761]: I1029 00:44:29.014245 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/397eb4cf-280e-4fc5-abe8-3d7c5eb8d84c-var-lib-calico\") pod \"calico-node-b8529\" (UID: \"397eb4cf-280e-4fc5-abe8-3d7c5eb8d84c\") " pod="calico-system/calico-node-b8529" Oct 29 00:44:29.014272 kubelet[2761]: I1029 00:44:29.014259 2761 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/397eb4cf-280e-4fc5-abe8-3d7c5eb8d84c-flexvol-driver-host\") pod \"calico-node-b8529\" (UID: \"397eb4cf-280e-4fc5-abe8-3d7c5eb8d84c\") " pod="calico-system/calico-node-b8529" Oct 29 00:44:29.014272 kubelet[2761]: I1029 00:44:29.014275 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/397eb4cf-280e-4fc5-abe8-3d7c5eb8d84c-xtables-lock\") pod \"calico-node-b8529\" (UID: \"397eb4cf-280e-4fc5-abe8-3d7c5eb8d84c\") " pod="calico-system/calico-node-b8529" Oct 29 00:44:29.014396 kubelet[2761]: I1029 00:44:29.014333 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/397eb4cf-280e-4fc5-abe8-3d7c5eb8d84c-cni-net-dir\") pod \"calico-node-b8529\" (UID: \"397eb4cf-280e-4fc5-abe8-3d7c5eb8d84c\") " pod="calico-system/calico-node-b8529" Oct 29 00:44:29.014468 kubelet[2761]: I1029 00:44:29.014447 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4gsw\" (UniqueName: \"kubernetes.io/projected/397eb4cf-280e-4fc5-abe8-3d7c5eb8d84c-kube-api-access-f4gsw\") pod \"calico-node-b8529\" (UID: \"397eb4cf-280e-4fc5-abe8-3d7c5eb8d84c\") " pod="calico-system/calico-node-b8529" Oct 29 00:44:29.014517 kubelet[2761]: I1029 00:44:29.014473 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/397eb4cf-280e-4fc5-abe8-3d7c5eb8d84c-cni-bin-dir\") pod \"calico-node-b8529\" (UID: \"397eb4cf-280e-4fc5-abe8-3d7c5eb8d84c\") " pod="calico-system/calico-node-b8529" Oct 29 00:44:29.014544 kubelet[2761]: I1029 00:44:29.014505 2761 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/397eb4cf-280e-4fc5-abe8-3d7c5eb8d84c-node-certs\") pod \"calico-node-b8529\" (UID: \"397eb4cf-280e-4fc5-abe8-3d7c5eb8d84c\") " pod="calico-system/calico-node-b8529" Oct 29 00:44:29.014569 kubelet[2761]: I1029 00:44:29.014553 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/397eb4cf-280e-4fc5-abe8-3d7c5eb8d84c-var-run-calico\") pod \"calico-node-b8529\" (UID: \"397eb4cf-280e-4fc5-abe8-3d7c5eb8d84c\") " pod="calico-system/calico-node-b8529" Oct 29 00:44:29.116366 kubelet[2761]: E1029 00:44:29.116280 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:29.116366 kubelet[2761]: W1029 00:44:29.116309 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:29.116366 kubelet[2761]: E1029 00:44:29.116330 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:44:29.120821 kubelet[2761]: E1029 00:44:29.120789 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:29.120821 kubelet[2761]: W1029 00:44:29.120804 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:29.120821 kubelet[2761]: E1029 00:44:29.120814 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:44:29.123682 kubelet[2761]: E1029 00:44:29.123651 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:29.123682 kubelet[2761]: W1029 00:44:29.123672 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:29.123834 kubelet[2761]: E1029 00:44:29.123692 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:44:29.186123 kubelet[2761]: E1029 00:44:29.185392 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:44:29.186767 containerd[1613]: time="2025-10-29T00:44:29.186724105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-748f4f67d5-mr4n7,Uid:2eb1f44e-7dc6-4071-a037-21ae35c355c0,Namespace:calico-system,Attempt:0,}" Oct 29 00:44:29.189488 kubelet[2761]: E1029 00:44:29.189445 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nsdz2" podUID="7c6565b5-13e1-473b-b977-3ab4cab19c9a" Oct 29 00:44:29.198276 kubelet[2761]: E1029 00:44:29.198228 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:29.198276 kubelet[2761]: W1029 00:44:29.198258 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file 
not found in $PATH, output: "" Oct 29 00:44:29.198276 kubelet[2761]: E1029 00:44:29.198276 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:44:29.198824 kubelet[2761]: E1029 00:44:29.198801 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:29.198824 kubelet[2761]: W1029 00:44:29.198815 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:29.199622 kubelet[2761]: E1029 00:44:29.199599 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:44:29.200210 kubelet[2761]: E1029 00:44:29.200187 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:29.200210 kubelet[2761]: W1029 00:44:29.200204 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:29.200282 kubelet[2761]: E1029 00:44:29.200216 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:44:29.201680 kubelet[2761]: E1029 00:44:29.200664 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:29.201680 kubelet[2761]: W1029 00:44:29.201673 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:29.201736 kubelet[2761]: E1029 00:44:29.201691 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:44:29.202058 kubelet[2761]: E1029 00:44:29.202023 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:29.202058 kubelet[2761]: W1029 00:44:29.202041 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:29.202058 kubelet[2761]: E1029 00:44:29.202052 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:44:29.202314 kubelet[2761]: E1029 00:44:29.202287 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:29.202360 kubelet[2761]: W1029 00:44:29.202323 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:29.202360 kubelet[2761]: E1029 00:44:29.202332 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:44:29.202590 kubelet[2761]: E1029 00:44:29.202540 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:29.202747 kubelet[2761]: W1029 00:44:29.202568 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:29.202747 kubelet[2761]: E1029 00:44:29.202742 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:44:29.203708 kubelet[2761]: E1029 00:44:29.203685 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:29.203708 kubelet[2761]: W1029 00:44:29.203700 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:29.203708 kubelet[2761]: E1029 00:44:29.203709 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:44:29.204205 kubelet[2761]: E1029 00:44:29.204182 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:29.204205 kubelet[2761]: W1029 00:44:29.204198 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:29.204205 kubelet[2761]: E1029 00:44:29.204207 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:44:29.204883 kubelet[2761]: E1029 00:44:29.204834 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:29.204883 kubelet[2761]: W1029 00:44:29.204868 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:29.204955 kubelet[2761]: E1029 00:44:29.204894 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:44:29.205252 kubelet[2761]: E1029 00:44:29.205230 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:29.205252 kubelet[2761]: W1029 00:44:29.205245 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:29.205252 kubelet[2761]: E1029 00:44:29.205255 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:44:29.205685 kubelet[2761]: E1029 00:44:29.205399 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:29.205685 kubelet[2761]: W1029 00:44:29.205409 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:29.205685 kubelet[2761]: E1029 00:44:29.205417 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:44:29.205685 kubelet[2761]: E1029 00:44:29.205622 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:29.205685 kubelet[2761]: W1029 00:44:29.205630 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:29.205685 kubelet[2761]: E1029 00:44:29.205639 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:44:29.208747 kubelet[2761]: E1029 00:44:29.208713 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:29.208747 kubelet[2761]: W1029 00:44:29.208730 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:29.208747 kubelet[2761]: E1029 00:44:29.208740 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:44:29.208983 kubelet[2761]: E1029 00:44:29.208958 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:29.208983 kubelet[2761]: W1029 00:44:29.208978 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:29.209053 kubelet[2761]: E1029 00:44:29.208992 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:44:29.212069 kubelet[2761]: E1029 00:44:29.211297 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:29.212069 kubelet[2761]: W1029 00:44:29.211315 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:29.212069 kubelet[2761]: E1029 00:44:29.211326 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:44:29.212069 kubelet[2761]: E1029 00:44:29.211519 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:29.212069 kubelet[2761]: W1029 00:44:29.211529 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:29.212069 kubelet[2761]: E1029 00:44:29.211538 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:44:29.212069 kubelet[2761]: E1029 00:44:29.211736 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:29.212069 kubelet[2761]: W1029 00:44:29.211744 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:29.212069 kubelet[2761]: E1029 00:44:29.211752 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:44:29.212069 kubelet[2761]: E1029 00:44:29.211917 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:29.212329 kubelet[2761]: W1029 00:44:29.211925 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:29.212329 kubelet[2761]: E1029 00:44:29.211933 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:44:29.213255 kubelet[2761]: E1029 00:44:29.213227 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:29.213255 kubelet[2761]: W1029 00:44:29.213243 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:29.213255 kubelet[2761]: E1029 00:44:29.213252 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:44:29.216759 kubelet[2761]: E1029 00:44:29.216074 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:29.216759 kubelet[2761]: W1029 00:44:29.216096 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:29.216759 kubelet[2761]: E1029 00:44:29.216125 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:44:29.216759 kubelet[2761]: I1029 00:44:29.216153 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7c6565b5-13e1-473b-b977-3ab4cab19c9a-registration-dir\") pod \"csi-node-driver-nsdz2\" (UID: \"7c6565b5-13e1-473b-b977-3ab4cab19c9a\") " pod="calico-system/csi-node-driver-nsdz2" Oct 29 00:44:29.216759 kubelet[2761]: E1029 00:44:29.216337 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:29.216759 kubelet[2761]: W1029 00:44:29.216349 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:29.216759 kubelet[2761]: E1029 00:44:29.216358 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:44:29.216759 kubelet[2761]: I1029 00:44:29.216379 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7c6565b5-13e1-473b-b977-3ab4cab19c9a-varrun\") pod \"csi-node-driver-nsdz2\" (UID: \"7c6565b5-13e1-473b-b977-3ab4cab19c9a\") " pod="calico-system/csi-node-driver-nsdz2" Oct 29 00:44:29.216759 kubelet[2761]: E1029 00:44:29.216644 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:29.216973 kubelet[2761]: W1029 00:44:29.216657 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:29.216973 kubelet[2761]: E1029 00:44:29.216667 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:44:29.216973 kubelet[2761]: E1029 00:44:29.216872 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:29.216973 kubelet[2761]: W1029 00:44:29.216880 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:29.216973 kubelet[2761]: E1029 00:44:29.216889 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:44:29.217104 kubelet[2761]: E1029 00:44:29.217083 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:29.217104 kubelet[2761]: W1029 00:44:29.217098 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:29.217163 kubelet[2761]: E1029 00:44:29.217105 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:44:29.217163 kubelet[2761]: I1029 00:44:29.217148 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7c6565b5-13e1-473b-b977-3ab4cab19c9a-kubelet-dir\") pod \"csi-node-driver-nsdz2\" (UID: \"7c6565b5-13e1-473b-b977-3ab4cab19c9a\") " pod="calico-system/csi-node-driver-nsdz2" Oct 29 00:44:29.217413 kubelet[2761]: E1029 00:44:29.217393 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:29.217413 kubelet[2761]: W1029 00:44:29.217406 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:29.217413 kubelet[2761]: E1029 00:44:29.217415 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:44:29.217499 kubelet[2761]: I1029 00:44:29.217443 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7c6565b5-13e1-473b-b977-3ab4cab19c9a-socket-dir\") pod \"csi-node-driver-nsdz2\" (UID: \"7c6565b5-13e1-473b-b977-3ab4cab19c9a\") " pod="calico-system/csi-node-driver-nsdz2" Oct 29 00:44:29.217701 kubelet[2761]: E1029 00:44:29.217678 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:29.217701 kubelet[2761]: W1029 00:44:29.217693 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:29.217701 kubelet[2761]: E1029 00:44:29.217702 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:44:29.217813 kubelet[2761]: I1029 00:44:29.217748 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4rd8\" (UniqueName: \"kubernetes.io/projected/7c6565b5-13e1-473b-b977-3ab4cab19c9a-kube-api-access-r4rd8\") pod \"csi-node-driver-nsdz2\" (UID: \"7c6565b5-13e1-473b-b977-3ab4cab19c9a\") " pod="calico-system/csi-node-driver-nsdz2" Oct 29 00:44:29.218001 kubelet[2761]: E1029 00:44:29.217979 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:29.218001 kubelet[2761]: W1029 00:44:29.217992 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:29.218001 kubelet[2761]: E1029 00:44:29.218001 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:44:29.218267 kubelet[2761]: E1029 00:44:29.218244 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:29.218267 kubelet[2761]: W1029 00:44:29.218263 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:29.218331 kubelet[2761]: E1029 00:44:29.218275 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:44:29.218587 kubelet[2761]: E1029 00:44:29.218539 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:29.218587 kubelet[2761]: W1029 00:44:29.218554 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:29.218587 kubelet[2761]: E1029 00:44:29.218564 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:44:29.220384 kubelet[2761]: E1029 00:44:29.219348 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:29.220384 kubelet[2761]: W1029 00:44:29.219364 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:29.220384 kubelet[2761]: E1029 00:44:29.219376 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:44:29.220384 kubelet[2761]: E1029 00:44:29.219738 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:29.220384 kubelet[2761]: W1029 00:44:29.219747 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:29.220384 kubelet[2761]: E1029 00:44:29.219756 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:44:29.220384 kubelet[2761]: E1029 00:44:29.219943 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:29.220384 kubelet[2761]: W1029 00:44:29.219950 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:29.220384 kubelet[2761]: E1029 00:44:29.219959 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:44:29.220384 kubelet[2761]: E1029 00:44:29.220173 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:29.220657 kubelet[2761]: W1029 00:44:29.220182 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:29.220657 kubelet[2761]: E1029 00:44:29.220190 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:44:29.220657 kubelet[2761]: E1029 00:44:29.220431 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:29.220657 kubelet[2761]: W1029 00:44:29.220441 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:29.220657 kubelet[2761]: E1029 00:44:29.220449 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:44:29.228911 containerd[1613]: time="2025-10-29T00:44:29.228858839Z" level=info msg="connecting to shim 3c0ddc658c19a563a6e7ec7b8e6bdb4cee3101859c0a22023164f5967ba2cf56" address="unix:///run/containerd/s/9c74e5c2d999c8217a4b53ea7570b3931542804a0b6bc9263864db814fa9b60b" namespace=k8s.io protocol=ttrpc version=3 Oct 29 00:44:29.264725 systemd[1]: Started cri-containerd-3c0ddc658c19a563a6e7ec7b8e6bdb4cee3101859c0a22023164f5967ba2cf56.scope - libcontainer container 3c0ddc658c19a563a6e7ec7b8e6bdb4cee3101859c0a22023164f5967ba2cf56. 
Oct 29 00:44:29.282997 kubelet[2761]: E1029 00:44:29.282935 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:44:29.284077 containerd[1613]: time="2025-10-29T00:44:29.284023776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-b8529,Uid:397eb4cf-280e-4fc5-abe8-3d7c5eb8d84c,Namespace:calico-system,Attempt:0,}" Oct 29 00:44:29.311604 containerd[1613]: time="2025-10-29T00:44:29.311534006Z" level=info msg="connecting to shim 1dcb476ba973f09a9efbe6f2968a0a92deaead7d4864351c711c29ca664362ed" address="unix:///run/containerd/s/88f3c086a046be3e3a0b11bdeb3791c0a7045c2854978c34dc957dc1a24e3af5" namespace=k8s.io protocol=ttrpc version=3 Oct 29 00:44:29.319164 kubelet[2761]: E1029 00:44:29.318908 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:29.319164 kubelet[2761]: W1029 00:44:29.318932 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:29.319164 kubelet[2761]: E1029 00:44:29.319144 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:44:29.319801 kubelet[2761]: E1029 00:44:29.319765 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:29.319801 kubelet[2761]: W1029 00:44:29.319779 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:29.319801 kubelet[2761]: E1029 00:44:29.319789 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:44:29.320424 kubelet[2761]: E1029 00:44:29.320338 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:29.320424 kubelet[2761]: W1029 00:44:29.320352 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:29.320424 kubelet[2761]: E1029 00:44:29.320362 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:44:29.320809 kubelet[2761]: E1029 00:44:29.320783 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:29.320809 kubelet[2761]: W1029 00:44:29.320801 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:29.320809 kubelet[2761]: E1029 00:44:29.320812 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:44:29.321591 kubelet[2761]: E1029 00:44:29.321302 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:29.321591 kubelet[2761]: W1029 00:44:29.321422 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:29.321591 kubelet[2761]: E1029 00:44:29.321434 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Oct 29 00:44:29.322593 kubelet[2761]: E1029 00:44:29.321741 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 29 00:44:29.322593 kubelet[2761]: W1029 00:44:29.321789 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 29 00:44:29.322593 kubelet[2761]: E1029 00:44:29.321799 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 29 00:44:29.322593 kubelet[2761]: E1029 00:44:29.322220 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 29 00:44:29.322593 kubelet[2761]: W1029 00:44:29.322230 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 29 00:44:29.322593 kubelet[2761]: E1029 00:44:29.322239 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 29 00:44:29.322593 kubelet[2761]: E1029 00:44:29.322545 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 29 00:44:29.322593 kubelet[2761]: W1029 00:44:29.322554 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 29 00:44:29.322593 kubelet[2761]: E1029 00:44:29.322563 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 29 00:44:29.322980 kubelet[2761]: E1029 00:44:29.322955 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 29 00:44:29.322980 kubelet[2761]: W1029 00:44:29.322972 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 29 00:44:29.322980 kubelet[2761]: E1029 00:44:29.322980 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 29 00:44:29.323312 kubelet[2761]: E1029 00:44:29.323285 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 29 00:44:29.323312 kubelet[2761]: W1029 00:44:29.323302 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 29 00:44:29.323312 kubelet[2761]: E1029 00:44:29.323310 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 29 00:44:29.323898 kubelet[2761]: E1029 00:44:29.323820 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 29 00:44:29.323898 kubelet[2761]: W1029 00:44:29.323838 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 29 00:44:29.323898 kubelet[2761]: E1029 00:44:29.323850 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 29 00:44:29.324343 kubelet[2761]: E1029 00:44:29.324321 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 29 00:44:29.324343 kubelet[2761]: W1029 00:44:29.324337 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 29 00:44:29.324399 kubelet[2761]: E1029 00:44:29.324347 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 29 00:44:29.324780 kubelet[2761]: E1029 00:44:29.324760 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 29 00:44:29.324780 kubelet[2761]: W1029 00:44:29.324775 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 29 00:44:29.324839 kubelet[2761]: E1029 00:44:29.324788 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 29 00:44:29.325133 kubelet[2761]: E1029 00:44:29.325093 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 29 00:44:29.325133 kubelet[2761]: W1029 00:44:29.325119 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 29 00:44:29.325133 kubelet[2761]: E1029 00:44:29.325130 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 29 00:44:29.325601 kubelet[2761]: E1029 00:44:29.325470 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 29 00:44:29.325601 kubelet[2761]: W1029 00:44:29.325500 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 29 00:44:29.325601 kubelet[2761]: E1029 00:44:29.325512 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 29 00:44:29.326010 kubelet[2761]: E1029 00:44:29.325857 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 29 00:44:29.326010 kubelet[2761]: W1029 00:44:29.325870 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 29 00:44:29.326010 kubelet[2761]: E1029 00:44:29.325899 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 29 00:44:29.326597 kubelet[2761]: E1029 00:44:29.326170 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 29 00:44:29.326597 kubelet[2761]: W1029 00:44:29.326185 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 29 00:44:29.326597 kubelet[2761]: E1029 00:44:29.326196 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 29 00:44:29.326597 kubelet[2761]: E1029 00:44:29.326521 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 29 00:44:29.326597 kubelet[2761]: W1029 00:44:29.326532 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 29 00:44:29.326597 kubelet[2761]: E1029 00:44:29.326541 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 29 00:44:29.326750 kubelet[2761]: E1029 00:44:29.326731 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 29 00:44:29.326750 kubelet[2761]: W1029 00:44:29.326739 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 29 00:44:29.326750 kubelet[2761]: E1029 00:44:29.326747 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 29 00:44:29.327370 kubelet[2761]: E1029 00:44:29.327080 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 29 00:44:29.327370 kubelet[2761]: W1029 00:44:29.327093 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 29 00:44:29.327370 kubelet[2761]: E1029 00:44:29.327101 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 29 00:44:29.327370 kubelet[2761]: E1029 00:44:29.327274 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 29 00:44:29.327370 kubelet[2761]: W1029 00:44:29.327281 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 29 00:44:29.327370 kubelet[2761]: E1029 00:44:29.327317 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 29 00:44:29.327696 kubelet[2761]: E1029 00:44:29.327530 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 29 00:44:29.327696 kubelet[2761]: W1029 00:44:29.327538 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 29 00:44:29.327696 kubelet[2761]: E1029 00:44:29.327546 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 29 00:44:29.327773 kubelet[2761]: E1029 00:44:29.327756 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 29 00:44:29.327773 kubelet[2761]: W1029 00:44:29.327765 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 29 00:44:29.327773 kubelet[2761]: E1029 00:44:29.327772 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 29 00:44:29.328621 kubelet[2761]: E1029 00:44:29.327980 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 29 00:44:29.328621 kubelet[2761]: W1029 00:44:29.327992 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 29 00:44:29.328621 kubelet[2761]: E1029 00:44:29.328002 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 29 00:44:29.328621 kubelet[2761]: E1029 00:44:29.328518 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 29 00:44:29.328621 kubelet[2761]: W1029 00:44:29.328528 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 29 00:44:29.328621 kubelet[2761]: E1029 00:44:29.328537 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 29 00:44:29.339960 kubelet[2761]: E1029 00:44:29.339918 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 29 00:44:29.339960 kubelet[2761]: W1029 00:44:29.339936 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 29 00:44:29.339960 kubelet[2761]: E1029 00:44:29.339952 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 29 00:44:29.344034 containerd[1613]: time="2025-10-29T00:44:29.343964926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-748f4f67d5-mr4n7,Uid:2eb1f44e-7dc6-4071-a037-21ae35c355c0,Namespace:calico-system,Attempt:0,} returns sandbox id \"3c0ddc658c19a563a6e7ec7b8e6bdb4cee3101859c0a22023164f5967ba2cf56\""
Oct 29 00:44:29.344796 kubelet[2761]: E1029 00:44:29.344766 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 29 00:44:29.346020 containerd[1613]: time="2025-10-29T00:44:29.345969894Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Oct 29 00:44:29.351747 systemd[1]: Started cri-containerd-1dcb476ba973f09a9efbe6f2968a0a92deaead7d4864351c711c29ca664362ed.scope - libcontainer container 1dcb476ba973f09a9efbe6f2968a0a92deaead7d4864351c711c29ca664362ed.
Oct 29 00:44:29.379441 containerd[1613]: time="2025-10-29T00:44:29.379394551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-b8529,Uid:397eb4cf-280e-4fc5-abe8-3d7c5eb8d84c,Namespace:calico-system,Attempt:0,} returns sandbox id \"1dcb476ba973f09a9efbe6f2968a0a92deaead7d4864351c711c29ca664362ed\""
Oct 29 00:44:29.380214 kubelet[2761]: E1029 00:44:29.380194 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 29 00:44:30.906656 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1115496448.mount: Deactivated successfully.
Oct 29 00:44:31.226010 containerd[1613]: time="2025-10-29T00:44:31.225889309Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 29 00:44:31.226666 containerd[1613]: time="2025-10-29T00:44:31.226645917Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628"
Oct 29 00:44:31.227853 containerd[1613]: time="2025-10-29T00:44:31.227828470Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 29 00:44:31.229999 containerd[1613]: time="2025-10-29T00:44:31.229951648Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 29 00:44:31.230419 containerd[1613]: time="2025-10-29T00:44:31.230380428Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 1.884374234s"
Oct 29 00:44:31.230452 containerd[1613]: time="2025-10-29T00:44:31.230418459Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\""
Oct 29 00:44:31.231532 containerd[1613]: time="2025-10-29T00:44:31.231396014Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Oct 29 00:44:31.247099 containerd[1613]: time="2025-10-29T00:44:31.247054064Z" level=info msg="CreateContainer within sandbox \"3c0ddc658c19a563a6e7ec7b8e6bdb4cee3101859c0a22023164f5967ba2cf56\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Oct 29 00:44:31.254598 containerd[1613]: time="2025-10-29T00:44:31.254508517Z" level=info msg="Container 0e67a6c8d0a65d72c113ebb764a7e889ab67fc3dee3a834d38fb04ded7f5a6d9: CDI devices from CRI Config.CDIDevices: []"
Oct 29 00:44:31.260920 containerd[1613]: time="2025-10-29T00:44:31.260878954Z" level=info msg="CreateContainer within sandbox \"3c0ddc658c19a563a6e7ec7b8e6bdb4cee3101859c0a22023164f5967ba2cf56\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"0e67a6c8d0a65d72c113ebb764a7e889ab67fc3dee3a834d38fb04ded7f5a6d9\""
Oct 29 00:44:31.264453 containerd[1613]: time="2025-10-29T00:44:31.264422403Z" level=info msg="StartContainer for \"0e67a6c8d0a65d72c113ebb764a7e889ab67fc3dee3a834d38fb04ded7f5a6d9\""
Oct 29 00:44:31.265466 containerd[1613]: time="2025-10-29T00:44:31.265436507Z" level=info msg="connecting to shim 0e67a6c8d0a65d72c113ebb764a7e889ab67fc3dee3a834d38fb04ded7f5a6d9" address="unix:///run/containerd/s/9c74e5c2d999c8217a4b53ea7570b3931542804a0b6bc9263864db814fa9b60b" protocol=ttrpc version=3
Oct 29 00:44:31.284200 kubelet[2761]: E1029 00:44:31.284150 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nsdz2" podUID="7c6565b5-13e1-473b-b977-3ab4cab19c9a"
Oct 29 00:44:31.285734 systemd[1]: Started cri-containerd-0e67a6c8d0a65d72c113ebb764a7e889ab67fc3dee3a834d38fb04ded7f5a6d9.scope - libcontainer container 0e67a6c8d0a65d72c113ebb764a7e889ab67fc3dee3a834d38fb04ded7f5a6d9.
Oct 29 00:44:31.333676 containerd[1613]: time="2025-10-29T00:44:31.333553510Z" level=info msg="StartContainer for \"0e67a6c8d0a65d72c113ebb764a7e889ab67fc3dee3a834d38fb04ded7f5a6d9\" returns successfully"
Oct 29 00:44:31.346345 kubelet[2761]: E1029 00:44:31.346290 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 29 00:44:31.360386 kubelet[2761]: I1029 00:44:31.360325 2761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-748f4f67d5-mr4n7" podStartSLOduration=1.474644829 podStartE2EDuration="3.36030951s" podCreationTimestamp="2025-10-29 00:44:28 +0000 UTC" firstStartedPulling="2025-10-29 00:44:29.345528039 +0000 UTC m=+21.158760659" lastFinishedPulling="2025-10-29 00:44:31.231192721 +0000 UTC m=+23.044425340" observedRunningTime="2025-10-29 00:44:31.358030798 +0000 UTC m=+23.171263437" watchObservedRunningTime="2025-10-29 00:44:31.36030951 +0000 UTC m=+23.173542129"
Oct 29 00:44:31.426748 kubelet[2761]: E1029 00:44:31.426696 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 29 00:44:31.426748 kubelet[2761]: W1029 00:44:31.426736 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 29 00:44:31.426928 kubelet[2761]: E1029 00:44:31.426766 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 29 00:44:31.427154 kubelet[2761]: E1029 00:44:31.427011 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 29 00:44:31.427154 kubelet[2761]: W1029 00:44:31.427032 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 29 00:44:31.427154 kubelet[2761]: E1029 00:44:31.427048 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 29 00:44:31.427365 kubelet[2761]: E1029 00:44:31.427333 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 29 00:44:31.427365 kubelet[2761]: W1029 00:44:31.427349 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 29 00:44:31.427422 kubelet[2761]: E1029 00:44:31.427366 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 29 00:44:31.427775 kubelet[2761]: E1029 00:44:31.427699 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 29 00:44:31.427775 kubelet[2761]: W1029 00:44:31.427721 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 29 00:44:31.427775 kubelet[2761]: E1029 00:44:31.427738 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 29 00:44:31.428442 kubelet[2761]: E1029 00:44:31.428038 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 29 00:44:31.428442 kubelet[2761]: W1029 00:44:31.428053 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 29 00:44:31.428442 kubelet[2761]: E1029 00:44:31.428068 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 29 00:44:31.428442 kubelet[2761]: E1029 00:44:31.428327 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 29 00:44:31.428442 kubelet[2761]: W1029 00:44:31.428341 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 29 00:44:31.428442 kubelet[2761]: E1029 00:44:31.428357 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 29 00:44:31.428930 kubelet[2761]: E1029 00:44:31.428622 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 29 00:44:31.428930 kubelet[2761]: W1029 00:44:31.428636 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 29 00:44:31.428930 kubelet[2761]: E1029 00:44:31.428652 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 29 00:44:31.428930 kubelet[2761]: E1029 00:44:31.428891 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 29 00:44:31.428930 kubelet[2761]: W1029 00:44:31.428899 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 29 00:44:31.428930 kubelet[2761]: E1029 00:44:31.428908 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 29 00:44:31.429125 kubelet[2761]: E1029 00:44:31.429102 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 29 00:44:31.429125 kubelet[2761]: W1029 00:44:31.429109 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 29 00:44:31.429125 kubelet[2761]: E1029 00:44:31.429117 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 29 00:44:31.429292 kubelet[2761]: E1029 00:44:31.429274 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 29 00:44:31.429292 kubelet[2761]: W1029 00:44:31.429285 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 29 00:44:31.429292 kubelet[2761]: E1029 00:44:31.429292 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 29 00:44:31.429482 kubelet[2761]: E1029 00:44:31.429464 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 29 00:44:31.429482 kubelet[2761]: W1029 00:44:31.429474 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 29 00:44:31.429482 kubelet[2761]: E1029 00:44:31.429482 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 29 00:44:31.429671 kubelet[2761]: E1029 00:44:31.429652 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 29 00:44:31.429671 kubelet[2761]: W1029 00:44:31.429663 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 29 00:44:31.429671 kubelet[2761]: E1029 00:44:31.429671 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 29 00:44:31.429864 kubelet[2761]: E1029 00:44:31.429847 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 29 00:44:31.429864 kubelet[2761]: W1029 00:44:31.429857 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 29 00:44:31.429929 kubelet[2761]: E1029 00:44:31.429867 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 29 00:44:31.430048 kubelet[2761]: E1029 00:44:31.430025 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 29 00:44:31.430048 kubelet[2761]: W1029 00:44:31.430037 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 29 00:44:31.430048 kubelet[2761]: E1029 00:44:31.430045 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 29 00:44:31.430248 kubelet[2761]: E1029 00:44:31.430218 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 29 00:44:31.430248 kubelet[2761]: W1029 00:44:31.430231 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 29 00:44:31.430248 kubelet[2761]: E1029 00:44:31.430240 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 29 00:44:31.436906 kubelet[2761]: E1029 00:44:31.436875 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 29 00:44:31.436906 kubelet[2761]: W1029 00:44:31.436901 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 29 00:44:31.437124 kubelet[2761]: E1029 00:44:31.436990 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 29 00:44:31.437609 kubelet[2761]: E1029 00:44:31.437559 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 29 00:44:31.437609 kubelet[2761]: W1029 00:44:31.437584 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 29 00:44:31.437609 kubelet[2761]: E1029 00:44:31.437594 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 29 00:44:31.438350 kubelet[2761]: E1029 00:44:31.437866 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 29 00:44:31.438350 kubelet[2761]: W1029 00:44:31.437876 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 29 00:44:31.438350 kubelet[2761]: E1029 00:44:31.437885 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 29 00:44:31.438350 kubelet[2761]: E1029 00:44:31.438102 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 29 00:44:31.438350 kubelet[2761]: W1029 00:44:31.438117 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 29 00:44:31.438350 kubelet[2761]: E1029 00:44:31.438131 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 29 00:44:31.438680 kubelet[2761]: E1029 00:44:31.438656 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 29 00:44:31.438728 kubelet[2761]: W1029 00:44:31.438673 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 29 00:44:31.438728 kubelet[2761]: E1029 00:44:31.438719 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 29 00:44:31.439183 kubelet[2761]: E1029 00:44:31.439152 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 29 00:44:31.439183 kubelet[2761]: W1029 00:44:31.439179 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 29 00:44:31.439251 kubelet[2761]: E1029 00:44:31.439191 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 29 00:44:31.440046 kubelet[2761]: E1029 00:44:31.440008 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 29 00:44:31.440046 kubelet[2761]: W1029 00:44:31.440022 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 29 00:44:31.440046 kubelet[2761]: E1029 00:44:31.440032 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 29 00:44:31.441531 kubelet[2761]: E1029 00:44:31.441482 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 29 00:44:31.441531 kubelet[2761]: W1029 00:44:31.441505 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 29 00:44:31.441531 kubelet[2761]: E1029 00:44:31.441516 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 29 00:44:31.443304 kubelet[2761]: E1029 00:44:31.443276 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 29 00:44:31.443304 kubelet[2761]: W1029 00:44:31.443293 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 29 00:44:31.443304 kubelet[2761]: E1029 00:44:31.443303 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 29 00:44:31.444600 kubelet[2761]: E1029 00:44:31.443805 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 29 00:44:31.444600 kubelet[2761]: W1029 00:44:31.443820 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 29 00:44:31.444600 kubelet[2761]: E1029 00:44:31.443829 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Oct 29 00:44:31.444600 kubelet[2761]: E1029 00:44:31.444068 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:31.444600 kubelet[2761]: W1029 00:44:31.444085 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:31.444600 kubelet[2761]: E1029 00:44:31.444311 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:44:31.444945 kubelet[2761]: E1029 00:44:31.444919 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:31.444945 kubelet[2761]: W1029 00:44:31.444932 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:31.444945 kubelet[2761]: E1029 00:44:31.444942 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:44:31.445800 kubelet[2761]: E1029 00:44:31.445779 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:31.445800 kubelet[2761]: W1029 00:44:31.445792 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:31.445800 kubelet[2761]: E1029 00:44:31.445802 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:44:31.446278 kubelet[2761]: E1029 00:44:31.446261 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:31.446278 kubelet[2761]: W1029 00:44:31.446274 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:31.446365 kubelet[2761]: E1029 00:44:31.446284 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:44:31.446808 kubelet[2761]: E1029 00:44:31.446789 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:31.446808 kubelet[2761]: W1029 00:44:31.446802 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:31.446808 kubelet[2761]: E1029 00:44:31.446812 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:44:31.447629 kubelet[2761]: E1029 00:44:31.447538 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:31.447629 kubelet[2761]: W1029 00:44:31.447555 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:31.447629 kubelet[2761]: E1029 00:44:31.447567 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:44:31.448252 kubelet[2761]: E1029 00:44:31.448056 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:31.448252 kubelet[2761]: W1029 00:44:31.448070 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:31.448252 kubelet[2761]: E1029 00:44:31.448090 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:44:31.448345 kubelet[2761]: E1029 00:44:31.448276 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:31.448345 kubelet[2761]: W1029 00:44:31.448284 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:31.448345 kubelet[2761]: E1029 00:44:31.448293 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:44:32.347590 kubelet[2761]: I1029 00:44:32.347545 2761 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 29 00:44:32.348011 kubelet[2761]: E1029 00:44:32.347933 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:44:32.435764 kubelet[2761]: E1029 00:44:32.435670 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:32.435764 kubelet[2761]: W1029 00:44:32.435698 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:32.435764 kubelet[2761]: E1029 00:44:32.435718 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:44:32.436232 kubelet[2761]: E1029 00:44:32.436173 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:32.436232 kubelet[2761]: W1029 00:44:32.436185 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:32.436232 kubelet[2761]: E1029 00:44:32.436194 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:44:32.437625 kubelet[2761]: E1029 00:44:32.437611 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:32.437740 kubelet[2761]: W1029 00:44:32.437685 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:32.437740 kubelet[2761]: E1029 00:44:32.437699 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:44:32.439624 kubelet[2761]: E1029 00:44:32.439610 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:32.439741 kubelet[2761]: W1029 00:44:32.439681 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:32.439741 kubelet[2761]: E1029 00:44:32.439694 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:44:32.440027 kubelet[2761]: E1029 00:44:32.440014 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:32.440146 kubelet[2761]: W1029 00:44:32.440090 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:32.440146 kubelet[2761]: E1029 00:44:32.440104 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:44:32.441712 kubelet[2761]: E1029 00:44:32.441653 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:32.441712 kubelet[2761]: W1029 00:44:32.441665 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:32.441712 kubelet[2761]: E1029 00:44:32.441674 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:44:32.442187 kubelet[2761]: E1029 00:44:32.442148 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:32.442187 kubelet[2761]: W1029 00:44:32.442182 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:32.442258 kubelet[2761]: E1029 00:44:32.442210 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:44:32.443734 kubelet[2761]: E1029 00:44:32.443709 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:32.443734 kubelet[2761]: W1029 00:44:32.443726 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:32.443734 kubelet[2761]: E1029 00:44:32.443736 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:44:32.444207 kubelet[2761]: E1029 00:44:32.444185 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:32.444252 kubelet[2761]: W1029 00:44:32.444212 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:32.444252 kubelet[2761]: E1029 00:44:32.444224 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:44:32.446166 kubelet[2761]: E1029 00:44:32.446141 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:32.446166 kubelet[2761]: W1029 00:44:32.446158 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:32.446166 kubelet[2761]: E1029 00:44:32.446168 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:44:32.447599 kubelet[2761]: E1029 00:44:32.446419 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:32.447599 kubelet[2761]: W1029 00:44:32.446433 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:32.447599 kubelet[2761]: E1029 00:44:32.446443 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:44:32.447599 kubelet[2761]: E1029 00:44:32.446627 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:32.447599 kubelet[2761]: W1029 00:44:32.446635 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:32.447599 kubelet[2761]: E1029 00:44:32.446643 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:44:32.447599 kubelet[2761]: E1029 00:44:32.446798 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:32.447599 kubelet[2761]: W1029 00:44:32.446805 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:32.447599 kubelet[2761]: E1029 00:44:32.446812 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:44:32.447599 kubelet[2761]: E1029 00:44:32.446955 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:32.447846 kubelet[2761]: W1029 00:44:32.446962 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:32.447846 kubelet[2761]: E1029 00:44:32.446970 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:44:32.448776 kubelet[2761]: E1029 00:44:32.448751 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:32.448776 kubelet[2761]: W1029 00:44:32.448769 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:32.448776 kubelet[2761]: E1029 00:44:32.448779 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:44:32.449101 kubelet[2761]: E1029 00:44:32.449078 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:32.449101 kubelet[2761]: W1029 00:44:32.449093 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:32.449101 kubelet[2761]: E1029 00:44:32.449102 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:44:32.449330 kubelet[2761]: E1029 00:44:32.449308 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:32.449330 kubelet[2761]: W1029 00:44:32.449322 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:32.449330 kubelet[2761]: E1029 00:44:32.449330 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:44:32.450602 kubelet[2761]: E1029 00:44:32.449550 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:32.450602 kubelet[2761]: W1029 00:44:32.449563 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:32.450602 kubelet[2761]: E1029 00:44:32.449594 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:44:32.450602 kubelet[2761]: E1029 00:44:32.449844 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:32.450602 kubelet[2761]: W1029 00:44:32.449862 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:32.450602 kubelet[2761]: E1029 00:44:32.449871 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:44:32.450602 kubelet[2761]: E1029 00:44:32.450103 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:32.450602 kubelet[2761]: W1029 00:44:32.450112 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:32.450602 kubelet[2761]: E1029 00:44:32.450121 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:44:32.450602 kubelet[2761]: E1029 00:44:32.450407 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:32.450876 kubelet[2761]: W1029 00:44:32.450415 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:32.450876 kubelet[2761]: E1029 00:44:32.450424 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:44:32.450876 kubelet[2761]: E1029 00:44:32.450836 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:32.450876 kubelet[2761]: W1029 00:44:32.450846 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:32.450876 kubelet[2761]: E1029 00:44:32.450855 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:44:32.452593 kubelet[2761]: E1029 00:44:32.451107 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:32.452593 kubelet[2761]: W1029 00:44:32.451121 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:32.452593 kubelet[2761]: E1029 00:44:32.451129 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:44:32.452593 kubelet[2761]: E1029 00:44:32.451339 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:32.452593 kubelet[2761]: W1029 00:44:32.451346 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:32.452593 kubelet[2761]: E1029 00:44:32.451354 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:44:32.452593 kubelet[2761]: E1029 00:44:32.451867 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:32.452593 kubelet[2761]: W1029 00:44:32.451876 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:32.452593 kubelet[2761]: E1029 00:44:32.451885 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:44:32.452593 kubelet[2761]: E1029 00:44:32.452115 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:32.452900 kubelet[2761]: W1029 00:44:32.452123 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:32.452900 kubelet[2761]: E1029 00:44:32.452130 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:44:32.452900 kubelet[2761]: E1029 00:44:32.452352 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:32.452900 kubelet[2761]: W1029 00:44:32.452360 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:32.452900 kubelet[2761]: E1029 00:44:32.452368 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:44:32.452900 kubelet[2761]: E1029 00:44:32.452750 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:32.452900 kubelet[2761]: W1029 00:44:32.452760 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:32.452900 kubelet[2761]: E1029 00:44:32.452769 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:44:32.453075 kubelet[2761]: E1029 00:44:32.453057 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:32.453098 kubelet[2761]: W1029 00:44:32.453075 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:32.453098 kubelet[2761]: E1029 00:44:32.453085 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:44:32.453699 kubelet[2761]: E1029 00:44:32.453672 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:32.454084 kubelet[2761]: W1029 00:44:32.453965 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:32.454084 kubelet[2761]: E1029 00:44:32.453981 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:44:32.454802 kubelet[2761]: E1029 00:44:32.454704 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:32.454802 kubelet[2761]: W1029 00:44:32.454716 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:32.454802 kubelet[2761]: E1029 00:44:32.454725 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:44:32.455117 kubelet[2761]: E1029 00:44:32.455105 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:32.455326 kubelet[2761]: W1029 00:44:32.455160 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:32.455326 kubelet[2761]: E1029 00:44:32.455172 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 00:44:32.455695 kubelet[2761]: E1029 00:44:32.455565 2761 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 00:44:32.455695 kubelet[2761]: W1029 00:44:32.455639 2761 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 00:44:32.455883 kubelet[2761]: E1029 00:44:32.455648 2761 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 00:44:32.517937 containerd[1613]: time="2025-10-29T00:44:32.517887811Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:44:32.518678 containerd[1613]: time="2025-10-29T00:44:32.518648126Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Oct 29 00:44:32.519790 containerd[1613]: time="2025-10-29T00:44:32.519761056Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:44:32.521796 containerd[1613]: time="2025-10-29T00:44:32.521743918Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:44:32.522232 containerd[1613]: time="2025-10-29T00:44:32.522194168Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.29075946s" Oct 29 00:44:32.522258 containerd[1613]: time="2025-10-29T00:44:32.522231939Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Oct 29 00:44:32.525590 containerd[1613]: time="2025-10-29T00:44:32.525544350Z" level=info msg="CreateContainer within sandbox \"1dcb476ba973f09a9efbe6f2968a0a92deaead7d4864351c711c29ca664362ed\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 29 00:44:32.534783 containerd[1613]: time="2025-10-29T00:44:32.534738409Z" level=info msg="Container d67fa92a2e817d858814e14ec1af8f96ecdbec377f00f1de15a26ee470a6dc0c: CDI devices from CRI Config.CDIDevices: []" Oct 29 00:44:32.544190 containerd[1613]: time="2025-10-29T00:44:32.544138367Z" level=info msg="CreateContainer within sandbox \"1dcb476ba973f09a9efbe6f2968a0a92deaead7d4864351c711c29ca664362ed\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d67fa92a2e817d858814e14ec1af8f96ecdbec377f00f1de15a26ee470a6dc0c\"" Oct 29 00:44:32.544832 containerd[1613]: time="2025-10-29T00:44:32.544791710Z" level=info msg="StartContainer for \"d67fa92a2e817d858814e14ec1af8f96ecdbec377f00f1de15a26ee470a6dc0c\"" Oct 29 00:44:32.546251 containerd[1613]: time="2025-10-29T00:44:32.546211480Z" level=info msg="connecting to shim d67fa92a2e817d858814e14ec1af8f96ecdbec377f00f1de15a26ee470a6dc0c" address="unix:///run/containerd/s/88f3c086a046be3e3a0b11bdeb3791c0a7045c2854978c34dc957dc1a24e3af5" protocol=ttrpc version=3 Oct 29 00:44:32.570741 systemd[1]: Started cri-containerd-d67fa92a2e817d858814e14ec1af8f96ecdbec377f00f1de15a26ee470a6dc0c.scope - libcontainer container 
d67fa92a2e817d858814e14ec1af8f96ecdbec377f00f1de15a26ee470a6dc0c. Oct 29 00:44:32.618888 containerd[1613]: time="2025-10-29T00:44:32.618703174Z" level=info msg="StartContainer for \"d67fa92a2e817d858814e14ec1af8f96ecdbec377f00f1de15a26ee470a6dc0c\" returns successfully" Oct 29 00:44:32.627786 systemd[1]: cri-containerd-d67fa92a2e817d858814e14ec1af8f96ecdbec377f00f1de15a26ee470a6dc0c.scope: Deactivated successfully. Oct 29 00:44:32.630546 containerd[1613]: time="2025-10-29T00:44:32.630502610Z" level=info msg="received exit event container_id:\"d67fa92a2e817d858814e14ec1af8f96ecdbec377f00f1de15a26ee470a6dc0c\" id:\"d67fa92a2e817d858814e14ec1af8f96ecdbec377f00f1de15a26ee470a6dc0c\" pid:3508 exited_at:{seconds:1761698672 nanos:630160474}" Oct 29 00:44:32.630842 containerd[1613]: time="2025-10-29T00:44:32.630801114Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d67fa92a2e817d858814e14ec1af8f96ecdbec377f00f1de15a26ee470a6dc0c\" id:\"d67fa92a2e817d858814e14ec1af8f96ecdbec377f00f1de15a26ee470a6dc0c\" pid:3508 exited_at:{seconds:1761698672 nanos:630160474}" Oct 29 00:44:32.652086 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d67fa92a2e817d858814e14ec1af8f96ecdbec377f00f1de15a26ee470a6dc0c-rootfs.mount: Deactivated successfully. 
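The repeated FlexVolume failures earlier in this log ("Failed to unmarshal output for command: init") all stem from one cause: the driver executable at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds does not exist, so the kubelet's driver call produces empty stdout, and decoding "" as JSON fails. A minimal sketch of that decode step, in Python purely for illustration (the kubelet is Go, where the same failure surfaces as "unexpected end of JSON input"):

```python
import json

def parse_driver_output(output: str) -> dict:
    # A FlexVolume driver is expected to print a JSON status object on
    # stdout for each command (init, mount, ...). A missing or broken
    # binary produces no output at all, and decoding "" raises.
    return json.loads(output)

# A healthy driver answers `init` with a status document like this
# (field names per the FlexVolume convention):
ok = parse_driver_output('{"status": "Success", "capabilities": {"attach": false}}')

# A missing executable yields empty output, so parsing fails -- the
# Python analogue of the kubelet's "unexpected end of JSON input":
try:
    parse_driver_output("")
except json.JSONDecodeError as exc:
    failure = str(exc)
```

The kubelet retries this probe on every plugin-directory scan, which is why the same three-line error group repeats several times per second in the log.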
Oct 29 00:44:33.284657 kubelet[2761]: E1029 00:44:33.284561 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nsdz2" podUID="7c6565b5-13e1-473b-b977-3ab4cab19c9a" Oct 29 00:44:33.352161 kubelet[2761]: E1029 00:44:33.352112 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:44:33.355028 containerd[1613]: time="2025-10-29T00:44:33.354999684Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Oct 29 00:44:35.285789 kubelet[2761]: E1029 00:44:35.285734 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nsdz2" podUID="7c6565b5-13e1-473b-b977-3ab4cab19c9a" Oct 29 00:44:35.870797 containerd[1613]: time="2025-10-29T00:44:35.870737498Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:44:35.871645 containerd[1613]: time="2025-10-29T00:44:35.871600956Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Oct 29 00:44:35.872904 containerd[1613]: time="2025-10-29T00:44:35.872859078Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:44:35.875097 containerd[1613]: time="2025-10-29T00:44:35.875053905Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:44:35.875930 containerd[1613]: time="2025-10-29T00:44:35.875858021Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.520659773s" Oct 29 00:44:35.875930 containerd[1613]: time="2025-10-29T00:44:35.875899949Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Oct 29 00:44:35.879489 containerd[1613]: time="2025-10-29T00:44:35.879458097Z" level=info msg="CreateContainer within sandbox \"1dcb476ba973f09a9efbe6f2968a0a92deaead7d4864351c711c29ca664362ed\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 29 00:44:35.889473 containerd[1613]: time="2025-10-29T00:44:35.889419644Z" level=info msg="Container a70499e2ad6bb5e1899720bf0d3fe102e4aa6b1cc7bc92a764bd64f23a25d548: CDI devices from CRI Config.CDIDevices: []" Oct 29 00:44:35.897306 containerd[1613]: time="2025-10-29T00:44:35.897265383Z" level=info msg="CreateContainer within sandbox \"1dcb476ba973f09a9efbe6f2968a0a92deaead7d4864351c711c29ca664362ed\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a70499e2ad6bb5e1899720bf0d3fe102e4aa6b1cc7bc92a764bd64f23a25d548\"" Oct 29 00:44:35.897923 containerd[1613]: time="2025-10-29T00:44:35.897859804Z" level=info msg="StartContainer for \"a70499e2ad6bb5e1899720bf0d3fe102e4aa6b1cc7bc92a764bd64f23a25d548\"" Oct 29 00:44:35.899310 containerd[1613]: time="2025-10-29T00:44:35.899271204Z" level=info msg="connecting to shim 
a70499e2ad6bb5e1899720bf0d3fe102e4aa6b1cc7bc92a764bd64f23a25d548" address="unix:///run/containerd/s/88f3c086a046be3e3a0b11bdeb3791c0a7045c2854978c34dc957dc1a24e3af5" protocol=ttrpc version=3 Oct 29 00:44:35.927850 systemd[1]: Started cri-containerd-a70499e2ad6bb5e1899720bf0d3fe102e4aa6b1cc7bc92a764bd64f23a25d548.scope - libcontainer container a70499e2ad6bb5e1899720bf0d3fe102e4aa6b1cc7bc92a764bd64f23a25d548. Oct 29 00:44:35.975888 containerd[1613]: time="2025-10-29T00:44:35.975823554Z" level=info msg="StartContainer for \"a70499e2ad6bb5e1899720bf0d3fe102e4aa6b1cc7bc92a764bd64f23a25d548\" returns successfully" Oct 29 00:44:36.488190 kubelet[2761]: E1029 00:44:36.488157 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:44:37.122732 systemd[1]: cri-containerd-a70499e2ad6bb5e1899720bf0d3fe102e4aa6b1cc7bc92a764bd64f23a25d548.scope: Deactivated successfully. Oct 29 00:44:37.123159 systemd[1]: cri-containerd-a70499e2ad6bb5e1899720bf0d3fe102e4aa6b1cc7bc92a764bd64f23a25d548.scope: Consumed 556ms CPU time, 179M memory peak, 4.3M read from disk, 171.3M written to disk. 
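The recurring dns.go warning above ("Nameserver limits exceeded ... the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8") reflects the kubelet capping the nameservers it propagates into pod resolv.conf files. Judging by the log, the host lists more than three servers and only the first three are applied, matching the classic glibc resolver limit. A small sketch of that truncation, assuming a cap of 3:

```python
# Assumed cap of 3, matching glibc's MAXNS and the three servers the
# log reports as actually applied; the function name is illustrative,
# not the kubelet's own.
MAX_NAMESERVERS = 3

def apply_nameserver_limit(nameservers: list) -> tuple:
    """Return the nameservers actually applied and whether any were dropped."""
    exceeded = len(nameservers) > MAX_NAMESERVERS
    return nameservers[:MAX_NAMESERVERS], exceeded

# With a fourth server configured on the host, the surplus is omitted
# and the warning fires:
applied, truncated = apply_nameserver_limit(
    ["1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"]  # 9.9.9.9 is hypothetical
)
```

The warning repeats on every pod sync because the host resolv.conf is re-read each time, not because the condition is getting worse.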
Oct 29 00:44:37.124146 containerd[1613]: time="2025-10-29T00:44:37.124098425Z" level=info msg="received exit event container_id:\"a70499e2ad6bb5e1899720bf0d3fe102e4aa6b1cc7bc92a764bd64f23a25d548\" id:\"a70499e2ad6bb5e1899720bf0d3fe102e4aa6b1cc7bc92a764bd64f23a25d548\" pid:3567 exited_at:{seconds:1761698677 nanos:122425474}" Oct 29 00:44:37.125818 containerd[1613]: time="2025-10-29T00:44:37.125790834Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a70499e2ad6bb5e1899720bf0d3fe102e4aa6b1cc7bc92a764bd64f23a25d548\" id:\"a70499e2ad6bb5e1899720bf0d3fe102e4aa6b1cc7bc92a764bd64f23a25d548\" pid:3567 exited_at:{seconds:1761698677 nanos:122425474}" Oct 29 00:44:37.130553 containerd[1613]: time="2025-10-29T00:44:37.130517187Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 29 00:44:37.146003 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a70499e2ad6bb5e1899720bf0d3fe102e4aa6b1cc7bc92a764bd64f23a25d548-rootfs.mount: Deactivated successfully. Oct 29 00:44:37.217188 kubelet[2761]: I1029 00:44:37.217147 2761 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Oct 29 00:44:37.290354 systemd[1]: Created slice kubepods-besteffort-pod7c6565b5_13e1_473b_b977_3ab4cab19c9a.slice - libcontainer container kubepods-besteffort-pod7c6565b5_13e1_473b_b977_3ab4cab19c9a.slice. Oct 29 00:44:37.449989 containerd[1613]: time="2025-10-29T00:44:37.449829994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nsdz2,Uid:7c6565b5-13e1-473b-b977-3ab4cab19c9a,Namespace:calico-system,Attempt:0,}" Oct 29 00:44:37.470463 systemd[1]: Created slice kubepods-besteffort-pod13741636_ed02_4263_9179_b37fa6d45218.slice - libcontainer container kubepods-besteffort-pod13741636_ed02_4263_9179_b37fa6d45218.slice. 
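The "failed to reload cni configuration" error above is benign at this stage: the write event was for /etc/cni/net.d/calico-kubeconfig, which is a credentials file, not a network config, so the directory still contains nothing loadable. containerd discovers CNI networks by filename extension; a sketch of that filter, assuming the conventional .conf/.conflist/.json extension set (the real loader also validates file contents, which this sketch omits):

```python
import pathlib
import tempfile

# Assumed extension set for CNI config discovery; calico-kubeconfig has
# no extension and so matches none of them.
CNI_EXTENSIONS = {".conf", ".conflist", ".json"}

def find_cni_configs(conf_dir: str) -> list:
    """Return candidate CNI config filenames, as a loader scanning the dir would."""
    root = pathlib.Path(conf_dir)
    if not root.is_dir():
        return []
    return sorted(p.name for p in root.iterdir() if p.suffix in CNI_EXTENSIONS)

with tempfile.TemporaryDirectory() as d:
    # The install-cni container writes the kubeconfig first ...
    (pathlib.Path(d) / "calico-kubeconfig").write_text("kubeconfig contents\n")
    before = find_cni_configs(d)  # still no loadable network config
    # ... and only later does calico/node drop the actual network config
    # (filename here is the conventional calico one, shown as an example):
    (pathlib.Path(d) / "10-calico.conflist").write_text('{"name": "k8s-pod-network"}')
    after = find_cni_configs(d)
```

Until that conflist appears, the node stays NetworkReady=false and every sandbox creation below fails.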
Oct 29 00:44:37.482354 systemd[1]: Created slice kubepods-besteffort-pod6bbe49bd_5153_4d3e_aaf8_870450eb6a27.slice - libcontainer container kubepods-besteffort-pod6bbe49bd_5153_4d3e_aaf8_870450eb6a27.slice. Oct 29 00:44:37.489644 systemd[1]: Created slice kubepods-burstable-podf8a0a8b3_e8df_4562_92d8_5506f107f647.slice - libcontainer container kubepods-burstable-podf8a0a8b3_e8df_4562_92d8_5506f107f647.slice. Oct 29 00:44:37.500675 systemd[1]: Created slice kubepods-besteffort-poda2ccb739_fac3_4c48_b3e6_829701cb5ed8.slice - libcontainer container kubepods-besteffort-poda2ccb739_fac3_4c48_b3e6_829701cb5ed8.slice. Oct 29 00:44:37.509817 kubelet[2761]: E1029 00:44:37.509013 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:44:37.514050 containerd[1613]: time="2025-10-29T00:44:37.511289471Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Oct 29 00:44:37.513886 systemd[1]: Created slice kubepods-burstable-pod934dc817_c3aa_48d4_be1a_4f1c33bc437d.slice - libcontainer container kubepods-burstable-pod934dc817_c3aa_48d4_be1a_4f1c33bc437d.slice. 
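The slice names in the "Created slice" lines above follow a fixed scheme: the kubelet embeds the pod's QoS class and UID into a systemd slice name, rewriting the UID's dashes to underscores because "-" is a hierarchy separator in systemd unit names. A sketch of that mapping for the non-guaranteed QoS classes seen in this log:

```python
def pod_slice_name(qos_class: str, pod_uid: str) -> str:
    # systemd treats "-" in a slice name as nesting (a-b.slice lives
    # under a.slice), so the pod UID's dashes become underscores.
    # Covers the "besteffort"/"burstable" forms shown in the log.
    return "kubepods-{0}-pod{1}.slice".format(qos_class, pod_uid.replace("-", "_"))

# Reproduces the csi-node-driver pod's slice from the log:
name = pod_slice_name("besteffort", "7c6565b5-13e1-473b-b977-3ab4cab19c9a")
```

This also explains the matching run-netns mount-unit names later in the log, where the same escaping appears as \x2d sequences.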
Oct 29 00:44:37.516410 kubelet[2761]: I1029 00:44:37.516358 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqczh\" (UniqueName: \"kubernetes.io/projected/934dc817-c3aa-48d4-be1a-4f1c33bc437d-kube-api-access-zqczh\") pod \"coredns-66bc5c9577-zmpf8\" (UID: \"934dc817-c3aa-48d4-be1a-4f1c33bc437d\") " pod="kube-system/coredns-66bc5c9577-zmpf8" Oct 29 00:44:37.516501 kubelet[2761]: I1029 00:44:37.516415 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/225130ab-318f-4752-a3f6-b2cc6751e084-goldmane-key-pair\") pod \"goldmane-7c778bb748-jzb5c\" (UID: \"225130ab-318f-4752-a3f6-b2cc6751e084\") " pod="calico-system/goldmane-7c778bb748-jzb5c" Oct 29 00:44:37.516501 kubelet[2761]: I1029 00:44:37.516437 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/934dc817-c3aa-48d4-be1a-4f1c33bc437d-config-volume\") pod \"coredns-66bc5c9577-zmpf8\" (UID: \"934dc817-c3aa-48d4-be1a-4f1c33bc437d\") " pod="kube-system/coredns-66bc5c9577-zmpf8" Oct 29 00:44:37.516501 kubelet[2761]: I1029 00:44:37.516459 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6mhr\" (UniqueName: \"kubernetes.io/projected/fc5d07a9-e55f-4ffc-bf62-2522098f23ea-kube-api-access-b6mhr\") pod \"calico-apiserver-6d776986cb-zqjs2\" (UID: \"fc5d07a9-e55f-4ffc-bf62-2522098f23ea\") " pod="calico-apiserver/calico-apiserver-6d776986cb-zqjs2" Oct 29 00:44:37.516501 kubelet[2761]: I1029 00:44:37.516476 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/225130ab-318f-4752-a3f6-b2cc6751e084-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-jzb5c\" (UID: 
\"225130ab-318f-4752-a3f6-b2cc6751e084\") " pod="calico-system/goldmane-7c778bb748-jzb5c" Oct 29 00:44:37.516501 kubelet[2761]: I1029 00:44:37.516495 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6rd9\" (UniqueName: \"kubernetes.io/projected/225130ab-318f-4752-a3f6-b2cc6751e084-kube-api-access-x6rd9\") pod \"goldmane-7c778bb748-jzb5c\" (UID: \"225130ab-318f-4752-a3f6-b2cc6751e084\") " pod="calico-system/goldmane-7c778bb748-jzb5c" Oct 29 00:44:37.516659 kubelet[2761]: I1029 00:44:37.516515 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hm5jg\" (UniqueName: \"kubernetes.io/projected/6bbe49bd-5153-4d3e-aaf8-870450eb6a27-kube-api-access-hm5jg\") pod \"whisker-7b8bcbc6f-p7j9q\" (UID: \"6bbe49bd-5153-4d3e-aaf8-870450eb6a27\") " pod="calico-system/whisker-7b8bcbc6f-p7j9q" Oct 29 00:44:37.516659 kubelet[2761]: I1029 00:44:37.516535 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6bbe49bd-5153-4d3e-aaf8-870450eb6a27-whisker-backend-key-pair\") pod \"whisker-7b8bcbc6f-p7j9q\" (UID: \"6bbe49bd-5153-4d3e-aaf8-870450eb6a27\") " pod="calico-system/whisker-7b8bcbc6f-p7j9q" Oct 29 00:44:37.516659 kubelet[2761]: I1029 00:44:37.516553 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/fc5d07a9-e55f-4ffc-bf62-2522098f23ea-calico-apiserver-certs\") pod \"calico-apiserver-6d776986cb-zqjs2\" (UID: \"fc5d07a9-e55f-4ffc-bf62-2522098f23ea\") " pod="calico-apiserver/calico-apiserver-6d776986cb-zqjs2" Oct 29 00:44:37.516659 kubelet[2761]: I1029 00:44:37.516597 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/225130ab-318f-4752-a3f6-b2cc6751e084-config\") pod \"goldmane-7c778bb748-jzb5c\" (UID: \"225130ab-318f-4752-a3f6-b2cc6751e084\") " pod="calico-system/goldmane-7c778bb748-jzb5c" Oct 29 00:44:37.516659 kubelet[2761]: I1029 00:44:37.516617 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2xng\" (UniqueName: \"kubernetes.io/projected/13741636-ed02-4263-9179-b37fa6d45218-kube-api-access-f2xng\") pod \"calico-kube-controllers-557bfcb88d-x5jf7\" (UID: \"13741636-ed02-4263-9179-b37fa6d45218\") " pod="calico-system/calico-kube-controllers-557bfcb88d-x5jf7" Oct 29 00:44:37.516772 kubelet[2761]: I1029 00:44:37.516637 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6bbe49bd-5153-4d3e-aaf8-870450eb6a27-whisker-ca-bundle\") pod \"whisker-7b8bcbc6f-p7j9q\" (UID: \"6bbe49bd-5153-4d3e-aaf8-870450eb6a27\") " pod="calico-system/whisker-7b8bcbc6f-p7j9q" Oct 29 00:44:37.516772 kubelet[2761]: I1029 00:44:37.516663 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f8a0a8b3-e8df-4562-92d8-5506f107f647-config-volume\") pod \"coredns-66bc5c9577-j799m\" (UID: \"f8a0a8b3-e8df-4562-92d8-5506f107f647\") " pod="kube-system/coredns-66bc5c9577-j799m" Oct 29 00:44:37.516772 kubelet[2761]: I1029 00:44:37.516694 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9prx\" (UniqueName: \"kubernetes.io/projected/f8a0a8b3-e8df-4562-92d8-5506f107f647-kube-api-access-x9prx\") pod \"coredns-66bc5c9577-j799m\" (UID: \"f8a0a8b3-e8df-4562-92d8-5506f107f647\") " pod="kube-system/coredns-66bc5c9577-j799m" Oct 29 00:44:37.516772 kubelet[2761]: I1029 00:44:37.516713 2761 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpbng\" (UniqueName: \"kubernetes.io/projected/a2ccb739-fac3-4c48-b3e6-829701cb5ed8-kube-api-access-dpbng\") pod \"calico-apiserver-6d776986cb-mkzq4\" (UID: \"a2ccb739-fac3-4c48-b3e6-829701cb5ed8\") " pod="calico-apiserver/calico-apiserver-6d776986cb-mkzq4" Oct 29 00:44:37.516772 kubelet[2761]: I1029 00:44:37.516735 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13741636-ed02-4263-9179-b37fa6d45218-tigera-ca-bundle\") pod \"calico-kube-controllers-557bfcb88d-x5jf7\" (UID: \"13741636-ed02-4263-9179-b37fa6d45218\") " pod="calico-system/calico-kube-controllers-557bfcb88d-x5jf7" Oct 29 00:44:37.516894 kubelet[2761]: I1029 00:44:37.516753 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a2ccb739-fac3-4c48-b3e6-829701cb5ed8-calico-apiserver-certs\") pod \"calico-apiserver-6d776986cb-mkzq4\" (UID: \"a2ccb739-fac3-4c48-b3e6-829701cb5ed8\") " pod="calico-apiserver/calico-apiserver-6d776986cb-mkzq4" Oct 29 00:44:37.531614 systemd[1]: Created slice kubepods-besteffort-podfc5d07a9_e55f_4ffc_bf62_2522098f23ea.slice - libcontainer container kubepods-besteffort-podfc5d07a9_e55f_4ffc_bf62_2522098f23ea.slice. Oct 29 00:44:37.538222 systemd[1]: Created slice kubepods-besteffort-pod225130ab_318f_4752_a3f6_b2cc6751e084.slice - libcontainer container kubepods-besteffort-pod225130ab_318f_4752_a3f6_b2cc6751e084.slice. 
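Every sandbox failure that follows repeats the same root cause: the calico CNI plugin stats /var/lib/calico/nodename, a file the calico/node container writes once it is up, and fails until it exists. A minimal Python illustration of that gate (the real plugin is Go; the error text mirrors the log, and the node name written below is hypothetical):

```python
import os
import tempfile

def read_calico_nodename(path: str) -> str:
    """Read the node name the calico CNI plugin needs before any sandbox setup."""
    if not os.path.exists(path):
        # Mirrors the error repeated throughout the log.
        raise FileNotFoundError(
            "stat {0}: no such file or directory: check that the "
            "calico/node container is running and has mounted "
            "/var/lib/calico/".format(path)
        )
    with open(path) as fh:
        return fh.read().strip()

# Before calico/node starts, the file is absent and every RunPodSandbox
# attempt fails; once written, setup can proceed.
with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "nodename")
    try:
        read_calico_nodename(path)
        error_text = None
    except FileNotFoundError as exc:
        error_text = str(exc)
    with open(path, "w") as fh:
        fh.write("worker-node-1\n")  # hypothetical node name, for illustration
    nodename = read_calico_nodename(path)
```

This is why the kubelet's CreatePodSandbox errors below are transient ordering noise rather than a configuration fault: they clear on their own once calico/node initializes.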
Oct 29 00:44:37.572648 containerd[1613]: time="2025-10-29T00:44:37.572597855Z" level=error msg="Failed to destroy network for sandbox \"33cd808e60f28d510d4b545bd5b8f3f8d3ef19bf88c7264a8c8953d9f4b15b79\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:44:37.574849 systemd[1]: run-netns-cni\x2db34e1349\x2d13c6\x2ddd7e\x2da3aa\x2d6bda75b6b3e5.mount: Deactivated successfully. Oct 29 00:44:37.575154 containerd[1613]: time="2025-10-29T00:44:37.575093016Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nsdz2,Uid:7c6565b5-13e1-473b-b977-3ab4cab19c9a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"33cd808e60f28d510d4b545bd5b8f3f8d3ef19bf88c7264a8c8953d9f4b15b79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:44:37.575430 kubelet[2761]: E1029 00:44:37.575379 2761 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33cd808e60f28d510d4b545bd5b8f3f8d3ef19bf88c7264a8c8953d9f4b15b79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:44:37.575550 kubelet[2761]: E1029 00:44:37.575446 2761 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33cd808e60f28d510d4b545bd5b8f3f8d3ef19bf88c7264a8c8953d9f4b15b79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/csi-node-driver-nsdz2" Oct 29 00:44:37.575550 kubelet[2761]: E1029 00:44:37.575465 2761 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33cd808e60f28d510d4b545bd5b8f3f8d3ef19bf88c7264a8c8953d9f4b15b79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nsdz2" Oct 29 00:44:37.575550 kubelet[2761]: E1029 00:44:37.575522 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-nsdz2_calico-system(7c6565b5-13e1-473b-b977-3ab4cab19c9a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-nsdz2_calico-system(7c6565b5-13e1-473b-b977-3ab4cab19c9a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"33cd808e60f28d510d4b545bd5b8f3f8d3ef19bf88c7264a8c8953d9f4b15b79\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-nsdz2" podUID="7c6565b5-13e1-473b-b977-3ab4cab19c9a" Oct 29 00:44:37.781121 containerd[1613]: time="2025-10-29T00:44:37.780988695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-557bfcb88d-x5jf7,Uid:13741636-ed02-4263-9179-b37fa6d45218,Namespace:calico-system,Attempt:0,}" Oct 29 00:44:37.789677 containerd[1613]: time="2025-10-29T00:44:37.789226594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b8bcbc6f-p7j9q,Uid:6bbe49bd-5153-4d3e-aaf8-870450eb6a27,Namespace:calico-system,Attempt:0,}" Oct 29 00:44:37.800718 kubelet[2761]: E1029 00:44:37.800354 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:44:37.802943 containerd[1613]: time="2025-10-29T00:44:37.802733508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-j799m,Uid:f8a0a8b3-e8df-4562-92d8-5506f107f647,Namespace:kube-system,Attempt:0,}" Oct 29 00:44:37.810609 containerd[1613]: time="2025-10-29T00:44:37.810280756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d776986cb-mkzq4,Uid:a2ccb739-fac3-4c48-b3e6-829701cb5ed8,Namespace:calico-apiserver,Attempt:0,}" Oct 29 00:44:37.819949 kubelet[2761]: E1029 00:44:37.819900 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:44:37.822522 containerd[1613]: time="2025-10-29T00:44:37.822481700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zmpf8,Uid:934dc817-c3aa-48d4-be1a-4f1c33bc437d,Namespace:kube-system,Attempt:0,}" Oct 29 00:44:37.841646 containerd[1613]: time="2025-10-29T00:44:37.841593072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d776986cb-zqjs2,Uid:fc5d07a9-e55f-4ffc-bf62-2522098f23ea,Namespace:calico-apiserver,Attempt:0,}" Oct 29 00:44:37.845870 containerd[1613]: time="2025-10-29T00:44:37.845822660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-jzb5c,Uid:225130ab-318f-4752-a3f6-b2cc6751e084,Namespace:calico-system,Attempt:0,}" Oct 29 00:44:37.854518 containerd[1613]: time="2025-10-29T00:44:37.854461354Z" level=error msg="Failed to destroy network for sandbox \"13ae0085afb669ec706c27e46beb08d99092ca94481cc5b93edd2e80dd96a80d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:44:37.868593 containerd[1613]: time="2025-10-29T00:44:37.868484601Z" 
level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-557bfcb88d-x5jf7,Uid:13741636-ed02-4263-9179-b37fa6d45218,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"13ae0085afb669ec706c27e46beb08d99092ca94481cc5b93edd2e80dd96a80d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:44:37.868975 kubelet[2761]: E1029 00:44:37.868922 2761 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13ae0085afb669ec706c27e46beb08d99092ca94481cc5b93edd2e80dd96a80d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:44:37.869037 kubelet[2761]: E1029 00:44:37.868987 2761 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13ae0085afb669ec706c27e46beb08d99092ca94481cc5b93edd2e80dd96a80d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-557bfcb88d-x5jf7" Oct 29 00:44:37.869037 kubelet[2761]: E1029 00:44:37.869009 2761 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13ae0085afb669ec706c27e46beb08d99092ca94481cc5b93edd2e80dd96a80d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-557bfcb88d-x5jf7" Oct 29 00:44:37.869093 
kubelet[2761]: E1029 00:44:37.869059 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-557bfcb88d-x5jf7_calico-system(13741636-ed02-4263-9179-b37fa6d45218)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-557bfcb88d-x5jf7_calico-system(13741636-ed02-4263-9179-b37fa6d45218)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"13ae0085afb669ec706c27e46beb08d99092ca94481cc5b93edd2e80dd96a80d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-557bfcb88d-x5jf7" podUID="13741636-ed02-4263-9179-b37fa6d45218" Oct 29 00:44:37.906466 containerd[1613]: time="2025-10-29T00:44:37.906415416Z" level=error msg="Failed to destroy network for sandbox \"2db5248916f5a8c5aa89e6041a32d5e36d033cb51912a3211c8c119292d2e336\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:44:37.908033 containerd[1613]: time="2025-10-29T00:44:37.908001554Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-j799m,Uid:f8a0a8b3-e8df-4562-92d8-5506f107f647,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2db5248916f5a8c5aa89e6041a32d5e36d033cb51912a3211c8c119292d2e336\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:44:37.908993 kubelet[2761]: E1029 00:44:37.908426 2761 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"2db5248916f5a8c5aa89e6041a32d5e36d033cb51912a3211c8c119292d2e336\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:44:37.909479 kubelet[2761]: E1029 00:44:37.909455 2761 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2db5248916f5a8c5aa89e6041a32d5e36d033cb51912a3211c8c119292d2e336\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-j799m" Oct 29 00:44:37.909526 kubelet[2761]: E1029 00:44:37.909484 2761 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2db5248916f5a8c5aa89e6041a32d5e36d033cb51912a3211c8c119292d2e336\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-j799m" Oct 29 00:44:37.910191 kubelet[2761]: E1029 00:44:37.910147 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-j799m_kube-system(f8a0a8b3-e8df-4562-92d8-5506f107f647)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-j799m_kube-system(f8a0a8b3-e8df-4562-92d8-5506f107f647)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2db5248916f5a8c5aa89e6041a32d5e36d033cb51912a3211c8c119292d2e336\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-j799m" 
podUID="f8a0a8b3-e8df-4562-92d8-5506f107f647" Oct 29 00:44:37.915454 containerd[1613]: time="2025-10-29T00:44:37.915401153Z" level=error msg="Failed to destroy network for sandbox \"e36411467c0985010b28ac38d6d5838b88c827276958f5db6b30b4668276d5e6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:44:37.919209 containerd[1613]: time="2025-10-29T00:44:37.918954066Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d776986cb-mkzq4,Uid:a2ccb739-fac3-4c48-b3e6-829701cb5ed8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e36411467c0985010b28ac38d6d5838b88c827276958f5db6b30b4668276d5e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:44:37.919453 kubelet[2761]: E1029 00:44:37.919366 2761 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e36411467c0985010b28ac38d6d5838b88c827276958f5db6b30b4668276d5e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:44:37.919510 kubelet[2761]: E1029 00:44:37.919474 2761 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e36411467c0985010b28ac38d6d5838b88c827276958f5db6b30b4668276d5e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d776986cb-mkzq4" Oct 29 00:44:37.919910 
kubelet[2761]: E1029 00:44:37.919507 2761 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e36411467c0985010b28ac38d6d5838b88c827276958f5db6b30b4668276d5e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d776986cb-mkzq4" Oct 29 00:44:37.919910 kubelet[2761]: E1029 00:44:37.919560 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6d776986cb-mkzq4_calico-apiserver(a2ccb739-fac3-4c48-b3e6-829701cb5ed8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6d776986cb-mkzq4_calico-apiserver(a2ccb739-fac3-4c48-b3e6-829701cb5ed8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e36411467c0985010b28ac38d6d5838b88c827276958f5db6b30b4668276d5e6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d776986cb-mkzq4" podUID="a2ccb739-fac3-4c48-b3e6-829701cb5ed8" Oct 29 00:44:37.930731 containerd[1613]: time="2025-10-29T00:44:37.930683422Z" level=error msg="Failed to destroy network for sandbox \"b7720f1291d8fd2022e7d827708d9b767b60ed43fb21c626906571c0b85996de\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:44:37.932077 containerd[1613]: time="2025-10-29T00:44:37.932024809Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b8bcbc6f-p7j9q,Uid:6bbe49bd-5153-4d3e-aaf8-870450eb6a27,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"b7720f1291d8fd2022e7d827708d9b767b60ed43fb21c626906571c0b85996de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:44:37.932285 kubelet[2761]: E1029 00:44:37.932255 2761 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7720f1291d8fd2022e7d827708d9b767b60ed43fb21c626906571c0b85996de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:44:37.932380 kubelet[2761]: E1029 00:44:37.932311 2761 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7720f1291d8fd2022e7d827708d9b767b60ed43fb21c626906571c0b85996de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7b8bcbc6f-p7j9q" Oct 29 00:44:37.932380 kubelet[2761]: E1029 00:44:37.932334 2761 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7720f1291d8fd2022e7d827708d9b767b60ed43fb21c626906571c0b85996de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7b8bcbc6f-p7j9q" Oct 29 00:44:37.932380 kubelet[2761]: E1029 00:44:37.932388 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7b8bcbc6f-p7j9q_calico-system(6bbe49bd-5153-4d3e-aaf8-870450eb6a27)\" with CreatePodSandboxError: \"Failed to 
create sandbox for pod \\\"whisker-7b8bcbc6f-p7j9q_calico-system(6bbe49bd-5153-4d3e-aaf8-870450eb6a27)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b7720f1291d8fd2022e7d827708d9b767b60ed43fb21c626906571c0b85996de\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7b8bcbc6f-p7j9q" podUID="6bbe49bd-5153-4d3e-aaf8-870450eb6a27" Oct 29 00:44:37.952821 containerd[1613]: time="2025-10-29T00:44:37.952769960Z" level=error msg="Failed to destroy network for sandbox \"010fc5e32719cb48e054096ed41dc527b80f2408e2331808329089b0a9e9a013\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:44:37.953391 containerd[1613]: time="2025-10-29T00:44:37.953369449Z" level=error msg="Failed to destroy network for sandbox \"f6b03ab1d8dc6002ae23ea9ecdab5259da5df12833f2c9905d20f063e663f95e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:44:37.954765 containerd[1613]: time="2025-10-29T00:44:37.954717328Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zmpf8,Uid:934dc817-c3aa-48d4-be1a-4f1c33bc437d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"010fc5e32719cb48e054096ed41dc527b80f2408e2331808329089b0a9e9a013\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:44:37.955095 kubelet[2761]: E1029 00:44:37.955035 2761 log.go:32] "RunPodSandbox from runtime service 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"010fc5e32719cb48e054096ed41dc527b80f2408e2331808329089b0a9e9a013\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:44:37.955172 kubelet[2761]: E1029 00:44:37.955100 2761 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"010fc5e32719cb48e054096ed41dc527b80f2408e2331808329089b0a9e9a013\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-zmpf8" Oct 29 00:44:37.955172 kubelet[2761]: E1029 00:44:37.955127 2761 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"010fc5e32719cb48e054096ed41dc527b80f2408e2331808329089b0a9e9a013\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-zmpf8" Oct 29 00:44:37.955224 kubelet[2761]: E1029 00:44:37.955177 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-zmpf8_kube-system(934dc817-c3aa-48d4-be1a-4f1c33bc437d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-zmpf8_kube-system(934dc817-c3aa-48d4-be1a-4f1c33bc437d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"010fc5e32719cb48e054096ed41dc527b80f2408e2331808329089b0a9e9a013\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-zmpf8" podUID="934dc817-c3aa-48d4-be1a-4f1c33bc437d" Oct 29 00:44:37.956322 containerd[1613]: time="2025-10-29T00:44:37.956290270Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d776986cb-zqjs2,Uid:fc5d07a9-e55f-4ffc-bf62-2522098f23ea,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6b03ab1d8dc6002ae23ea9ecdab5259da5df12833f2c9905d20f063e663f95e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:44:37.956817 kubelet[2761]: E1029 00:44:37.956752 2761 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6b03ab1d8dc6002ae23ea9ecdab5259da5df12833f2c9905d20f063e663f95e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:44:37.956900 kubelet[2761]: E1029 00:44:37.956837 2761 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6b03ab1d8dc6002ae23ea9ecdab5259da5df12833f2c9905d20f063e663f95e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d776986cb-zqjs2" Oct 29 00:44:37.956900 kubelet[2761]: E1029 00:44:37.956861 2761 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6b03ab1d8dc6002ae23ea9ecdab5259da5df12833f2c9905d20f063e663f95e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d776986cb-zqjs2" Oct 29 00:44:37.956992 kubelet[2761]: E1029 00:44:37.956922 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6d776986cb-zqjs2_calico-apiserver(fc5d07a9-e55f-4ffc-bf62-2522098f23ea)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6d776986cb-zqjs2_calico-apiserver(fc5d07a9-e55f-4ffc-bf62-2522098f23ea)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f6b03ab1d8dc6002ae23ea9ecdab5259da5df12833f2c9905d20f063e663f95e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d776986cb-zqjs2" podUID="fc5d07a9-e55f-4ffc-bf62-2522098f23ea" Oct 29 00:44:37.958683 containerd[1613]: time="2025-10-29T00:44:37.958638835Z" level=error msg="Failed to destroy network for sandbox \"35042509f5bd75fd00e9b104e8ece458daa34b48cfab895cdc5ddecd47b4d40f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:44:37.960452 containerd[1613]: time="2025-10-29T00:44:37.960403189Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-jzb5c,Uid:225130ab-318f-4752-a3f6-b2cc6751e084,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"35042509f5bd75fd00e9b104e8ece458daa34b48cfab895cdc5ddecd47b4d40f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:44:37.960626 
kubelet[2761]: E1029 00:44:37.960599 2761 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35042509f5bd75fd00e9b104e8ece458daa34b48cfab895cdc5ddecd47b4d40f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 00:44:37.960675 kubelet[2761]: E1029 00:44:37.960638 2761 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35042509f5bd75fd00e9b104e8ece458daa34b48cfab895cdc5ddecd47b4d40f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-jzb5c" Oct 29 00:44:37.960675 kubelet[2761]: E1029 00:44:37.960654 2761 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35042509f5bd75fd00e9b104e8ece458daa34b48cfab895cdc5ddecd47b4d40f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-jzb5c" Oct 29 00:44:37.960730 kubelet[2761]: E1029 00:44:37.960697 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-jzb5c_calico-system(225130ab-318f-4752-a3f6-b2cc6751e084)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-jzb5c_calico-system(225130ab-318f-4752-a3f6-b2cc6751e084)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"35042509f5bd75fd00e9b104e8ece458daa34b48cfab895cdc5ddecd47b4d40f\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-jzb5c" podUID="225130ab-318f-4752-a3f6-b2cc6751e084" Oct 29 00:44:44.064154 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3332706194.mount: Deactivated successfully. Oct 29 00:44:44.196730 containerd[1613]: time="2025-10-29T00:44:44.196660841Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:44:44.211832 containerd[1613]: time="2025-10-29T00:44:44.197391936Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Oct 29 00:44:44.211832 containerd[1613]: time="2025-10-29T00:44:44.198417814Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 00:44:44.211951 containerd[1613]: time="2025-10-29T00:44:44.200643300Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 6.688705497s" Oct 29 00:44:44.211981 containerd[1613]: time="2025-10-29T00:44:44.211953271Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Oct 29 00:44:44.212359 containerd[1613]: time="2025-10-29T00:44:44.212300955Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 
00:44:44.228845 containerd[1613]: time="2025-10-29T00:44:44.228791790Z" level=info msg="CreateContainer within sandbox \"1dcb476ba973f09a9efbe6f2968a0a92deaead7d4864351c711c29ca664362ed\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 29 00:44:44.236397 containerd[1613]: time="2025-10-29T00:44:44.236365454Z" level=info msg="Container 5a9874c8e012706d55abea68370e001a4851b8e4e3be9a66ff3f656fa47e379b: CDI devices from CRI Config.CDIDevices: []" Oct 29 00:44:44.246483 containerd[1613]: time="2025-10-29T00:44:44.246433900Z" level=info msg="CreateContainer within sandbox \"1dcb476ba973f09a9efbe6f2968a0a92deaead7d4864351c711c29ca664362ed\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"5a9874c8e012706d55abea68370e001a4851b8e4e3be9a66ff3f656fa47e379b\"" Oct 29 00:44:44.247161 containerd[1613]: time="2025-10-29T00:44:44.247122415Z" level=info msg="StartContainer for \"5a9874c8e012706d55abea68370e001a4851b8e4e3be9a66ff3f656fa47e379b\"" Oct 29 00:44:44.249530 containerd[1613]: time="2025-10-29T00:44:44.248453509Z" level=info msg="connecting to shim 5a9874c8e012706d55abea68370e001a4851b8e4e3be9a66ff3f656fa47e379b" address="unix:///run/containerd/s/88f3c086a046be3e3a0b11bdeb3791c0a7045c2854978c34dc957dc1a24e3af5" protocol=ttrpc version=3 Oct 29 00:44:44.273790 systemd[1]: Started cri-containerd-5a9874c8e012706d55abea68370e001a4851b8e4e3be9a66ff3f656fa47e379b.scope - libcontainer container 5a9874c8e012706d55abea68370e001a4851b8e4e3be9a66ff3f656fa47e379b. Oct 29 00:44:44.318324 containerd[1613]: time="2025-10-29T00:44:44.318212968Z" level=info msg="StartContainer for \"5a9874c8e012706d55abea68370e001a4851b8e4e3be9a66ff3f656fa47e379b\" returns successfully" Oct 29 00:44:44.395628 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 29 00:44:44.396650 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Oct 29 00:44:44.528418 kubelet[2761]: E1029 00:44:44.528376 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:44:44.560487 kubelet[2761]: I1029 00:44:44.560438 2761 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm5jg\" (UniqueName: \"kubernetes.io/projected/6bbe49bd-5153-4d3e-aaf8-870450eb6a27-kube-api-access-hm5jg\") pod \"6bbe49bd-5153-4d3e-aaf8-870450eb6a27\" (UID: \"6bbe49bd-5153-4d3e-aaf8-870450eb6a27\") " Oct 29 00:44:44.560487 kubelet[2761]: I1029 00:44:44.560477 2761 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6bbe49bd-5153-4d3e-aaf8-870450eb6a27-whisker-backend-key-pair\") pod \"6bbe49bd-5153-4d3e-aaf8-870450eb6a27\" (UID: \"6bbe49bd-5153-4d3e-aaf8-870450eb6a27\") " Oct 29 00:44:44.560487 kubelet[2761]: I1029 00:44:44.560501 2761 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6bbe49bd-5153-4d3e-aaf8-870450eb6a27-whisker-ca-bundle\") pod \"6bbe49bd-5153-4d3e-aaf8-870450eb6a27\" (UID: \"6bbe49bd-5153-4d3e-aaf8-870450eb6a27\") " Oct 29 00:44:44.561437 kubelet[2761]: I1029 00:44:44.561400 2761 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bbe49bd-5153-4d3e-aaf8-870450eb6a27-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "6bbe49bd-5153-4d3e-aaf8-870450eb6a27" (UID: "6bbe49bd-5153-4d3e-aaf8-870450eb6a27"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 29 00:44:44.564646 kubelet[2761]: I1029 00:44:44.564617 2761 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bbe49bd-5153-4d3e-aaf8-870450eb6a27-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "6bbe49bd-5153-4d3e-aaf8-870450eb6a27" (UID: "6bbe49bd-5153-4d3e-aaf8-870450eb6a27"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 29 00:44:44.564884 kubelet[2761]: I1029 00:44:44.564852 2761 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bbe49bd-5153-4d3e-aaf8-870450eb6a27-kube-api-access-hm5jg" (OuterVolumeSpecName: "kube-api-access-hm5jg") pod "6bbe49bd-5153-4d3e-aaf8-870450eb6a27" (UID: "6bbe49bd-5153-4d3e-aaf8-870450eb6a27"). InnerVolumeSpecName "kube-api-access-hm5jg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 29 00:44:44.660886 kubelet[2761]: I1029 00:44:44.660780 2761 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6bbe49bd-5153-4d3e-aaf8-870450eb6a27-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Oct 29 00:44:44.660886 kubelet[2761]: I1029 00:44:44.660820 2761 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hm5jg\" (UniqueName: \"kubernetes.io/projected/6bbe49bd-5153-4d3e-aaf8-870450eb6a27-kube-api-access-hm5jg\") on node \"localhost\" DevicePath \"\"" Oct 29 00:44:44.660886 kubelet[2761]: I1029 00:44:44.660830 2761 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6bbe49bd-5153-4d3e-aaf8-870450eb6a27-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Oct 29 00:44:44.833618 systemd[1]: Removed slice kubepods-besteffort-pod6bbe49bd_5153_4d3e_aaf8_870450eb6a27.slice - libcontainer container 
kubepods-besteffort-pod6bbe49bd_5153_4d3e_aaf8_870450eb6a27.slice. Oct 29 00:44:44.844243 kubelet[2761]: I1029 00:44:44.844165 2761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-b8529" podStartSLOduration=2.011894541 podStartE2EDuration="16.844147935s" podCreationTimestamp="2025-10-29 00:44:28 +0000 UTC" firstStartedPulling="2025-10-29 00:44:29.380725166 +0000 UTC m=+21.193957775" lastFinishedPulling="2025-10-29 00:44:44.21297855 +0000 UTC m=+36.026211169" observedRunningTime="2025-10-29 00:44:44.542143223 +0000 UTC m=+36.355375862" watchObservedRunningTime="2025-10-29 00:44:44.844147935 +0000 UTC m=+36.657380554" Oct 29 00:44:44.879954 systemd[1]: Created slice kubepods-besteffort-pod8c66d588_d751_4050_9adc_78c4164786df.slice - libcontainer container kubepods-besteffort-pod8c66d588_d751_4050_9adc_78c4164786df.slice. Oct 29 00:44:44.963197 kubelet[2761]: I1029 00:44:44.963049 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8c66d588-d751-4050-9adc-78c4164786df-whisker-backend-key-pair\") pod \"whisker-787c59bf7c-jznwc\" (UID: \"8c66d588-d751-4050-9adc-78c4164786df\") " pod="calico-system/whisker-787c59bf7c-jznwc" Oct 29 00:44:44.963197 kubelet[2761]: I1029 00:44:44.963104 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6xzn\" (UniqueName: \"kubernetes.io/projected/8c66d588-d751-4050-9adc-78c4164786df-kube-api-access-p6xzn\") pod \"whisker-787c59bf7c-jznwc\" (UID: \"8c66d588-d751-4050-9adc-78c4164786df\") " pod="calico-system/whisker-787c59bf7c-jznwc" Oct 29 00:44:44.963197 kubelet[2761]: I1029 00:44:44.963126 2761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c66d588-d751-4050-9adc-78c4164786df-whisker-ca-bundle\") pod 
\"whisker-787c59bf7c-jznwc\" (UID: \"8c66d588-d751-4050-9adc-78c4164786df\") " pod="calico-system/whisker-787c59bf7c-jznwc" Oct 29 00:44:45.066907 systemd[1]: var-lib-kubelet-pods-6bbe49bd\x2d5153\x2d4d3e\x2daaf8\x2d870450eb6a27-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhm5jg.mount: Deactivated successfully. Oct 29 00:44:45.067038 systemd[1]: var-lib-kubelet-pods-6bbe49bd\x2d5153\x2d4d3e\x2daaf8\x2d870450eb6a27-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Oct 29 00:44:45.186284 containerd[1613]: time="2025-10-29T00:44:45.186223990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-787c59bf7c-jznwc,Uid:8c66d588-d751-4050-9adc-78c4164786df,Namespace:calico-system,Attempt:0,}" Oct 29 00:44:45.338708 systemd-networkd[1502]: calic4171fe03a6: Link UP Oct 29 00:44:45.338982 systemd-networkd[1502]: calic4171fe03a6: Gained carrier Oct 29 00:44:45.355014 containerd[1613]: 2025-10-29 00:44:45.209 [INFO][3950] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 29 00:44:45.355014 containerd[1613]: 2025-10-29 00:44:45.227 [INFO][3950] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--787c59bf7c--jznwc-eth0 whisker-787c59bf7c- calico-system 8c66d588-d751-4050-9adc-78c4164786df 899 0 2025-10-29 00:44:44 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:787c59bf7c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-787c59bf7c-jznwc eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calic4171fe03a6 [] [] }} ContainerID="98593f4ce2ec0975390d910eb592da0ac035a535fa2e53878291e11ad8ffeb3d" Namespace="calico-system" Pod="whisker-787c59bf7c-jznwc" WorkloadEndpoint="localhost-k8s-whisker--787c59bf7c--jznwc-" Oct 29 00:44:45.355014 containerd[1613]: 2025-10-29 
00:44:45.227 [INFO][3950] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="98593f4ce2ec0975390d910eb592da0ac035a535fa2e53878291e11ad8ffeb3d" Namespace="calico-system" Pod="whisker-787c59bf7c-jznwc" WorkloadEndpoint="localhost-k8s-whisker--787c59bf7c--jznwc-eth0" Oct 29 00:44:45.355014 containerd[1613]: 2025-10-29 00:44:45.288 [INFO][3965] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="98593f4ce2ec0975390d910eb592da0ac035a535fa2e53878291e11ad8ffeb3d" HandleID="k8s-pod-network.98593f4ce2ec0975390d910eb592da0ac035a535fa2e53878291e11ad8ffeb3d" Workload="localhost-k8s-whisker--787c59bf7c--jznwc-eth0" Oct 29 00:44:45.355515 containerd[1613]: 2025-10-29 00:44:45.291 [INFO][3965] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="98593f4ce2ec0975390d910eb592da0ac035a535fa2e53878291e11ad8ffeb3d" HandleID="k8s-pod-network.98593f4ce2ec0975390d910eb592da0ac035a535fa2e53878291e11ad8ffeb3d" Workload="localhost-k8s-whisker--787c59bf7c--jznwc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000e1740), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-787c59bf7c-jznwc", "timestamp":"2025-10-29 00:44:45.288464764 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 00:44:45.355515 containerd[1613]: 2025-10-29 00:44:45.291 [INFO][3965] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 00:44:45.355515 containerd[1613]: 2025-10-29 00:44:45.291 [INFO][3965] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 29 00:44:45.355515 containerd[1613]: 2025-10-29 00:44:45.292 [INFO][3965] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 29 00:44:45.355515 containerd[1613]: 2025-10-29 00:44:45.301 [INFO][3965] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.98593f4ce2ec0975390d910eb592da0ac035a535fa2e53878291e11ad8ffeb3d" host="localhost" Oct 29 00:44:45.355515 containerd[1613]: 2025-10-29 00:44:45.307 [INFO][3965] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 29 00:44:45.355515 containerd[1613]: 2025-10-29 00:44:45.312 [INFO][3965] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 29 00:44:45.355515 containerd[1613]: 2025-10-29 00:44:45.314 [INFO][3965] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 29 00:44:45.355515 containerd[1613]: 2025-10-29 00:44:45.316 [INFO][3965] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 29 00:44:45.355515 containerd[1613]: 2025-10-29 00:44:45.316 [INFO][3965] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.98593f4ce2ec0975390d910eb592da0ac035a535fa2e53878291e11ad8ffeb3d" host="localhost" Oct 29 00:44:45.355827 containerd[1613]: 2025-10-29 00:44:45.317 [INFO][3965] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.98593f4ce2ec0975390d910eb592da0ac035a535fa2e53878291e11ad8ffeb3d Oct 29 00:44:45.355827 containerd[1613]: 2025-10-29 00:44:45.322 [INFO][3965] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.98593f4ce2ec0975390d910eb592da0ac035a535fa2e53878291e11ad8ffeb3d" host="localhost" Oct 29 00:44:45.355827 containerd[1613]: 2025-10-29 00:44:45.327 [INFO][3965] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.98593f4ce2ec0975390d910eb592da0ac035a535fa2e53878291e11ad8ffeb3d" host="localhost" Oct 29 00:44:45.355827 containerd[1613]: 2025-10-29 00:44:45.327 [INFO][3965] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.98593f4ce2ec0975390d910eb592da0ac035a535fa2e53878291e11ad8ffeb3d" host="localhost" Oct 29 00:44:45.355827 containerd[1613]: 2025-10-29 00:44:45.327 [INFO][3965] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 00:44:45.355827 containerd[1613]: 2025-10-29 00:44:45.327 [INFO][3965] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="98593f4ce2ec0975390d910eb592da0ac035a535fa2e53878291e11ad8ffeb3d" HandleID="k8s-pod-network.98593f4ce2ec0975390d910eb592da0ac035a535fa2e53878291e11ad8ffeb3d" Workload="localhost-k8s-whisker--787c59bf7c--jznwc-eth0" Oct 29 00:44:45.355943 containerd[1613]: 2025-10-29 00:44:45.330 [INFO][3950] cni-plugin/k8s.go 418: Populated endpoint ContainerID="98593f4ce2ec0975390d910eb592da0ac035a535fa2e53878291e11ad8ffeb3d" Namespace="calico-system" Pod="whisker-787c59bf7c-jznwc" WorkloadEndpoint="localhost-k8s-whisker--787c59bf7c--jznwc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--787c59bf7c--jznwc-eth0", GenerateName:"whisker-787c59bf7c-", Namespace:"calico-system", SelfLink:"", UID:"8c66d588-d751-4050-9adc-78c4164786df", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 0, 44, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"787c59bf7c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-787c59bf7c-jznwc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic4171fe03a6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 00:44:45.355943 containerd[1613]: 2025-10-29 00:44:45.330 [INFO][3950] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="98593f4ce2ec0975390d910eb592da0ac035a535fa2e53878291e11ad8ffeb3d" Namespace="calico-system" Pod="whisker-787c59bf7c-jznwc" WorkloadEndpoint="localhost-k8s-whisker--787c59bf7c--jznwc-eth0" Oct 29 00:44:45.356023 containerd[1613]: 2025-10-29 00:44:45.330 [INFO][3950] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic4171fe03a6 ContainerID="98593f4ce2ec0975390d910eb592da0ac035a535fa2e53878291e11ad8ffeb3d" Namespace="calico-system" Pod="whisker-787c59bf7c-jznwc" WorkloadEndpoint="localhost-k8s-whisker--787c59bf7c--jznwc-eth0" Oct 29 00:44:45.356023 containerd[1613]: 2025-10-29 00:44:45.338 [INFO][3950] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="98593f4ce2ec0975390d910eb592da0ac035a535fa2e53878291e11ad8ffeb3d" Namespace="calico-system" Pod="whisker-787c59bf7c-jznwc" WorkloadEndpoint="localhost-k8s-whisker--787c59bf7c--jznwc-eth0" Oct 29 00:44:45.356080 containerd[1613]: 2025-10-29 00:44:45.339 [INFO][3950] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="98593f4ce2ec0975390d910eb592da0ac035a535fa2e53878291e11ad8ffeb3d" Namespace="calico-system" Pod="whisker-787c59bf7c-jznwc" 
WorkloadEndpoint="localhost-k8s-whisker--787c59bf7c--jznwc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--787c59bf7c--jznwc-eth0", GenerateName:"whisker-787c59bf7c-", Namespace:"calico-system", SelfLink:"", UID:"8c66d588-d751-4050-9adc-78c4164786df", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 0, 44, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"787c59bf7c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"98593f4ce2ec0975390d910eb592da0ac035a535fa2e53878291e11ad8ffeb3d", Pod:"whisker-787c59bf7c-jznwc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic4171fe03a6", MAC:"c2:74:e6:41:eb:2a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 00:44:45.356133 containerd[1613]: 2025-10-29 00:44:45.348 [INFO][3950] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="98593f4ce2ec0975390d910eb592da0ac035a535fa2e53878291e11ad8ffeb3d" Namespace="calico-system" Pod="whisker-787c59bf7c-jznwc" WorkloadEndpoint="localhost-k8s-whisker--787c59bf7c--jznwc-eth0" Oct 29 00:44:45.528444 kubelet[2761]: I1029 00:44:45.528396 2761 prober_manager.go:312] "Failed to trigger a manual run" 
probe="Readiness" Oct 29 00:44:45.528960 kubelet[2761]: E1029 00:44:45.528874 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:44:45.565446 containerd[1613]: time="2025-10-29T00:44:45.565394924Z" level=info msg="connecting to shim 98593f4ce2ec0975390d910eb592da0ac035a535fa2e53878291e11ad8ffeb3d" address="unix:///run/containerd/s/922fa00eb7afd0e716776ffa12d8bacec2b5015929b30b5be18f16ee325fa66c" namespace=k8s.io protocol=ttrpc version=3 Oct 29 00:44:45.590734 systemd[1]: Started cri-containerd-98593f4ce2ec0975390d910eb592da0ac035a535fa2e53878291e11ad8ffeb3d.scope - libcontainer container 98593f4ce2ec0975390d910eb592da0ac035a535fa2e53878291e11ad8ffeb3d. Oct 29 00:44:45.603350 systemd-resolved[1306]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 00:44:45.643436 containerd[1613]: time="2025-10-29T00:44:45.642793876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-787c59bf7c-jznwc,Uid:8c66d588-d751-4050-9adc-78c4164786df,Namespace:calico-system,Attempt:0,} returns sandbox id \"98593f4ce2ec0975390d910eb592da0ac035a535fa2e53878291e11ad8ffeb3d\"" Oct 29 00:44:45.646889 containerd[1613]: time="2025-10-29T00:44:45.646674222Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 29 00:44:46.001110 containerd[1613]: time="2025-10-29T00:44:46.000957818Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:44:46.120767 containerd[1613]: time="2025-10-29T00:44:46.120685567Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 29 00:44:46.126566 containerd[1613]: time="2025-10-29T00:44:46.126504267Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 29 00:44:46.126862 kubelet[2761]: E1029 00:44:46.126809 2761 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 29 00:44:46.126989 kubelet[2761]: E1029 00:44:46.126869 2761 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 29 00:44:46.127033 kubelet[2761]: E1029 00:44:46.126985 2761 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-787c59bf7c-jznwc_calico-system(8c66d588-d751-4050-9adc-78c4164786df): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 29 00:44:46.127904 containerd[1613]: time="2025-10-29T00:44:46.127869513Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 29 00:44:46.287281 kubelet[2761]: I1029 00:44:46.287114 2761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bbe49bd-5153-4d3e-aaf8-870450eb6a27" path="/var/lib/kubelet/pods/6bbe49bd-5153-4d3e-aaf8-870450eb6a27/volumes" Oct 29 00:44:46.455806 systemd-networkd[1502]: calic4171fe03a6: Gained IPv6LL Oct 29 00:44:46.458343 containerd[1613]: 
time="2025-10-29T00:44:46.458292958Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:44:46.459619 containerd[1613]: time="2025-10-29T00:44:46.459590048Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 29 00:44:46.459721 containerd[1613]: time="2025-10-29T00:44:46.459606328Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 29 00:44:46.459969 kubelet[2761]: E1029 00:44:46.459896 2761 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 29 00:44:46.459969 kubelet[2761]: E1029 00:44:46.459959 2761 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 29 00:44:46.460107 kubelet[2761]: E1029 00:44:46.460059 2761 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-787c59bf7c-jznwc_calico-system(8c66d588-d751-4050-9adc-78c4164786df): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 29 00:44:46.460201 kubelet[2761]: E1029 00:44:46.460111 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-787c59bf7c-jznwc" podUID="8c66d588-d751-4050-9adc-78c4164786df" Oct 29 00:44:46.533115 kubelet[2761]: E1029 00:44:46.532471 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" 
pod="calico-system/whisker-787c59bf7c-jznwc" podUID="8c66d588-d751-4050-9adc-78c4164786df" Oct 29 00:44:47.068467 kubelet[2761]: I1029 00:44:47.068405 2761 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 29 00:44:47.068919 kubelet[2761]: E1029 00:44:47.068895 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:44:47.534104 kubelet[2761]: E1029 00:44:47.533727 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:44:47.535693 kubelet[2761]: E1029 00:44:47.535648 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-787c59bf7c-jznwc" podUID="8c66d588-d751-4050-9adc-78c4164786df" Oct 29 00:44:48.129351 systemd[1]: Started sshd@7-10.0.0.95:22-10.0.0.1:38356.service - OpenSSH per-connection server daemon (10.0.0.1:38356). 
Oct 29 00:44:48.209808 sshd[4218]: Accepted publickey for core from 10.0.0.1 port 38356 ssh2: RSA SHA256:s8tPwnTXOeMVzisbNqqCPwj2+lnJNXB3KVszA1vES1U Oct 29 00:44:48.211507 sshd-session[4218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:44:48.216369 systemd-logind[1578]: New session 8 of user core. Oct 29 00:44:48.224801 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 29 00:44:48.378133 sshd[4222]: Connection closed by 10.0.0.1 port 38356 Oct 29 00:44:48.379966 sshd-session[4218]: pam_unix(sshd:session): session closed for user core Oct 29 00:44:48.386037 systemd[1]: sshd@7-10.0.0.95:22-10.0.0.1:38356.service: Deactivated successfully. Oct 29 00:44:48.386961 systemd-networkd[1502]: vxlan.calico: Link UP Oct 29 00:44:48.386975 systemd-networkd[1502]: vxlan.calico: Gained carrier Oct 29 00:44:48.389839 systemd[1]: session-8.scope: Deactivated successfully. Oct 29 00:44:48.391386 systemd-logind[1578]: Session 8 logged out. Waiting for processes to exit. Oct 29 00:44:48.395616 systemd-logind[1578]: Removed session 8. 
Oct 29 00:44:49.289035 kubelet[2761]: E1029 00:44:49.288976 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:44:49.289675 containerd[1613]: time="2025-10-29T00:44:49.289414901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zmpf8,Uid:934dc817-c3aa-48d4-be1a-4f1c33bc437d,Namespace:kube-system,Attempt:0,}" Oct 29 00:44:49.406301 systemd-networkd[1502]: calif2ff1159d13: Link UP Oct 29 00:44:49.406988 systemd-networkd[1502]: calif2ff1159d13: Gained carrier Oct 29 00:44:49.423814 containerd[1613]: 2025-10-29 00:44:49.330 [INFO][4314] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--zmpf8-eth0 coredns-66bc5c9577- kube-system 934dc817-c3aa-48d4-be1a-4f1c33bc437d 831 0 2025-10-29 00:44:14 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-zmpf8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif2ff1159d13 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="d6154a0cacc5a028a78986c38f26691d74476309ea92b4ee46ea47411b485ad5" Namespace="kube-system" Pod="coredns-66bc5c9577-zmpf8" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--zmpf8-" Oct 29 00:44:49.423814 containerd[1613]: 2025-10-29 00:44:49.330 [INFO][4314] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d6154a0cacc5a028a78986c38f26691d74476309ea92b4ee46ea47411b485ad5" Namespace="kube-system" Pod="coredns-66bc5c9577-zmpf8" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--zmpf8-eth0" Oct 29 00:44:49.423814 containerd[1613]: 2025-10-29 00:44:49.358 [INFO][4328] 
ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d6154a0cacc5a028a78986c38f26691d74476309ea92b4ee46ea47411b485ad5" HandleID="k8s-pod-network.d6154a0cacc5a028a78986c38f26691d74476309ea92b4ee46ea47411b485ad5" Workload="localhost-k8s-coredns--66bc5c9577--zmpf8-eth0" Oct 29 00:44:49.424005 containerd[1613]: 2025-10-29 00:44:49.358 [INFO][4328] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d6154a0cacc5a028a78986c38f26691d74476309ea92b4ee46ea47411b485ad5" HandleID="k8s-pod-network.d6154a0cacc5a028a78986c38f26691d74476309ea92b4ee46ea47411b485ad5" Workload="localhost-k8s-coredns--66bc5c9577--zmpf8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df100), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-zmpf8", "timestamp":"2025-10-29 00:44:49.358749338 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 00:44:49.424005 containerd[1613]: 2025-10-29 00:44:49.358 [INFO][4328] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 00:44:49.424005 containerd[1613]: 2025-10-29 00:44:49.359 [INFO][4328] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 29 00:44:49.424005 containerd[1613]: 2025-10-29 00:44:49.359 [INFO][4328] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 29 00:44:49.424005 containerd[1613]: 2025-10-29 00:44:49.365 [INFO][4328] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d6154a0cacc5a028a78986c38f26691d74476309ea92b4ee46ea47411b485ad5" host="localhost" Oct 29 00:44:49.424005 containerd[1613]: 2025-10-29 00:44:49.370 [INFO][4328] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 29 00:44:49.424005 containerd[1613]: 2025-10-29 00:44:49.373 [INFO][4328] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 29 00:44:49.424005 containerd[1613]: 2025-10-29 00:44:49.375 [INFO][4328] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 29 00:44:49.424005 containerd[1613]: 2025-10-29 00:44:49.377 [INFO][4328] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 29 00:44:49.424005 containerd[1613]: 2025-10-29 00:44:49.377 [INFO][4328] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d6154a0cacc5a028a78986c38f26691d74476309ea92b4ee46ea47411b485ad5" host="localhost" Oct 29 00:44:49.424215 containerd[1613]: 2025-10-29 00:44:49.379 [INFO][4328] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d6154a0cacc5a028a78986c38f26691d74476309ea92b4ee46ea47411b485ad5 Oct 29 00:44:49.424215 containerd[1613]: 2025-10-29 00:44:49.382 [INFO][4328] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d6154a0cacc5a028a78986c38f26691d74476309ea92b4ee46ea47411b485ad5" host="localhost" Oct 29 00:44:49.424215 containerd[1613]: 2025-10-29 00:44:49.399 [INFO][4328] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.d6154a0cacc5a028a78986c38f26691d74476309ea92b4ee46ea47411b485ad5" host="localhost" Oct 29 00:44:49.424215 containerd[1613]: 2025-10-29 00:44:49.399 [INFO][4328] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.d6154a0cacc5a028a78986c38f26691d74476309ea92b4ee46ea47411b485ad5" host="localhost" Oct 29 00:44:49.424215 containerd[1613]: 2025-10-29 00:44:49.399 [INFO][4328] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 00:44:49.424215 containerd[1613]: 2025-10-29 00:44:49.399 [INFO][4328] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="d6154a0cacc5a028a78986c38f26691d74476309ea92b4ee46ea47411b485ad5" HandleID="k8s-pod-network.d6154a0cacc5a028a78986c38f26691d74476309ea92b4ee46ea47411b485ad5" Workload="localhost-k8s-coredns--66bc5c9577--zmpf8-eth0" Oct 29 00:44:49.424325 containerd[1613]: 2025-10-29 00:44:49.403 [INFO][4314] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d6154a0cacc5a028a78986c38f26691d74476309ea92b4ee46ea47411b485ad5" Namespace="kube-system" Pod="coredns-66bc5c9577-zmpf8" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--zmpf8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--zmpf8-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"934dc817-c3aa-48d4-be1a-4f1c33bc437d", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 0, 44, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-zmpf8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif2ff1159d13", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 00:44:49.424325 containerd[1613]: 2025-10-29 00:44:49.403 [INFO][4314] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="d6154a0cacc5a028a78986c38f26691d74476309ea92b4ee46ea47411b485ad5" Namespace="kube-system" Pod="coredns-66bc5c9577-zmpf8" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--zmpf8-eth0" Oct 29 00:44:49.424325 containerd[1613]: 2025-10-29 00:44:49.403 [INFO][4314] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif2ff1159d13 ContainerID="d6154a0cacc5a028a78986c38f26691d74476309ea92b4ee46ea47411b485ad5" Namespace="kube-system" Pod="coredns-66bc5c9577-zmpf8" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--zmpf8-eth0" Oct 29 
00:44:49.424325 containerd[1613]: 2025-10-29 00:44:49.407 [INFO][4314] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d6154a0cacc5a028a78986c38f26691d74476309ea92b4ee46ea47411b485ad5" Namespace="kube-system" Pod="coredns-66bc5c9577-zmpf8" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--zmpf8-eth0" Oct 29 00:44:49.424325 containerd[1613]: 2025-10-29 00:44:49.407 [INFO][4314] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d6154a0cacc5a028a78986c38f26691d74476309ea92b4ee46ea47411b485ad5" Namespace="kube-system" Pod="coredns-66bc5c9577-zmpf8" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--zmpf8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--zmpf8-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"934dc817-c3aa-48d4-be1a-4f1c33bc437d", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 0, 44, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d6154a0cacc5a028a78986c38f26691d74476309ea92b4ee46ea47411b485ad5", Pod:"coredns-66bc5c9577-zmpf8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif2ff1159d13", 
MAC:"82:88:31:ea:3a:ca", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 00:44:49.424325 containerd[1613]: 2025-10-29 00:44:49.418 [INFO][4314] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d6154a0cacc5a028a78986c38f26691d74476309ea92b4ee46ea47411b485ad5" Namespace="kube-system" Pod="coredns-66bc5c9577-zmpf8" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--zmpf8-eth0" Oct 29 00:44:49.446664 containerd[1613]: time="2025-10-29T00:44:49.446606084Z" level=info msg="connecting to shim d6154a0cacc5a028a78986c38f26691d74476309ea92b4ee46ea47411b485ad5" address="unix:///run/containerd/s/5ca381d60bf2bd047e30305b68e18950d4cd96922ad3a8d267d1a0b007d8580e" namespace=k8s.io protocol=ttrpc version=3 Oct 29 00:44:49.463757 systemd-networkd[1502]: vxlan.calico: Gained IPv6LL Oct 29 00:44:49.477760 systemd[1]: Started cri-containerd-d6154a0cacc5a028a78986c38f26691d74476309ea92b4ee46ea47411b485ad5.scope - libcontainer container d6154a0cacc5a028a78986c38f26691d74476309ea92b4ee46ea47411b485ad5. 
Oct 29 00:44:49.492707 systemd-resolved[1306]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 00:44:49.543931 containerd[1613]: time="2025-10-29T00:44:49.543806939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zmpf8,Uid:934dc817-c3aa-48d4-be1a-4f1c33bc437d,Namespace:kube-system,Attempt:0,} returns sandbox id \"d6154a0cacc5a028a78986c38f26691d74476309ea92b4ee46ea47411b485ad5\"" Oct 29 00:44:49.544757 kubelet[2761]: E1029 00:44:49.544508 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:44:49.703120 containerd[1613]: time="2025-10-29T00:44:49.703068222Z" level=info msg="CreateContainer within sandbox \"d6154a0cacc5a028a78986c38f26691d74476309ea92b4ee46ea47411b485ad5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 29 00:44:49.715747 containerd[1613]: time="2025-10-29T00:44:49.715665586Z" level=info msg="Container 2b513e1cf3b9b7696ec129629bb6a6948dd5c09e1f7cf30de04e188e4adb9f97: CDI devices from CRI Config.CDIDevices: []" Oct 29 00:44:49.718863 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2610153869.mount: Deactivated successfully. 
Oct 29 00:44:49.728057 containerd[1613]: time="2025-10-29T00:44:49.728009723Z" level=info msg="CreateContainer within sandbox \"d6154a0cacc5a028a78986c38f26691d74476309ea92b4ee46ea47411b485ad5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2b513e1cf3b9b7696ec129629bb6a6948dd5c09e1f7cf30de04e188e4adb9f97\"" Oct 29 00:44:49.728657 containerd[1613]: time="2025-10-29T00:44:49.728623437Z" level=info msg="StartContainer for \"2b513e1cf3b9b7696ec129629bb6a6948dd5c09e1f7cf30de04e188e4adb9f97\"" Oct 29 00:44:49.729595 containerd[1613]: time="2025-10-29T00:44:49.729542724Z" level=info msg="connecting to shim 2b513e1cf3b9b7696ec129629bb6a6948dd5c09e1f7cf30de04e188e4adb9f97" address="unix:///run/containerd/s/5ca381d60bf2bd047e30305b68e18950d4cd96922ad3a8d267d1a0b007d8580e" protocol=ttrpc version=3 Oct 29 00:44:49.749720 systemd[1]: Started cri-containerd-2b513e1cf3b9b7696ec129629bb6a6948dd5c09e1f7cf30de04e188e4adb9f97.scope - libcontainer container 2b513e1cf3b9b7696ec129629bb6a6948dd5c09e1f7cf30de04e188e4adb9f97. 
Oct 29 00:44:49.781491 containerd[1613]: time="2025-10-29T00:44:49.781451855Z" level=info msg="StartContainer for \"2b513e1cf3b9b7696ec129629bb6a6948dd5c09e1f7cf30de04e188e4adb9f97\" returns successfully" Oct 29 00:44:50.287328 containerd[1613]: time="2025-10-29T00:44:50.287278793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d776986cb-zqjs2,Uid:fc5d07a9-e55f-4ffc-bf62-2522098f23ea,Namespace:calico-apiserver,Attempt:0,}" Oct 29 00:44:50.399343 systemd-networkd[1502]: cali3354ebe9020: Link UP Oct 29 00:44:50.400173 systemd-networkd[1502]: cali3354ebe9020: Gained carrier Oct 29 00:44:50.413941 containerd[1613]: 2025-10-29 00:44:50.324 [INFO][4429] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6d776986cb--zqjs2-eth0 calico-apiserver-6d776986cb- calico-apiserver fc5d07a9-e55f-4ffc-bf62-2522098f23ea 832 0 2025-10-29 00:44:25 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6d776986cb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6d776986cb-zqjs2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3354ebe9020 [] [] }} ContainerID="9c65707d1b14411c7ab4454d6e326c5227f8174e4824449b5850e89f3b27c1be" Namespace="calico-apiserver" Pod="calico-apiserver-6d776986cb-zqjs2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d776986cb--zqjs2-" Oct 29 00:44:50.413941 containerd[1613]: 2025-10-29 00:44:50.324 [INFO][4429] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9c65707d1b14411c7ab4454d6e326c5227f8174e4824449b5850e89f3b27c1be" Namespace="calico-apiserver" Pod="calico-apiserver-6d776986cb-zqjs2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d776986cb--zqjs2-eth0" Oct 29 00:44:50.413941 
containerd[1613]: 2025-10-29 00:44:50.352 [INFO][4443] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9c65707d1b14411c7ab4454d6e326c5227f8174e4824449b5850e89f3b27c1be" HandleID="k8s-pod-network.9c65707d1b14411c7ab4454d6e326c5227f8174e4824449b5850e89f3b27c1be" Workload="localhost-k8s-calico--apiserver--6d776986cb--zqjs2-eth0" Oct 29 00:44:50.413941 containerd[1613]: 2025-10-29 00:44:50.352 [INFO][4443] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9c65707d1b14411c7ab4454d6e326c5227f8174e4824449b5850e89f3b27c1be" HandleID="k8s-pod-network.9c65707d1b14411c7ab4454d6e326c5227f8174e4824449b5850e89f3b27c1be" Workload="localhost-k8s-calico--apiserver--6d776986cb--zqjs2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f9e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6d776986cb-zqjs2", "timestamp":"2025-10-29 00:44:50.352303646 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 00:44:50.413941 containerd[1613]: 2025-10-29 00:44:50.352 [INFO][4443] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 00:44:50.413941 containerd[1613]: 2025-10-29 00:44:50.352 [INFO][4443] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 29 00:44:50.413941 containerd[1613]: 2025-10-29 00:44:50.352 [INFO][4443] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 29 00:44:50.413941 containerd[1613]: 2025-10-29 00:44:50.360 [INFO][4443] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9c65707d1b14411c7ab4454d6e326c5227f8174e4824449b5850e89f3b27c1be" host="localhost" Oct 29 00:44:50.413941 containerd[1613]: 2025-10-29 00:44:50.365 [INFO][4443] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 29 00:44:50.413941 containerd[1613]: 2025-10-29 00:44:50.370 [INFO][4443] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 29 00:44:50.413941 containerd[1613]: 2025-10-29 00:44:50.372 [INFO][4443] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 29 00:44:50.413941 containerd[1613]: 2025-10-29 00:44:50.374 [INFO][4443] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 29 00:44:50.413941 containerd[1613]: 2025-10-29 00:44:50.374 [INFO][4443] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9c65707d1b14411c7ab4454d6e326c5227f8174e4824449b5850e89f3b27c1be" host="localhost" Oct 29 00:44:50.413941 containerd[1613]: 2025-10-29 00:44:50.375 [INFO][4443] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9c65707d1b14411c7ab4454d6e326c5227f8174e4824449b5850e89f3b27c1be Oct 29 00:44:50.413941 containerd[1613]: 2025-10-29 00:44:50.379 [INFO][4443] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9c65707d1b14411c7ab4454d6e326c5227f8174e4824449b5850e89f3b27c1be" host="localhost" Oct 29 00:44:50.413941 containerd[1613]: 2025-10-29 00:44:50.390 [INFO][4443] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.9c65707d1b14411c7ab4454d6e326c5227f8174e4824449b5850e89f3b27c1be" host="localhost" Oct 29 00:44:50.413941 containerd[1613]: 2025-10-29 00:44:50.390 [INFO][4443] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.9c65707d1b14411c7ab4454d6e326c5227f8174e4824449b5850e89f3b27c1be" host="localhost" Oct 29 00:44:50.413941 containerd[1613]: 2025-10-29 00:44:50.390 [INFO][4443] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 00:44:50.413941 containerd[1613]: 2025-10-29 00:44:50.390 [INFO][4443] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="9c65707d1b14411c7ab4454d6e326c5227f8174e4824449b5850e89f3b27c1be" HandleID="k8s-pod-network.9c65707d1b14411c7ab4454d6e326c5227f8174e4824449b5850e89f3b27c1be" Workload="localhost-k8s-calico--apiserver--6d776986cb--zqjs2-eth0" Oct 29 00:44:50.414818 containerd[1613]: 2025-10-29 00:44:50.395 [INFO][4429] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9c65707d1b14411c7ab4454d6e326c5227f8174e4824449b5850e89f3b27c1be" Namespace="calico-apiserver" Pod="calico-apiserver-6d776986cb-zqjs2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d776986cb--zqjs2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d776986cb--zqjs2-eth0", GenerateName:"calico-apiserver-6d776986cb-", Namespace:"calico-apiserver", SelfLink:"", UID:"fc5d07a9-e55f-4ffc-bf62-2522098f23ea", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 0, 44, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d776986cb", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6d776986cb-zqjs2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3354ebe9020", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 00:44:50.414818 containerd[1613]: 2025-10-29 00:44:50.395 [INFO][4429] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="9c65707d1b14411c7ab4454d6e326c5227f8174e4824449b5850e89f3b27c1be" Namespace="calico-apiserver" Pod="calico-apiserver-6d776986cb-zqjs2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d776986cb--zqjs2-eth0" Oct 29 00:44:50.414818 containerd[1613]: 2025-10-29 00:44:50.395 [INFO][4429] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3354ebe9020 ContainerID="9c65707d1b14411c7ab4454d6e326c5227f8174e4824449b5850e89f3b27c1be" Namespace="calico-apiserver" Pod="calico-apiserver-6d776986cb-zqjs2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d776986cb--zqjs2-eth0" Oct 29 00:44:50.414818 containerd[1613]: 2025-10-29 00:44:50.398 [INFO][4429] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9c65707d1b14411c7ab4454d6e326c5227f8174e4824449b5850e89f3b27c1be" Namespace="calico-apiserver" Pod="calico-apiserver-6d776986cb-zqjs2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d776986cb--zqjs2-eth0" Oct 29 00:44:50.414818 containerd[1613]: 2025-10-29 00:44:50.399 [INFO][4429] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="9c65707d1b14411c7ab4454d6e326c5227f8174e4824449b5850e89f3b27c1be" Namespace="calico-apiserver" Pod="calico-apiserver-6d776986cb-zqjs2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d776986cb--zqjs2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d776986cb--zqjs2-eth0", GenerateName:"calico-apiserver-6d776986cb-", Namespace:"calico-apiserver", SelfLink:"", UID:"fc5d07a9-e55f-4ffc-bf62-2522098f23ea", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 0, 44, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d776986cb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9c65707d1b14411c7ab4454d6e326c5227f8174e4824449b5850e89f3b27c1be", Pod:"calico-apiserver-6d776986cb-zqjs2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3354ebe9020", MAC:"b6:f4:9d:3e:e5:ae", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 00:44:50.414818 containerd[1613]: 2025-10-29 00:44:50.410 [INFO][4429] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="9c65707d1b14411c7ab4454d6e326c5227f8174e4824449b5850e89f3b27c1be" Namespace="calico-apiserver" Pod="calico-apiserver-6d776986cb-zqjs2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d776986cb--zqjs2-eth0" Oct 29 00:44:50.440917 containerd[1613]: time="2025-10-29T00:44:50.440858943Z" level=info msg="connecting to shim 9c65707d1b14411c7ab4454d6e326c5227f8174e4824449b5850e89f3b27c1be" address="unix:///run/containerd/s/070f00cb72ecd9986a6ff37b3d854fd029a78cb9c108d4e2477d67ac78ae2b0e" namespace=k8s.io protocol=ttrpc version=3 Oct 29 00:44:50.494733 systemd[1]: Started cri-containerd-9c65707d1b14411c7ab4454d6e326c5227f8174e4824449b5850e89f3b27c1be.scope - libcontainer container 9c65707d1b14411c7ab4454d6e326c5227f8174e4824449b5850e89f3b27c1be. Oct 29 00:44:50.511259 systemd-resolved[1306]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 00:44:50.543565 kubelet[2761]: E1029 00:44:50.543536 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:44:50.544433 containerd[1613]: time="2025-10-29T00:44:50.544262258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d776986cb-zqjs2,Uid:fc5d07a9-e55f-4ffc-bf62-2522098f23ea,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"9c65707d1b14411c7ab4454d6e326c5227f8174e4824449b5850e89f3b27c1be\"" Oct 29 00:44:50.545906 containerd[1613]: time="2025-10-29T00:44:50.545877574Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 29 00:44:50.555330 kubelet[2761]: I1029 00:44:50.555272 2761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-zmpf8" podStartSLOduration=36.555257278 podStartE2EDuration="36.555257278s" podCreationTimestamp="2025-10-29 00:44:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 00:44:50.554375952 +0000 UTC m=+42.367608571" watchObservedRunningTime="2025-10-29 00:44:50.555257278 +0000 UTC m=+42.368489897" Oct 29 00:44:50.882510 containerd[1613]: time="2025-10-29T00:44:50.882374084Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:44:50.883593 containerd[1613]: time="2025-10-29T00:44:50.883532100Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 29 00:44:50.883650 containerd[1613]: time="2025-10-29T00:44:50.883611961Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 29 00:44:50.883825 kubelet[2761]: E1029 00:44:50.883781 2761 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 00:44:50.883879 kubelet[2761]: E1029 00:44:50.883827 2761 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 00:44:50.883942 kubelet[2761]: E1029 00:44:50.883910 2761 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod 
calico-apiserver-6d776986cb-zqjs2_calico-apiserver(fc5d07a9-e55f-4ffc-bf62-2522098f23ea): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 29 00:44:50.883983 kubelet[2761]: E1029 00:44:50.883944 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d776986cb-zqjs2" podUID="fc5d07a9-e55f-4ffc-bf62-2522098f23ea" Oct 29 00:44:50.935819 systemd-networkd[1502]: calif2ff1159d13: Gained IPv6LL Oct 29 00:44:51.099964 kubelet[2761]: I1029 00:44:51.099908 2761 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 29 00:44:51.100375 kubelet[2761]: E1029 00:44:51.100335 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:44:51.194839 containerd[1613]: time="2025-10-29T00:44:51.194553269Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5a9874c8e012706d55abea68370e001a4851b8e4e3be9a66ff3f656fa47e379b\" id:\"c063a7aa193ad1b83a05c5ee839e6c9d1f50231e37731e968063bd8ad9fe71d4\" pid:4518 exited_at:{seconds:1761698691 nanos:194215765}" Oct 29 00:44:51.290773 containerd[1613]: time="2025-10-29T00:44:51.290694838Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5a9874c8e012706d55abea68370e001a4851b8e4e3be9a66ff3f656fa47e379b\" id:\"72b98937f113b9a137cd703c536831db4183c9a703ba8ce95fc245aa395933ca\" 
pid:4543 exited_at:{seconds:1761698691 nanos:290296089}" Oct 29 00:44:51.298602 containerd[1613]: time="2025-10-29T00:44:51.298538725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-jzb5c,Uid:225130ab-318f-4752-a3f6-b2cc6751e084,Namespace:calico-system,Attempt:0,}" Oct 29 00:44:51.401147 systemd-networkd[1502]: calib3de63a815a: Link UP Oct 29 00:44:51.401972 systemd-networkd[1502]: calib3de63a815a: Gained carrier Oct 29 00:44:51.420810 containerd[1613]: 2025-10-29 00:44:51.334 [INFO][4555] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7c778bb748--jzb5c-eth0 goldmane-7c778bb748- calico-system 225130ab-318f-4752-a3f6-b2cc6751e084 833 0 2025-10-29 00:44:27 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7c778bb748-jzb5c eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calib3de63a815a [] [] }} ContainerID="276dc7baefe729c095970ef44ea5dd30c1b6c584c246877a212ea619e22984fd" Namespace="calico-system" Pod="goldmane-7c778bb748-jzb5c" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--jzb5c-" Oct 29 00:44:51.420810 containerd[1613]: 2025-10-29 00:44:51.335 [INFO][4555] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="276dc7baefe729c095970ef44ea5dd30c1b6c584c246877a212ea619e22984fd" Namespace="calico-system" Pod="goldmane-7c778bb748-jzb5c" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--jzb5c-eth0" Oct 29 00:44:51.420810 containerd[1613]: 2025-10-29 00:44:51.360 [INFO][4568] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="276dc7baefe729c095970ef44ea5dd30c1b6c584c246877a212ea619e22984fd" HandleID="k8s-pod-network.276dc7baefe729c095970ef44ea5dd30c1b6c584c246877a212ea619e22984fd" 
Workload="localhost-k8s-goldmane--7c778bb748--jzb5c-eth0" Oct 29 00:44:51.420810 containerd[1613]: 2025-10-29 00:44:51.360 [INFO][4568] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="276dc7baefe729c095970ef44ea5dd30c1b6c584c246877a212ea619e22984fd" HandleID="k8s-pod-network.276dc7baefe729c095970ef44ea5dd30c1b6c584c246877a212ea619e22984fd" Workload="localhost-k8s-goldmane--7c778bb748--jzb5c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c70a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7c778bb748-jzb5c", "timestamp":"2025-10-29 00:44:51.360623659 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 00:44:51.420810 containerd[1613]: 2025-10-29 00:44:51.360 [INFO][4568] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 00:44:51.420810 containerd[1613]: 2025-10-29 00:44:51.360 [INFO][4568] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 29 00:44:51.420810 containerd[1613]: 2025-10-29 00:44:51.360 [INFO][4568] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 29 00:44:51.420810 containerd[1613]: 2025-10-29 00:44:51.367 [INFO][4568] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.276dc7baefe729c095970ef44ea5dd30c1b6c584c246877a212ea619e22984fd" host="localhost" Oct 29 00:44:51.420810 containerd[1613]: 2025-10-29 00:44:51.373 [INFO][4568] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 29 00:44:51.420810 containerd[1613]: 2025-10-29 00:44:51.377 [INFO][4568] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 29 00:44:51.420810 containerd[1613]: 2025-10-29 00:44:51.379 [INFO][4568] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 29 00:44:51.420810 containerd[1613]: 2025-10-29 00:44:51.381 [INFO][4568] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 29 00:44:51.420810 containerd[1613]: 2025-10-29 00:44:51.381 [INFO][4568] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.276dc7baefe729c095970ef44ea5dd30c1b6c584c246877a212ea619e22984fd" host="localhost" Oct 29 00:44:51.420810 containerd[1613]: 2025-10-29 00:44:51.382 [INFO][4568] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.276dc7baefe729c095970ef44ea5dd30c1b6c584c246877a212ea619e22984fd Oct 29 00:44:51.420810 containerd[1613]: 2025-10-29 00:44:51.386 [INFO][4568] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.276dc7baefe729c095970ef44ea5dd30c1b6c584c246877a212ea619e22984fd" host="localhost" Oct 29 00:44:51.420810 containerd[1613]: 2025-10-29 00:44:51.394 [INFO][4568] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.276dc7baefe729c095970ef44ea5dd30c1b6c584c246877a212ea619e22984fd" host="localhost" Oct 29 00:44:51.420810 containerd[1613]: 2025-10-29 00:44:51.394 [INFO][4568] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.276dc7baefe729c095970ef44ea5dd30c1b6c584c246877a212ea619e22984fd" host="localhost" Oct 29 00:44:51.420810 containerd[1613]: 2025-10-29 00:44:51.394 [INFO][4568] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 00:44:51.420810 containerd[1613]: 2025-10-29 00:44:51.394 [INFO][4568] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="276dc7baefe729c095970ef44ea5dd30c1b6c584c246877a212ea619e22984fd" HandleID="k8s-pod-network.276dc7baefe729c095970ef44ea5dd30c1b6c584c246877a212ea619e22984fd" Workload="localhost-k8s-goldmane--7c778bb748--jzb5c-eth0" Oct 29 00:44:51.421763 containerd[1613]: 2025-10-29 00:44:51.397 [INFO][4555] cni-plugin/k8s.go 418: Populated endpoint ContainerID="276dc7baefe729c095970ef44ea5dd30c1b6c584c246877a212ea619e22984fd" Namespace="calico-system" Pod="goldmane-7c778bb748-jzb5c" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--jzb5c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--jzb5c-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"225130ab-318f-4752-a3f6-b2cc6751e084", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 0, 44, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7c778bb748-jzb5c", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib3de63a815a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 00:44:51.421763 containerd[1613]: 2025-10-29 00:44:51.397 [INFO][4555] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="276dc7baefe729c095970ef44ea5dd30c1b6c584c246877a212ea619e22984fd" Namespace="calico-system" Pod="goldmane-7c778bb748-jzb5c" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--jzb5c-eth0" Oct 29 00:44:51.421763 containerd[1613]: 2025-10-29 00:44:51.398 [INFO][4555] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib3de63a815a ContainerID="276dc7baefe729c095970ef44ea5dd30c1b6c584c246877a212ea619e22984fd" Namespace="calico-system" Pod="goldmane-7c778bb748-jzb5c" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--jzb5c-eth0" Oct 29 00:44:51.421763 containerd[1613]: 2025-10-29 00:44:51.403 [INFO][4555] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="276dc7baefe729c095970ef44ea5dd30c1b6c584c246877a212ea619e22984fd" Namespace="calico-system" Pod="goldmane-7c778bb748-jzb5c" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--jzb5c-eth0" Oct 29 00:44:51.421763 containerd[1613]: 2025-10-29 00:44:51.404 [INFO][4555] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="276dc7baefe729c095970ef44ea5dd30c1b6c584c246877a212ea619e22984fd" Namespace="calico-system" Pod="goldmane-7c778bb748-jzb5c" 
WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--jzb5c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--jzb5c-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"225130ab-318f-4752-a3f6-b2cc6751e084", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 0, 44, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"276dc7baefe729c095970ef44ea5dd30c1b6c584c246877a212ea619e22984fd", Pod:"goldmane-7c778bb748-jzb5c", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib3de63a815a", MAC:"12:21:c7:e5:0d:9b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 00:44:51.421763 containerd[1613]: 2025-10-29 00:44:51.417 [INFO][4555] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="276dc7baefe729c095970ef44ea5dd30c1b6c584c246877a212ea619e22984fd" Namespace="calico-system" Pod="goldmane-7c778bb748-jzb5c" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--jzb5c-eth0" Oct 29 00:44:51.443388 containerd[1613]: time="2025-10-29T00:44:51.443337981Z" level=info msg="connecting to shim 
276dc7baefe729c095970ef44ea5dd30c1b6c584c246877a212ea619e22984fd" address="unix:///run/containerd/s/a80920dec25ae7d1f62ea300db9fb604d08dd16f4fd30b7da3f03634f43ea2fc" namespace=k8s.io protocol=ttrpc version=3 Oct 29 00:44:51.474744 systemd[1]: Started cri-containerd-276dc7baefe729c095970ef44ea5dd30c1b6c584c246877a212ea619e22984fd.scope - libcontainer container 276dc7baefe729c095970ef44ea5dd30c1b6c584c246877a212ea619e22984fd. Oct 29 00:44:51.490469 systemd-resolved[1306]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 00:44:51.523665 containerd[1613]: time="2025-10-29T00:44:51.523605487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-jzb5c,Uid:225130ab-318f-4752-a3f6-b2cc6751e084,Namespace:calico-system,Attempt:0,} returns sandbox id \"276dc7baefe729c095970ef44ea5dd30c1b6c584c246877a212ea619e22984fd\"" Oct 29 00:44:51.525184 containerd[1613]: time="2025-10-29T00:44:51.524961133Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 29 00:44:51.546983 kubelet[2761]: E1029 00:44:51.546948 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:44:51.547500 kubelet[2761]: E1029 00:44:51.547077 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:44:51.547787 kubelet[2761]: E1029 00:44:51.547690 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d776986cb-zqjs2" podUID="fc5d07a9-e55f-4ffc-bf62-2522098f23ea" Oct 29 00:44:51.882345 containerd[1613]: time="2025-10-29T00:44:51.882282844Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:44:51.884196 containerd[1613]: time="2025-10-29T00:44:51.884138269Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 29 00:44:51.884196 containerd[1613]: time="2025-10-29T00:44:51.884179727Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 29 00:44:51.884397 kubelet[2761]: E1029 00:44:51.884353 2761 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 29 00:44:51.884468 kubelet[2761]: E1029 00:44:51.884397 2761 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 29 00:44:51.884515 kubelet[2761]: E1029 00:44:51.884469 2761 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-jzb5c_calico-system(225130ab-318f-4752-a3f6-b2cc6751e084): 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 29 00:44:51.884515 kubelet[2761]: E1029 00:44:51.884499 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-jzb5c" podUID="225130ab-318f-4752-a3f6-b2cc6751e084" Oct 29 00:44:52.023751 systemd-networkd[1502]: cali3354ebe9020: Gained IPv6LL Oct 29 00:44:52.288761 containerd[1613]: time="2025-10-29T00:44:52.288636633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nsdz2,Uid:7c6565b5-13e1-473b-b977-3ab4cab19c9a,Namespace:calico-system,Attempt:0,}" Oct 29 00:44:52.289945 containerd[1613]: time="2025-10-29T00:44:52.289855132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-557bfcb88d-x5jf7,Uid:13741636-ed02-4263-9179-b37fa6d45218,Namespace:calico-system,Attempt:0,}" Oct 29 00:44:52.292587 containerd[1613]: time="2025-10-29T00:44:52.292532812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d776986cb-mkzq4,Uid:a2ccb739-fac3-4c48-b3e6-829701cb5ed8,Namespace:calico-apiserver,Attempt:0,}" Oct 29 00:44:52.295045 kubelet[2761]: E1029 00:44:52.294915 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:44:52.296383 containerd[1613]: time="2025-10-29T00:44:52.296355062Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-66bc5c9577-j799m,Uid:f8a0a8b3-e8df-4562-92d8-5506f107f647,Namespace:kube-system,Attempt:0,}" Oct 29 00:44:52.454626 systemd-networkd[1502]: cali1c1d3fea121: Link UP Oct 29 00:44:52.456643 systemd-networkd[1502]: cali1c1d3fea121: Gained carrier Oct 29 00:44:52.478475 containerd[1613]: 2025-10-29 00:44:52.350 [INFO][4636] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--nsdz2-eth0 csi-node-driver- calico-system 7c6565b5-13e1-473b-b977-3ab4cab19c9a 718 0 2025-10-29 00:44:29 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-nsdz2 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali1c1d3fea121 [] [] }} ContainerID="40e15a2df38b394ebcc0b09ae4e257f4a085432c26d295e288afe1462f4ec499" Namespace="calico-system" Pod="csi-node-driver-nsdz2" WorkloadEndpoint="localhost-k8s-csi--node--driver--nsdz2-" Oct 29 00:44:52.478475 containerd[1613]: 2025-10-29 00:44:52.351 [INFO][4636] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="40e15a2df38b394ebcc0b09ae4e257f4a085432c26d295e288afe1462f4ec499" Namespace="calico-system" Pod="csi-node-driver-nsdz2" WorkloadEndpoint="localhost-k8s-csi--node--driver--nsdz2-eth0" Oct 29 00:44:52.478475 containerd[1613]: 2025-10-29 00:44:52.392 [INFO][4697] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="40e15a2df38b394ebcc0b09ae4e257f4a085432c26d295e288afe1462f4ec499" HandleID="k8s-pod-network.40e15a2df38b394ebcc0b09ae4e257f4a085432c26d295e288afe1462f4ec499" Workload="localhost-k8s-csi--node--driver--nsdz2-eth0" Oct 29 00:44:52.478475 containerd[1613]: 2025-10-29 
00:44:52.393 [INFO][4697] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="40e15a2df38b394ebcc0b09ae4e257f4a085432c26d295e288afe1462f4ec499" HandleID="k8s-pod-network.40e15a2df38b394ebcc0b09ae4e257f4a085432c26d295e288afe1462f4ec499" Workload="localhost-k8s-csi--node--driver--nsdz2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00034f6d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-nsdz2", "timestamp":"2025-10-29 00:44:52.39267177 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 00:44:52.478475 containerd[1613]: 2025-10-29 00:44:52.393 [INFO][4697] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 00:44:52.478475 containerd[1613]: 2025-10-29 00:44:52.393 [INFO][4697] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 29 00:44:52.478475 containerd[1613]: 2025-10-29 00:44:52.393 [INFO][4697] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 29 00:44:52.478475 containerd[1613]: 2025-10-29 00:44:52.406 [INFO][4697] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.40e15a2df38b394ebcc0b09ae4e257f4a085432c26d295e288afe1462f4ec499" host="localhost" Oct 29 00:44:52.478475 containerd[1613]: 2025-10-29 00:44:52.411 [INFO][4697] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 29 00:44:52.478475 containerd[1613]: 2025-10-29 00:44:52.419 [INFO][4697] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 29 00:44:52.478475 containerd[1613]: 2025-10-29 00:44:52.421 [INFO][4697] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 29 00:44:52.478475 containerd[1613]: 2025-10-29 00:44:52.423 [INFO][4697] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 29 00:44:52.478475 containerd[1613]: 2025-10-29 00:44:52.423 [INFO][4697] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.40e15a2df38b394ebcc0b09ae4e257f4a085432c26d295e288afe1462f4ec499" host="localhost" Oct 29 00:44:52.478475 containerd[1613]: 2025-10-29 00:44:52.425 [INFO][4697] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.40e15a2df38b394ebcc0b09ae4e257f4a085432c26d295e288afe1462f4ec499 Oct 29 00:44:52.478475 containerd[1613]: 2025-10-29 00:44:52.429 [INFO][4697] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.40e15a2df38b394ebcc0b09ae4e257f4a085432c26d295e288afe1462f4ec499" host="localhost" Oct 29 00:44:52.478475 containerd[1613]: 2025-10-29 00:44:52.438 [INFO][4697] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.40e15a2df38b394ebcc0b09ae4e257f4a085432c26d295e288afe1462f4ec499" host="localhost" Oct 29 00:44:52.478475 containerd[1613]: 2025-10-29 00:44:52.438 [INFO][4697] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.40e15a2df38b394ebcc0b09ae4e257f4a085432c26d295e288afe1462f4ec499" host="localhost" Oct 29 00:44:52.478475 containerd[1613]: 2025-10-29 00:44:52.438 [INFO][4697] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 00:44:52.478475 containerd[1613]: 2025-10-29 00:44:52.438 [INFO][4697] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="40e15a2df38b394ebcc0b09ae4e257f4a085432c26d295e288afe1462f4ec499" HandleID="k8s-pod-network.40e15a2df38b394ebcc0b09ae4e257f4a085432c26d295e288afe1462f4ec499" Workload="localhost-k8s-csi--node--driver--nsdz2-eth0" Oct 29 00:44:52.479444 containerd[1613]: 2025-10-29 00:44:52.445 [INFO][4636] cni-plugin/k8s.go 418: Populated endpoint ContainerID="40e15a2df38b394ebcc0b09ae4e257f4a085432c26d295e288afe1462f4ec499" Namespace="calico-system" Pod="csi-node-driver-nsdz2" WorkloadEndpoint="localhost-k8s-csi--node--driver--nsdz2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--nsdz2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7c6565b5-13e1-473b-b977-3ab4cab19c9a", ResourceVersion:"718", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 0, 44, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-nsdz2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1c1d3fea121", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 00:44:52.479444 containerd[1613]: 2025-10-29 00:44:52.445 [INFO][4636] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="40e15a2df38b394ebcc0b09ae4e257f4a085432c26d295e288afe1462f4ec499" Namespace="calico-system" Pod="csi-node-driver-nsdz2" WorkloadEndpoint="localhost-k8s-csi--node--driver--nsdz2-eth0" Oct 29 00:44:52.479444 containerd[1613]: 2025-10-29 00:44:52.446 [INFO][4636] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1c1d3fea121 ContainerID="40e15a2df38b394ebcc0b09ae4e257f4a085432c26d295e288afe1462f4ec499" Namespace="calico-system" Pod="csi-node-driver-nsdz2" WorkloadEndpoint="localhost-k8s-csi--node--driver--nsdz2-eth0" Oct 29 00:44:52.479444 containerd[1613]: 2025-10-29 00:44:52.456 [INFO][4636] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="40e15a2df38b394ebcc0b09ae4e257f4a085432c26d295e288afe1462f4ec499" Namespace="calico-system" Pod="csi-node-driver-nsdz2" WorkloadEndpoint="localhost-k8s-csi--node--driver--nsdz2-eth0" Oct 29 00:44:52.479444 containerd[1613]: 2025-10-29 00:44:52.456 [INFO][4636] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="40e15a2df38b394ebcc0b09ae4e257f4a085432c26d295e288afe1462f4ec499" 
Namespace="calico-system" Pod="csi-node-driver-nsdz2" WorkloadEndpoint="localhost-k8s-csi--node--driver--nsdz2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--nsdz2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7c6565b5-13e1-473b-b977-3ab4cab19c9a", ResourceVersion:"718", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 0, 44, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"40e15a2df38b394ebcc0b09ae4e257f4a085432c26d295e288afe1462f4ec499", Pod:"csi-node-driver-nsdz2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1c1d3fea121", MAC:"ce:26:46:ae:d0:55", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 00:44:52.479444 containerd[1613]: 2025-10-29 00:44:52.467 [INFO][4636] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="40e15a2df38b394ebcc0b09ae4e257f4a085432c26d295e288afe1462f4ec499" Namespace="calico-system" Pod="csi-node-driver-nsdz2" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--nsdz2-eth0" Oct 29 00:44:52.512361 containerd[1613]: time="2025-10-29T00:44:52.512306312Z" level=info msg="connecting to shim 40e15a2df38b394ebcc0b09ae4e257f4a085432c26d295e288afe1462f4ec499" address="unix:///run/containerd/s/7b849a5f00162a5e29b7b120eda0202f4268b766c5cb9394c7294e7196f27821" namespace=k8s.io protocol=ttrpc version=3 Oct 29 00:44:52.546782 systemd[1]: Started cri-containerd-40e15a2df38b394ebcc0b09ae4e257f4a085432c26d295e288afe1462f4ec499.scope - libcontainer container 40e15a2df38b394ebcc0b09ae4e257f4a085432c26d295e288afe1462f4ec499. Oct 29 00:44:52.551323 kubelet[2761]: E1029 00:44:52.551288 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:44:52.552329 kubelet[2761]: E1029 00:44:52.552262 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d776986cb-zqjs2" podUID="fc5d07a9-e55f-4ffc-bf62-2522098f23ea" Oct 29 00:44:52.553399 kubelet[2761]: E1029 00:44:52.553334 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not 
found\"" pod="calico-system/goldmane-7c778bb748-jzb5c" podUID="225130ab-318f-4752-a3f6-b2cc6751e084" Oct 29 00:44:52.558910 systemd-networkd[1502]: cali364cff680e7: Link UP Oct 29 00:44:52.562607 systemd-networkd[1502]: cali364cff680e7: Gained carrier Oct 29 00:44:52.573862 systemd-resolved[1306]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 00:44:52.586603 containerd[1613]: 2025-10-29 00:44:52.368 [INFO][4673] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6d776986cb--mkzq4-eth0 calico-apiserver-6d776986cb- calico-apiserver a2ccb739-fac3-4c48-b3e6-829701cb5ed8 830 0 2025-10-29 00:44:25 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6d776986cb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6d776986cb-mkzq4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali364cff680e7 [] [] }} ContainerID="583f912a1246c3a7e51094b0dbc273a7283b600ce08313e1ad0511f228c4853f" Namespace="calico-apiserver" Pod="calico-apiserver-6d776986cb-mkzq4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d776986cb--mkzq4-" Oct 29 00:44:52.586603 containerd[1613]: 2025-10-29 00:44:52.368 [INFO][4673] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="583f912a1246c3a7e51094b0dbc273a7283b600ce08313e1ad0511f228c4853f" Namespace="calico-apiserver" Pod="calico-apiserver-6d776986cb-mkzq4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d776986cb--mkzq4-eth0" Oct 29 00:44:52.586603 containerd[1613]: 2025-10-29 00:44:52.414 [INFO][4705] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="583f912a1246c3a7e51094b0dbc273a7283b600ce08313e1ad0511f228c4853f" 
HandleID="k8s-pod-network.583f912a1246c3a7e51094b0dbc273a7283b600ce08313e1ad0511f228c4853f" Workload="localhost-k8s-calico--apiserver--6d776986cb--mkzq4-eth0" Oct 29 00:44:52.586603 containerd[1613]: 2025-10-29 00:44:52.414 [INFO][4705] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="583f912a1246c3a7e51094b0dbc273a7283b600ce08313e1ad0511f228c4853f" HandleID="k8s-pod-network.583f912a1246c3a7e51094b0dbc273a7283b600ce08313e1ad0511f228c4853f" Workload="localhost-k8s-calico--apiserver--6d776986cb--mkzq4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000522ac0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6d776986cb-mkzq4", "timestamp":"2025-10-29 00:44:52.414318826 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 00:44:52.586603 containerd[1613]: 2025-10-29 00:44:52.415 [INFO][4705] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 00:44:52.586603 containerd[1613]: 2025-10-29 00:44:52.438 [INFO][4705] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 29 00:44:52.586603 containerd[1613]: 2025-10-29 00:44:52.438 [INFO][4705] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 29 00:44:52.586603 containerd[1613]: 2025-10-29 00:44:52.507 [INFO][4705] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.583f912a1246c3a7e51094b0dbc273a7283b600ce08313e1ad0511f228c4853f" host="localhost" Oct 29 00:44:52.586603 containerd[1613]: 2025-10-29 00:44:52.514 [INFO][4705] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 29 00:44:52.586603 containerd[1613]: 2025-10-29 00:44:52.521 [INFO][4705] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 29 00:44:52.586603 containerd[1613]: 2025-10-29 00:44:52.524 [INFO][4705] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 29 00:44:52.586603 containerd[1613]: 2025-10-29 00:44:52.528 [INFO][4705] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 29 00:44:52.586603 containerd[1613]: 2025-10-29 00:44:52.529 [INFO][4705] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.583f912a1246c3a7e51094b0dbc273a7283b600ce08313e1ad0511f228c4853f" host="localhost" Oct 29 00:44:52.586603 containerd[1613]: 2025-10-29 00:44:52.531 [INFO][4705] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.583f912a1246c3a7e51094b0dbc273a7283b600ce08313e1ad0511f228c4853f Oct 29 00:44:52.586603 containerd[1613]: 2025-10-29 00:44:52.538 [INFO][4705] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.583f912a1246c3a7e51094b0dbc273a7283b600ce08313e1ad0511f228c4853f" host="localhost" Oct 29 00:44:52.586603 containerd[1613]: 2025-10-29 00:44:52.543 [INFO][4705] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.583f912a1246c3a7e51094b0dbc273a7283b600ce08313e1ad0511f228c4853f" host="localhost" Oct 29 00:44:52.586603 containerd[1613]: 2025-10-29 00:44:52.543 [INFO][4705] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.583f912a1246c3a7e51094b0dbc273a7283b600ce08313e1ad0511f228c4853f" host="localhost" Oct 29 00:44:52.586603 containerd[1613]: 2025-10-29 00:44:52.544 [INFO][4705] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 00:44:52.586603 containerd[1613]: 2025-10-29 00:44:52.544 [INFO][4705] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="583f912a1246c3a7e51094b0dbc273a7283b600ce08313e1ad0511f228c4853f" HandleID="k8s-pod-network.583f912a1246c3a7e51094b0dbc273a7283b600ce08313e1ad0511f228c4853f" Workload="localhost-k8s-calico--apiserver--6d776986cb--mkzq4-eth0" Oct 29 00:44:52.587156 containerd[1613]: 2025-10-29 00:44:52.549 [INFO][4673] cni-plugin/k8s.go 418: Populated endpoint ContainerID="583f912a1246c3a7e51094b0dbc273a7283b600ce08313e1ad0511f228c4853f" Namespace="calico-apiserver" Pod="calico-apiserver-6d776986cb-mkzq4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d776986cb--mkzq4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d776986cb--mkzq4-eth0", GenerateName:"calico-apiserver-6d776986cb-", Namespace:"calico-apiserver", SelfLink:"", UID:"a2ccb739-fac3-4c48-b3e6-829701cb5ed8", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 0, 44, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d776986cb", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6d776986cb-mkzq4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali364cff680e7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 00:44:52.587156 containerd[1613]: 2025-10-29 00:44:52.549 [INFO][4673] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="583f912a1246c3a7e51094b0dbc273a7283b600ce08313e1ad0511f228c4853f" Namespace="calico-apiserver" Pod="calico-apiserver-6d776986cb-mkzq4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d776986cb--mkzq4-eth0" Oct 29 00:44:52.587156 containerd[1613]: 2025-10-29 00:44:52.549 [INFO][4673] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali364cff680e7 ContainerID="583f912a1246c3a7e51094b0dbc273a7283b600ce08313e1ad0511f228c4853f" Namespace="calico-apiserver" Pod="calico-apiserver-6d776986cb-mkzq4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d776986cb--mkzq4-eth0" Oct 29 00:44:52.587156 containerd[1613]: 2025-10-29 00:44:52.571 [INFO][4673] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="583f912a1246c3a7e51094b0dbc273a7283b600ce08313e1ad0511f228c4853f" Namespace="calico-apiserver" Pod="calico-apiserver-6d776986cb-mkzq4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d776986cb--mkzq4-eth0" Oct 29 00:44:52.587156 containerd[1613]: 2025-10-29 00:44:52.572 [INFO][4673] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="583f912a1246c3a7e51094b0dbc273a7283b600ce08313e1ad0511f228c4853f" Namespace="calico-apiserver" Pod="calico-apiserver-6d776986cb-mkzq4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d776986cb--mkzq4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d776986cb--mkzq4-eth0", GenerateName:"calico-apiserver-6d776986cb-", Namespace:"calico-apiserver", SelfLink:"", UID:"a2ccb739-fac3-4c48-b3e6-829701cb5ed8", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 0, 44, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d776986cb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"583f912a1246c3a7e51094b0dbc273a7283b600ce08313e1ad0511f228c4853f", Pod:"calico-apiserver-6d776986cb-mkzq4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali364cff680e7", MAC:"de:6a:6a:52:88:c9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 00:44:52.587156 containerd[1613]: 2025-10-29 00:44:52.580 [INFO][4673] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="583f912a1246c3a7e51094b0dbc273a7283b600ce08313e1ad0511f228c4853f" Namespace="calico-apiserver" Pod="calico-apiserver-6d776986cb-mkzq4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d776986cb--mkzq4-eth0" Oct 29 00:44:52.606841 containerd[1613]: time="2025-10-29T00:44:52.606794224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nsdz2,Uid:7c6565b5-13e1-473b-b977-3ab4cab19c9a,Namespace:calico-system,Attempt:0,} returns sandbox id \"40e15a2df38b394ebcc0b09ae4e257f4a085432c26d295e288afe1462f4ec499\"" Oct 29 00:44:52.610306 containerd[1613]: time="2025-10-29T00:44:52.610238324Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 29 00:44:52.619680 containerd[1613]: time="2025-10-29T00:44:52.619550157Z" level=info msg="connecting to shim 583f912a1246c3a7e51094b0dbc273a7283b600ce08313e1ad0511f228c4853f" address="unix:///run/containerd/s/e1db6d72367f62a70cc018e94fb2548bf17f5f6f4383afcc3500e1b8b9b4c151" namespace=k8s.io protocol=ttrpc version=3 Oct 29 00:44:52.652834 systemd[1]: Started cri-containerd-583f912a1246c3a7e51094b0dbc273a7283b600ce08313e1ad0511f228c4853f.scope - libcontainer container 583f912a1246c3a7e51094b0dbc273a7283b600ce08313e1ad0511f228c4853f. 
Oct 29 00:44:52.658499 systemd-networkd[1502]: cali68310fa3c0e: Link UP Oct 29 00:44:52.661270 systemd-networkd[1502]: cali68310fa3c0e: Gained carrier Oct 29 00:44:52.667867 systemd-networkd[1502]: calib3de63a815a: Gained IPv6LL Oct 29 00:44:52.679035 systemd-resolved[1306]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 00:44:52.685646 containerd[1613]: 2025-10-29 00:44:52.406 [INFO][4649] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--557bfcb88d--x5jf7-eth0 calico-kube-controllers-557bfcb88d- calico-system 13741636-ed02-4263-9179-b37fa6d45218 821 0 2025-10-29 00:44:29 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:557bfcb88d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-557bfcb88d-x5jf7 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali68310fa3c0e [] [] }} ContainerID="bca6b08758ee2faf68cb0e9e45e6a88355e1134ae52e6fbc453563ede3a40cf9" Namespace="calico-system" Pod="calico-kube-controllers-557bfcb88d-x5jf7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--557bfcb88d--x5jf7-" Oct 29 00:44:52.685646 containerd[1613]: 2025-10-29 00:44:52.406 [INFO][4649] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bca6b08758ee2faf68cb0e9e45e6a88355e1134ae52e6fbc453563ede3a40cf9" Namespace="calico-system" Pod="calico-kube-controllers-557bfcb88d-x5jf7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--557bfcb88d--x5jf7-eth0" Oct 29 00:44:52.685646 containerd[1613]: 2025-10-29 00:44:52.460 [INFO][4720] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="bca6b08758ee2faf68cb0e9e45e6a88355e1134ae52e6fbc453563ede3a40cf9" HandleID="k8s-pod-network.bca6b08758ee2faf68cb0e9e45e6a88355e1134ae52e6fbc453563ede3a40cf9" Workload="localhost-k8s-calico--kube--controllers--557bfcb88d--x5jf7-eth0" Oct 29 00:44:52.685646 containerd[1613]: 2025-10-29 00:44:52.461 [INFO][4720] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="bca6b08758ee2faf68cb0e9e45e6a88355e1134ae52e6fbc453563ede3a40cf9" HandleID="k8s-pod-network.bca6b08758ee2faf68cb0e9e45e6a88355e1134ae52e6fbc453563ede3a40cf9" Workload="localhost-k8s-calico--kube--controllers--557bfcb88d--x5jf7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000119870), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-557bfcb88d-x5jf7", "timestamp":"2025-10-29 00:44:52.460543511 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 00:44:52.685646 containerd[1613]: 2025-10-29 00:44:52.461 [INFO][4720] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 00:44:52.685646 containerd[1613]: 2025-10-29 00:44:52.544 [INFO][4720] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 29 00:44:52.685646 containerd[1613]: 2025-10-29 00:44:52.544 [INFO][4720] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 29 00:44:52.685646 containerd[1613]: 2025-10-29 00:44:52.607 [INFO][4720] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bca6b08758ee2faf68cb0e9e45e6a88355e1134ae52e6fbc453563ede3a40cf9" host="localhost" Oct 29 00:44:52.685646 containerd[1613]: 2025-10-29 00:44:52.615 [INFO][4720] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 29 00:44:52.685646 containerd[1613]: 2025-10-29 00:44:52.621 [INFO][4720] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 29 00:44:52.685646 containerd[1613]: 2025-10-29 00:44:52.624 [INFO][4720] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 29 00:44:52.685646 containerd[1613]: 2025-10-29 00:44:52.626 [INFO][4720] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 29 00:44:52.685646 containerd[1613]: 2025-10-29 00:44:52.626 [INFO][4720] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bca6b08758ee2faf68cb0e9e45e6a88355e1134ae52e6fbc453563ede3a40cf9" host="localhost" Oct 29 00:44:52.685646 containerd[1613]: 2025-10-29 00:44:52.628 [INFO][4720] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.bca6b08758ee2faf68cb0e9e45e6a88355e1134ae52e6fbc453563ede3a40cf9 Oct 29 00:44:52.685646 containerd[1613]: 2025-10-29 00:44:52.632 [INFO][4720] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bca6b08758ee2faf68cb0e9e45e6a88355e1134ae52e6fbc453563ede3a40cf9" host="localhost" Oct 29 00:44:52.685646 containerd[1613]: 2025-10-29 00:44:52.646 [INFO][4720] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.bca6b08758ee2faf68cb0e9e45e6a88355e1134ae52e6fbc453563ede3a40cf9" host="localhost" Oct 29 00:44:52.685646 containerd[1613]: 2025-10-29 00:44:52.646 [INFO][4720] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.bca6b08758ee2faf68cb0e9e45e6a88355e1134ae52e6fbc453563ede3a40cf9" host="localhost" Oct 29 00:44:52.685646 containerd[1613]: 2025-10-29 00:44:52.646 [INFO][4720] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 00:44:52.685646 containerd[1613]: 2025-10-29 00:44:52.646 [INFO][4720] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="bca6b08758ee2faf68cb0e9e45e6a88355e1134ae52e6fbc453563ede3a40cf9" HandleID="k8s-pod-network.bca6b08758ee2faf68cb0e9e45e6a88355e1134ae52e6fbc453563ede3a40cf9" Workload="localhost-k8s-calico--kube--controllers--557bfcb88d--x5jf7-eth0" Oct 29 00:44:52.686474 containerd[1613]: 2025-10-29 00:44:52.649 [INFO][4649] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bca6b08758ee2faf68cb0e9e45e6a88355e1134ae52e6fbc453563ede3a40cf9" Namespace="calico-system" Pod="calico-kube-controllers-557bfcb88d-x5jf7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--557bfcb88d--x5jf7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--557bfcb88d--x5jf7-eth0", GenerateName:"calico-kube-controllers-557bfcb88d-", Namespace:"calico-system", SelfLink:"", UID:"13741636-ed02-4263-9179-b37fa6d45218", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 0, 44, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"557bfcb88d", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-557bfcb88d-x5jf7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali68310fa3c0e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 00:44:52.686474 containerd[1613]: 2025-10-29 00:44:52.649 [INFO][4649] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="bca6b08758ee2faf68cb0e9e45e6a88355e1134ae52e6fbc453563ede3a40cf9" Namespace="calico-system" Pod="calico-kube-controllers-557bfcb88d-x5jf7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--557bfcb88d--x5jf7-eth0" Oct 29 00:44:52.686474 containerd[1613]: 2025-10-29 00:44:52.649 [INFO][4649] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali68310fa3c0e ContainerID="bca6b08758ee2faf68cb0e9e45e6a88355e1134ae52e6fbc453563ede3a40cf9" Namespace="calico-system" Pod="calico-kube-controllers-557bfcb88d-x5jf7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--557bfcb88d--x5jf7-eth0" Oct 29 00:44:52.686474 containerd[1613]: 2025-10-29 00:44:52.658 [INFO][4649] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bca6b08758ee2faf68cb0e9e45e6a88355e1134ae52e6fbc453563ede3a40cf9" Namespace="calico-system" Pod="calico-kube-controllers-557bfcb88d-x5jf7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--557bfcb88d--x5jf7-eth0" Oct 29 00:44:52.686474 containerd[1613]: 
2025-10-29 00:44:52.662 [INFO][4649] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bca6b08758ee2faf68cb0e9e45e6a88355e1134ae52e6fbc453563ede3a40cf9" Namespace="calico-system" Pod="calico-kube-controllers-557bfcb88d-x5jf7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--557bfcb88d--x5jf7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--557bfcb88d--x5jf7-eth0", GenerateName:"calico-kube-controllers-557bfcb88d-", Namespace:"calico-system", SelfLink:"", UID:"13741636-ed02-4263-9179-b37fa6d45218", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 0, 44, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"557bfcb88d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bca6b08758ee2faf68cb0e9e45e6a88355e1134ae52e6fbc453563ede3a40cf9", Pod:"calico-kube-controllers-557bfcb88d-x5jf7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali68310fa3c0e", MAC:"66:ce:40:40:ca:3b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 00:44:52.686474 containerd[1613]: 
2025-10-29 00:44:52.682 [INFO][4649] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bca6b08758ee2faf68cb0e9e45e6a88355e1134ae52e6fbc453563ede3a40cf9" Namespace="calico-system" Pod="calico-kube-controllers-557bfcb88d-x5jf7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--557bfcb88d--x5jf7-eth0" Oct 29 00:44:52.763805 systemd-networkd[1502]: calia69b8b05cba: Link UP Oct 29 00:44:52.765492 systemd-networkd[1502]: calia69b8b05cba: Gained carrier Oct 29 00:44:52.774020 containerd[1613]: time="2025-10-29T00:44:52.773193907Z" level=info msg="connecting to shim bca6b08758ee2faf68cb0e9e45e6a88355e1134ae52e6fbc453563ede3a40cf9" address="unix:///run/containerd/s/270c2d55ac3aa2483ef4374766161fd5578bf104eb588342f66b8e898898f925" namespace=k8s.io protocol=ttrpc version=3 Oct 29 00:44:52.775437 containerd[1613]: time="2025-10-29T00:44:52.775392778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d776986cb-mkzq4,Uid:a2ccb739-fac3-4c48-b3e6-829701cb5ed8,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"583f912a1246c3a7e51094b0dbc273a7283b600ce08313e1ad0511f228c4853f\"" Oct 29 00:44:52.787019 containerd[1613]: 2025-10-29 00:44:52.410 [INFO][4663] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--j799m-eth0 coredns-66bc5c9577- kube-system f8a0a8b3-e8df-4562-92d8-5506f107f647 825 0 2025-10-29 00:44:14 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-j799m eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia69b8b05cba [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="c86b2d132c74d3fe0622a39e10fc31fe051c8a853a9d5763408f5958ccec3166" 
Namespace="kube-system" Pod="coredns-66bc5c9577-j799m" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--j799m-" Oct 29 00:44:52.787019 containerd[1613]: 2025-10-29 00:44:52.410 [INFO][4663] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c86b2d132c74d3fe0622a39e10fc31fe051c8a853a9d5763408f5958ccec3166" Namespace="kube-system" Pod="coredns-66bc5c9577-j799m" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--j799m-eth0" Oct 29 00:44:52.787019 containerd[1613]: 2025-10-29 00:44:52.484 [INFO][4726] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c86b2d132c74d3fe0622a39e10fc31fe051c8a853a9d5763408f5958ccec3166" HandleID="k8s-pod-network.c86b2d132c74d3fe0622a39e10fc31fe051c8a853a9d5763408f5958ccec3166" Workload="localhost-k8s-coredns--66bc5c9577--j799m-eth0" Oct 29 00:44:52.787019 containerd[1613]: 2025-10-29 00:44:52.484 [INFO][4726] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c86b2d132c74d3fe0622a39e10fc31fe051c8a853a9d5763408f5958ccec3166" HandleID="k8s-pod-network.c86b2d132c74d3fe0622a39e10fc31fe051c8a853a9d5763408f5958ccec3166" Workload="localhost-k8s-coredns--66bc5c9577--j799m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c77c0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-j799m", "timestamp":"2025-10-29 00:44:52.484078704 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 00:44:52.787019 containerd[1613]: 2025-10-29 00:44:52.484 [INFO][4726] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 00:44:52.787019 containerd[1613]: 2025-10-29 00:44:52.646 [INFO][4726] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 29 00:44:52.787019 containerd[1613]: 2025-10-29 00:44:52.646 [INFO][4726] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 29 00:44:52.787019 containerd[1613]: 2025-10-29 00:44:52.707 [INFO][4726] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c86b2d132c74d3fe0622a39e10fc31fe051c8a853a9d5763408f5958ccec3166" host="localhost" Oct 29 00:44:52.787019 containerd[1613]: 2025-10-29 00:44:52.714 [INFO][4726] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 29 00:44:52.787019 containerd[1613]: 2025-10-29 00:44:52.722 [INFO][4726] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 29 00:44:52.787019 containerd[1613]: 2025-10-29 00:44:52.727 [INFO][4726] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 29 00:44:52.787019 containerd[1613]: 2025-10-29 00:44:52.731 [INFO][4726] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 29 00:44:52.787019 containerd[1613]: 2025-10-29 00:44:52.731 [INFO][4726] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c86b2d132c74d3fe0622a39e10fc31fe051c8a853a9d5763408f5958ccec3166" host="localhost" Oct 29 00:44:52.787019 containerd[1613]: 2025-10-29 00:44:52.734 [INFO][4726] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c86b2d132c74d3fe0622a39e10fc31fe051c8a853a9d5763408f5958ccec3166 Oct 29 00:44:52.787019 containerd[1613]: 2025-10-29 00:44:52.741 [INFO][4726] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c86b2d132c74d3fe0622a39e10fc31fe051c8a853a9d5763408f5958ccec3166" host="localhost" Oct 29 00:44:52.787019 containerd[1613]: 2025-10-29 00:44:52.751 [INFO][4726] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.c86b2d132c74d3fe0622a39e10fc31fe051c8a853a9d5763408f5958ccec3166" host="localhost" Oct 29 00:44:52.787019 containerd[1613]: 2025-10-29 00:44:52.751 [INFO][4726] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.c86b2d132c74d3fe0622a39e10fc31fe051c8a853a9d5763408f5958ccec3166" host="localhost" Oct 29 00:44:52.787019 containerd[1613]: 2025-10-29 00:44:52.752 [INFO][4726] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 00:44:52.787019 containerd[1613]: 2025-10-29 00:44:52.752 [INFO][4726] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="c86b2d132c74d3fe0622a39e10fc31fe051c8a853a9d5763408f5958ccec3166" HandleID="k8s-pod-network.c86b2d132c74d3fe0622a39e10fc31fe051c8a853a9d5763408f5958ccec3166" Workload="localhost-k8s-coredns--66bc5c9577--j799m-eth0" Oct 29 00:44:52.787563 containerd[1613]: 2025-10-29 00:44:52.759 [INFO][4663] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c86b2d132c74d3fe0622a39e10fc31fe051c8a853a9d5763408f5958ccec3166" Namespace="kube-system" Pod="coredns-66bc5c9577-j799m" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--j799m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--j799m-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"f8a0a8b3-e8df-4562-92d8-5506f107f647", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 0, 44, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-j799m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia69b8b05cba", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 00:44:52.787563 containerd[1613]: 2025-10-29 00:44:52.759 [INFO][4663] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="c86b2d132c74d3fe0622a39e10fc31fe051c8a853a9d5763408f5958ccec3166" Namespace="kube-system" Pod="coredns-66bc5c9577-j799m" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--j799m-eth0" Oct 29 00:44:52.787563 containerd[1613]: 2025-10-29 00:44:52.759 [INFO][4663] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia69b8b05cba ContainerID="c86b2d132c74d3fe0622a39e10fc31fe051c8a853a9d5763408f5958ccec3166" Namespace="kube-system" Pod="coredns-66bc5c9577-j799m" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--j799m-eth0" Oct 29 
00:44:52.787563 containerd[1613]: 2025-10-29 00:44:52.766 [INFO][4663] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c86b2d132c74d3fe0622a39e10fc31fe051c8a853a9d5763408f5958ccec3166" Namespace="kube-system" Pod="coredns-66bc5c9577-j799m" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--j799m-eth0" Oct 29 00:44:52.787563 containerd[1613]: 2025-10-29 00:44:52.766 [INFO][4663] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c86b2d132c74d3fe0622a39e10fc31fe051c8a853a9d5763408f5958ccec3166" Namespace="kube-system" Pod="coredns-66bc5c9577-j799m" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--j799m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--j799m-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"f8a0a8b3-e8df-4562-92d8-5506f107f647", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 0, 44, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c86b2d132c74d3fe0622a39e10fc31fe051c8a853a9d5763408f5958ccec3166", Pod:"coredns-66bc5c9577-j799m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia69b8b05cba", 
MAC:"f6:c3:e4:30:cd:69", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 00:44:52.787563 containerd[1613]: 2025-10-29 00:44:52.779 [INFO][4663] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c86b2d132c74d3fe0622a39e10fc31fe051c8a853a9d5763408f5958ccec3166" Namespace="kube-system" Pod="coredns-66bc5c9577-j799m" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--j799m-eth0" Oct 29 00:44:52.800730 systemd[1]: Started cri-containerd-bca6b08758ee2faf68cb0e9e45e6a88355e1134ae52e6fbc453563ede3a40cf9.scope - libcontainer container bca6b08758ee2faf68cb0e9e45e6a88355e1134ae52e6fbc453563ede3a40cf9. 
Oct 29 00:44:52.808271 containerd[1613]: time="2025-10-29T00:44:52.808230186Z" level=info msg="connecting to shim c86b2d132c74d3fe0622a39e10fc31fe051c8a853a9d5763408f5958ccec3166" address="unix:///run/containerd/s/22f70f242cfe4ba40af42665da5594a863ab07ff590e4dd6ef887d3b051fd01a" namespace=k8s.io protocol=ttrpc version=3 Oct 29 00:44:52.817271 systemd-resolved[1306]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 00:44:52.831747 systemd[1]: Started cri-containerd-c86b2d132c74d3fe0622a39e10fc31fe051c8a853a9d5763408f5958ccec3166.scope - libcontainer container c86b2d132c74d3fe0622a39e10fc31fe051c8a853a9d5763408f5958ccec3166. Oct 29 00:44:52.848459 systemd-resolved[1306]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 00:44:52.865883 containerd[1613]: time="2025-10-29T00:44:52.865810945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-557bfcb88d-x5jf7,Uid:13741636-ed02-4263-9179-b37fa6d45218,Namespace:calico-system,Attempt:0,} returns sandbox id \"bca6b08758ee2faf68cb0e9e45e6a88355e1134ae52e6fbc453563ede3a40cf9\"" Oct 29 00:44:52.887295 containerd[1613]: time="2025-10-29T00:44:52.887247164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-j799m,Uid:f8a0a8b3-e8df-4562-92d8-5506f107f647,Namespace:kube-system,Attempt:0,} returns sandbox id \"c86b2d132c74d3fe0622a39e10fc31fe051c8a853a9d5763408f5958ccec3166\"" Oct 29 00:44:52.888267 kubelet[2761]: E1029 00:44:52.888233 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:44:52.892064 containerd[1613]: time="2025-10-29T00:44:52.892027294Z" level=info msg="CreateContainer within sandbox \"c86b2d132c74d3fe0622a39e10fc31fe051c8a853a9d5763408f5958ccec3166\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 29 
00:44:52.907996 containerd[1613]: time="2025-10-29T00:44:52.907952371Z" level=info msg="Container 65f4ea460c7d3529409a9b41daaa4f442b4f87f756cdbc1ef4044d782cb2e93c: CDI devices from CRI Config.CDIDevices: []" Oct 29 00:44:52.913682 containerd[1613]: time="2025-10-29T00:44:52.913651487Z" level=info msg="CreateContainer within sandbox \"c86b2d132c74d3fe0622a39e10fc31fe051c8a853a9d5763408f5958ccec3166\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"65f4ea460c7d3529409a9b41daaa4f442b4f87f756cdbc1ef4044d782cb2e93c\"" Oct 29 00:44:52.914222 containerd[1613]: time="2025-10-29T00:44:52.914176724Z" level=info msg="StartContainer for \"65f4ea460c7d3529409a9b41daaa4f442b4f87f756cdbc1ef4044d782cb2e93c\"" Oct 29 00:44:52.915117 containerd[1613]: time="2025-10-29T00:44:52.915093256Z" level=info msg="connecting to shim 65f4ea460c7d3529409a9b41daaa4f442b4f87f756cdbc1ef4044d782cb2e93c" address="unix:///run/containerd/s/22f70f242cfe4ba40af42665da5594a863ab07ff590e4dd6ef887d3b051fd01a" protocol=ttrpc version=3 Oct 29 00:44:52.939858 systemd[1]: Started cri-containerd-65f4ea460c7d3529409a9b41daaa4f442b4f87f756cdbc1ef4044d782cb2e93c.scope - libcontainer container 65f4ea460c7d3529409a9b41daaa4f442b4f87f756cdbc1ef4044d782cb2e93c. 
Oct 29 00:44:52.973613 containerd[1613]: time="2025-10-29T00:44:52.973540142Z" level=info msg="StartContainer for \"65f4ea460c7d3529409a9b41daaa4f442b4f87f756cdbc1ef4044d782cb2e93c\" returns successfully" Oct 29 00:44:53.047225 containerd[1613]: time="2025-10-29T00:44:53.047167216Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:44:53.048352 containerd[1613]: time="2025-10-29T00:44:53.048299122Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 29 00:44:53.048478 containerd[1613]: time="2025-10-29T00:44:53.048361219Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 29 00:44:53.048561 kubelet[2761]: E1029 00:44:53.048527 2761 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 29 00:44:53.048638 kubelet[2761]: E1029 00:44:53.048570 2761 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 29 00:44:53.048846 kubelet[2761]: E1029 00:44:53.048820 2761 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-nsdz2_calico-system(7c6565b5-13e1-473b-b977-3ab4cab19c9a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 29 00:44:53.049220 containerd[1613]: time="2025-10-29T00:44:53.049199613Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 29 00:44:53.375708 containerd[1613]: time="2025-10-29T00:44:53.375658303Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:44:53.376921 containerd[1613]: time="2025-10-29T00:44:53.376866973Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 29 00:44:53.376921 containerd[1613]: time="2025-10-29T00:44:53.376919863Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 29 00:44:53.377107 kubelet[2761]: E1029 00:44:53.377063 2761 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 00:44:53.377107 kubelet[2761]: E1029 00:44:53.377099 2761 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 00:44:53.377337 kubelet[2761]: E1029 00:44:53.377306 2761 kuberuntime_manager.go:1449] "Unhandled Error" 
err="container calico-apiserver start failed in pod calico-apiserver-6d776986cb-mkzq4_calico-apiserver(a2ccb739-fac3-4c48-b3e6-829701cb5ed8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 29 00:44:53.377337 kubelet[2761]: E1029 00:44:53.377340 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d776986cb-mkzq4" podUID="a2ccb739-fac3-4c48-b3e6-829701cb5ed8" Oct 29 00:44:53.377467 containerd[1613]: time="2025-10-29T00:44:53.377357785Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 29 00:44:53.396818 systemd[1]: Started sshd@8-10.0.0.95:22-10.0.0.1:52840.service - OpenSSH per-connection server daemon (10.0.0.1:52840). Oct 29 00:44:53.468362 sshd[4990]: Accepted publickey for core from 10.0.0.1 port 52840 ssh2: RSA SHA256:s8tPwnTXOeMVzisbNqqCPwj2+lnJNXB3KVszA1vES1U Oct 29 00:44:53.469843 sshd-session[4990]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:44:53.474652 systemd-logind[1578]: New session 9 of user core. Oct 29 00:44:53.483734 systemd[1]: Started session-9.scope - Session 9 of User core. 
Oct 29 00:44:53.555700 kubelet[2761]: E1029 00:44:53.555664 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:44:53.557984 kubelet[2761]: E1029 00:44:53.557935 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d776986cb-mkzq4" podUID="a2ccb739-fac3-4c48-b3e6-829701cb5ed8" Oct 29 00:44:53.575201 kubelet[2761]: I1029 00:44:53.573761 2761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-j799m" podStartSLOduration=39.573742508 podStartE2EDuration="39.573742508s" podCreationTimestamp="2025-10-29 00:44:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 00:44:53.572897671 +0000 UTC m=+45.386130290" watchObservedRunningTime="2025-10-29 00:44:53.573742508 +0000 UTC m=+45.386975127" Oct 29 00:44:53.651748 sshd[4993]: Connection closed by 10.0.0.1 port 52840 Oct 29 00:44:53.650819 sshd-session[4990]: pam_unix(sshd:session): session closed for user core Oct 29 00:44:53.656748 systemd[1]: sshd@8-10.0.0.95:22-10.0.0.1:52840.service: Deactivated successfully. Oct 29 00:44:53.659456 systemd[1]: session-9.scope: Deactivated successfully. Oct 29 00:44:53.660388 systemd-logind[1578]: Session 9 logged out. Waiting for processes to exit. Oct 29 00:44:53.661897 systemd-logind[1578]: Removed session 9. 
Oct 29 00:44:53.751627 containerd[1613]: time="2025-10-29T00:44:53.751545288Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:44:53.752637 containerd[1613]: time="2025-10-29T00:44:53.752568309Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 29 00:44:53.752784 containerd[1613]: time="2025-10-29T00:44:53.752664851Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 29 00:44:53.752875 kubelet[2761]: E1029 00:44:53.752824 2761 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 29 00:44:53.752918 kubelet[2761]: E1029 00:44:53.752879 2761 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 29 00:44:53.753167 kubelet[2761]: E1029 00:44:53.753120 2761 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-557bfcb88d-x5jf7_calico-system(13741636-ed02-4263-9179-b37fa6d45218): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 29 00:44:53.753305 containerd[1613]: time="2025-10-29T00:44:53.753148579Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 29 00:44:53.753345 kubelet[2761]: E1029 00:44:53.753186 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-557bfcb88d-x5jf7" podUID="13741636-ed02-4263-9179-b37fa6d45218" Oct 29 00:44:54.069653 containerd[1613]: time="2025-10-29T00:44:54.069612091Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:44:54.130821 containerd[1613]: time="2025-10-29T00:44:54.130766950Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 29 00:44:54.130885 containerd[1613]: time="2025-10-29T00:44:54.130808579Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 29 00:44:54.131107 kubelet[2761]: E1029 00:44:54.131063 2761 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 29 00:44:54.131177 kubelet[2761]: E1029 00:44:54.131111 2761 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 29 00:44:54.131223 kubelet[2761]: E1029 00:44:54.131190 2761 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-nsdz2_calico-system(7c6565b5-13e1-473b-b977-3ab4cab19c9a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 29 00:44:54.131280 kubelet[2761]: E1029 00:44:54.131239 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-nsdz2" podUID="7c6565b5-13e1-473b-b977-3ab4cab19c9a" Oct 29 00:44:54.263743 systemd-networkd[1502]: cali1c1d3fea121: Gained IPv6LL Oct 29 00:44:54.391784 systemd-networkd[1502]: cali364cff680e7: Gained IPv6LL Oct 29 00:44:54.563801 kubelet[2761]: E1029 00:44:54.563735 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:44:54.564510 kubelet[2761]: E1029 00:44:54.564461 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-557bfcb88d-x5jf7" podUID="13741636-ed02-4263-9179-b37fa6d45218" Oct 29 00:44:54.564720 kubelet[2761]: E1029 00:44:54.564554 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d776986cb-mkzq4" podUID="a2ccb739-fac3-4c48-b3e6-829701cb5ed8" Oct 29 00:44:54.564854 kubelet[2761]: E1029 00:44:54.564813 2761 
pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-nsdz2" podUID="7c6565b5-13e1-473b-b977-3ab4cab19c9a" Oct 29 00:44:54.647775 systemd-networkd[1502]: cali68310fa3c0e: Gained IPv6LL Oct 29 00:44:54.775800 systemd-networkd[1502]: calia69b8b05cba: Gained IPv6LL Oct 29 00:44:55.565299 kubelet[2761]: E1029 00:44:55.565264 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:44:58.286958 containerd[1613]: time="2025-10-29T00:44:58.286894589Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 29 00:44:58.657321 containerd[1613]: time="2025-10-29T00:44:58.657271025Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:44:58.666035 containerd[1613]: time="2025-10-29T00:44:58.665967754Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 29 00:44:58.666298 containerd[1613]: time="2025-10-29T00:44:58.666068883Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 29 00:44:58.666492 kubelet[2761]: E1029 00:44:58.666429 2761 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 29 00:44:58.666492 kubelet[2761]: E1029 00:44:58.666492 2761 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 29 00:44:58.667014 kubelet[2761]: E1029 00:44:58.666615 2761 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-787c59bf7c-jznwc_calico-system(8c66d588-d751-4050-9adc-78c4164786df): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 29 00:44:58.667716 containerd[1613]: time="2025-10-29T00:44:58.667468601Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 29 00:44:58.671683 systemd[1]: Started sshd@9-10.0.0.95:22-10.0.0.1:52844.service - OpenSSH per-connection server daemon (10.0.0.1:52844). 
Oct 29 00:44:58.737424 sshd[5020]: Accepted publickey for core from 10.0.0.1 port 52844 ssh2: RSA SHA256:s8tPwnTXOeMVzisbNqqCPwj2+lnJNXB3KVszA1vES1U Oct 29 00:44:58.739562 sshd-session[5020]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:44:58.744168 systemd-logind[1578]: New session 10 of user core. Oct 29 00:44:58.748713 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 29 00:44:58.869797 sshd[5029]: Connection closed by 10.0.0.1 port 52844 Oct 29 00:44:58.870263 sshd-session[5020]: pam_unix(sshd:session): session closed for user core Oct 29 00:44:58.880336 systemd[1]: sshd@9-10.0.0.95:22-10.0.0.1:52844.service: Deactivated successfully. Oct 29 00:44:58.882085 systemd[1]: session-10.scope: Deactivated successfully. Oct 29 00:44:58.882916 systemd-logind[1578]: Session 10 logged out. Waiting for processes to exit. Oct 29 00:44:58.885674 systemd[1]: Started sshd@10-10.0.0.95:22-10.0.0.1:52850.service - OpenSSH per-connection server daemon (10.0.0.1:52850). Oct 29 00:44:58.886480 systemd-logind[1578]: Removed session 10. Oct 29 00:44:58.944271 sshd[5043]: Accepted publickey for core from 10.0.0.1 port 52850 ssh2: RSA SHA256:s8tPwnTXOeMVzisbNqqCPwj2+lnJNXB3KVszA1vES1U Oct 29 00:44:58.945708 sshd-session[5043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:44:58.950304 systemd-logind[1578]: New session 11 of user core. Oct 29 00:44:58.956725 systemd[1]: Started session-11.scope - Session 11 of User core. 
Oct 29 00:44:59.010483 containerd[1613]: time="2025-10-29T00:44:59.010416808Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:44:59.011842 containerd[1613]: time="2025-10-29T00:44:59.011715908Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 29 00:44:59.011842 containerd[1613]: time="2025-10-29T00:44:59.011792992Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 29 00:44:59.012590 kubelet[2761]: E1029 00:44:59.012102 2761 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 29 00:44:59.012590 kubelet[2761]: E1029 00:44:59.012154 2761 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 29 00:44:59.012590 kubelet[2761]: E1029 00:44:59.012242 2761 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-787c59bf7c-jznwc_calico-system(8c66d588-d751-4050-9adc-78c4164786df): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 29 00:44:59.012745 kubelet[2761]: E1029 00:44:59.012283 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-787c59bf7c-jznwc" podUID="8c66d588-d751-4050-9adc-78c4164786df" Oct 29 00:44:59.112815 sshd[5046]: Connection closed by 10.0.0.1 port 52850 Oct 29 00:44:59.113874 sshd-session[5043]: pam_unix(sshd:session): session closed for user core Oct 29 00:44:59.128906 systemd[1]: sshd@10-10.0.0.95:22-10.0.0.1:52850.service: Deactivated successfully. Oct 29 00:44:59.132500 systemd[1]: session-11.scope: Deactivated successfully. Oct 29 00:44:59.134772 systemd-logind[1578]: Session 11 logged out. Waiting for processes to exit. Oct 29 00:44:59.137268 systemd[1]: Started sshd@11-10.0.0.95:22-10.0.0.1:52852.service - OpenSSH per-connection server daemon (10.0.0.1:52852). Oct 29 00:44:59.139402 systemd-logind[1578]: Removed session 11. 
Oct 29 00:44:59.198420 sshd[5058]: Accepted publickey for core from 10.0.0.1 port 52852 ssh2: RSA SHA256:s8tPwnTXOeMVzisbNqqCPwj2+lnJNXB3KVszA1vES1U Oct 29 00:44:59.200147 sshd-session[5058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:44:59.206809 systemd-logind[1578]: New session 12 of user core. Oct 29 00:44:59.212799 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 29 00:44:59.464062 sshd[5061]: Connection closed by 10.0.0.1 port 52852 Oct 29 00:44:59.464370 sshd-session[5058]: pam_unix(sshd:session): session closed for user core Oct 29 00:44:59.468986 systemd[1]: sshd@11-10.0.0.95:22-10.0.0.1:52852.service: Deactivated successfully. Oct 29 00:44:59.471344 systemd[1]: session-12.scope: Deactivated successfully. Oct 29 00:44:59.473218 systemd-logind[1578]: Session 12 logged out. Waiting for processes to exit. Oct 29 00:44:59.475136 systemd-logind[1578]: Removed session 12. Oct 29 00:45:03.286214 containerd[1613]: time="2025-10-29T00:45:03.286171841Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 29 00:45:03.642907 containerd[1613]: time="2025-10-29T00:45:03.642842340Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:45:03.644041 containerd[1613]: time="2025-10-29T00:45:03.643998660Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 29 00:45:03.644105 containerd[1613]: time="2025-10-29T00:45:03.644083259Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 29 00:45:03.645759 kubelet[2761]: E1029 00:45:03.645716 2761 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 29 00:45:03.646261 kubelet[2761]: E1029 00:45:03.645769 2761 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 29 00:45:03.646261 kubelet[2761]: E1029 00:45:03.645836 2761 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-jzb5c_calico-system(225130ab-318f-4752-a3f6-b2cc6751e084): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 29 00:45:03.646261 kubelet[2761]: E1029 00:45:03.645865 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-jzb5c" podUID="225130ab-318f-4752-a3f6-b2cc6751e084" Oct 29 00:45:04.286149 containerd[1613]: time="2025-10-29T00:45:04.286044632Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 29 00:45:04.484829 systemd[1]: Started sshd@12-10.0.0.95:22-10.0.0.1:44472.service - OpenSSH per-connection server daemon (10.0.0.1:44472). 
Oct 29 00:45:04.543667 sshd[5083]: Accepted publickey for core from 10.0.0.1 port 44472 ssh2: RSA SHA256:s8tPwnTXOeMVzisbNqqCPwj2+lnJNXB3KVszA1vES1U Oct 29 00:45:04.545228 sshd-session[5083]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:45:04.550236 systemd-logind[1578]: New session 13 of user core. Oct 29 00:45:04.558747 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 29 00:45:04.605605 containerd[1613]: time="2025-10-29T00:45:04.605547407Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:45:04.606824 containerd[1613]: time="2025-10-29T00:45:04.606733193Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 29 00:45:04.606824 containerd[1613]: time="2025-10-29T00:45:04.606769691Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 29 00:45:04.606964 kubelet[2761]: E1029 00:45:04.606922 2761 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 00:45:04.607058 kubelet[2761]: E1029 00:45:04.606972 2761 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 
00:45:04.607326 kubelet[2761]: E1029 00:45:04.607061 2761 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6d776986cb-zqjs2_calico-apiserver(fc5d07a9-e55f-4ffc-bf62-2522098f23ea): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 29 00:45:04.607326 kubelet[2761]: E1029 00:45:04.607096 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d776986cb-zqjs2" podUID="fc5d07a9-e55f-4ffc-bf62-2522098f23ea" Oct 29 00:45:04.672947 sshd[5086]: Connection closed by 10.0.0.1 port 44472 Oct 29 00:45:04.673265 sshd-session[5083]: pam_unix(sshd:session): session closed for user core Oct 29 00:45:04.678285 systemd[1]: sshd@12-10.0.0.95:22-10.0.0.1:44472.service: Deactivated successfully. Oct 29 00:45:04.680240 systemd[1]: session-13.scope: Deactivated successfully. Oct 29 00:45:04.681010 systemd-logind[1578]: Session 13 logged out. Waiting for processes to exit. Oct 29 00:45:04.682215 systemd-logind[1578]: Removed session 13. 
Oct 29 00:45:06.289175 containerd[1613]: time="2025-10-29T00:45:06.289099177Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 29 00:45:06.651744 containerd[1613]: time="2025-10-29T00:45:06.651595830Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:45:06.652995 containerd[1613]: time="2025-10-29T00:45:06.652932789Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 29 00:45:06.652995 containerd[1613]: time="2025-10-29T00:45:06.652982853Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 29 00:45:06.653312 kubelet[2761]: E1029 00:45:06.653227 2761 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 29 00:45:06.653748 kubelet[2761]: E1029 00:45:06.653320 2761 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 29 00:45:06.653748 kubelet[2761]: E1029 00:45:06.653458 2761 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod 
calico-kube-controllers-557bfcb88d-x5jf7_calico-system(13741636-ed02-4263-9179-b37fa6d45218): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 29 00:45:06.653748 kubelet[2761]: E1029 00:45:06.653515 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-557bfcb88d-x5jf7" podUID="13741636-ed02-4263-9179-b37fa6d45218" Oct 29 00:45:07.287633 containerd[1613]: time="2025-10-29T00:45:07.287590421Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 29 00:45:07.604908 containerd[1613]: time="2025-10-29T00:45:07.604863869Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:45:07.606184 containerd[1613]: time="2025-10-29T00:45:07.606121879Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 29 00:45:07.606297 containerd[1613]: time="2025-10-29T00:45:07.606136888Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 29 00:45:07.606433 kubelet[2761]: E1029 00:45:07.606340 2761 log.go:32] "PullImage from image service failed" err="rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 00:45:07.606433 kubelet[2761]: E1029 00:45:07.606380 2761 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 00:45:07.606506 kubelet[2761]: E1029 00:45:07.606456 2761 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6d776986cb-mkzq4_calico-apiserver(a2ccb739-fac3-4c48-b3e6-829701cb5ed8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 29 00:45:07.606506 kubelet[2761]: E1029 00:45:07.606485 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d776986cb-mkzq4" podUID="a2ccb739-fac3-4c48-b3e6-829701cb5ed8" Oct 29 00:45:09.285468 containerd[1613]: time="2025-10-29T00:45:09.285421667Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 29 00:45:09.649938 containerd[1613]: time="2025-10-29T00:45:09.649809646Z" level=info 
msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:45:09.651456 containerd[1613]: time="2025-10-29T00:45:09.651389550Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 29 00:45:09.651639 containerd[1613]: time="2025-10-29T00:45:09.651493505Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 29 00:45:09.651713 kubelet[2761]: E1029 00:45:09.651658 2761 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 29 00:45:09.651713 kubelet[2761]: E1029 00:45:09.651700 2761 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 29 00:45:09.652291 kubelet[2761]: E1029 00:45:09.652254 2761 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-nsdz2_calico-system(7c6565b5-13e1-473b-b977-3ab4cab19c9a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 29 00:45:09.653203 containerd[1613]: time="2025-10-29T00:45:09.653172967Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 29 00:45:09.697951 systemd[1]: Started sshd@13-10.0.0.95:22-10.0.0.1:44488.service - OpenSSH per-connection server daemon (10.0.0.1:44488). Oct 29 00:45:09.753510 sshd[5107]: Accepted publickey for core from 10.0.0.1 port 44488 ssh2: RSA SHA256:s8tPwnTXOeMVzisbNqqCPwj2+lnJNXB3KVszA1vES1U Oct 29 00:45:09.754896 sshd-session[5107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:45:09.759615 systemd-logind[1578]: New session 14 of user core. Oct 29 00:45:09.774713 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 29 00:45:09.894757 sshd[5110]: Connection closed by 10.0.0.1 port 44488 Oct 29 00:45:09.895253 sshd-session[5107]: pam_unix(sshd:session): session closed for user core Oct 29 00:45:09.900386 systemd[1]: sshd@13-10.0.0.95:22-10.0.0.1:44488.service: Deactivated successfully. Oct 29 00:45:09.902856 systemd[1]: session-14.scope: Deactivated successfully. Oct 29 00:45:09.903779 systemd-logind[1578]: Session 14 logged out. Waiting for processes to exit. Oct 29 00:45:09.905345 systemd-logind[1578]: Removed session 14. 
Oct 29 00:45:09.996936 containerd[1613]: time="2025-10-29T00:45:09.996889607Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 00:45:09.998058 containerd[1613]: time="2025-10-29T00:45:09.997997045Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 29 00:45:09.998058 containerd[1613]: time="2025-10-29T00:45:09.998037140Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 29 00:45:09.998237 kubelet[2761]: E1029 00:45:09.998192 2761 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 29 00:45:09.998237 kubelet[2761]: E1029 00:45:09.998235 2761 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 29 00:45:09.998351 kubelet[2761]: E1029 00:45:09.998311 2761 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-nsdz2_calico-system(7c6565b5-13e1-473b-b977-3ab4cab19c9a): ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 29 00:45:09.998416 kubelet[2761]: E1029 00:45:09.998349 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-nsdz2" podUID="7c6565b5-13e1-473b-b977-3ab4cab19c9a" Oct 29 00:45:12.287056 kubelet[2761]: E1029 00:45:12.286944 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-787c59bf7c-jznwc" podUID="8c66d588-d751-4050-9adc-78c4164786df" Oct 29 00:45:14.912195 systemd[1]: Started sshd@14-10.0.0.95:22-10.0.0.1:44854.service - OpenSSH per-connection server daemon (10.0.0.1:44854). Oct 29 00:45:14.996078 sshd[5126]: Accepted publickey for core from 10.0.0.1 port 44854 ssh2: RSA SHA256:s8tPwnTXOeMVzisbNqqCPwj2+lnJNXB3KVszA1vES1U Oct 29 00:45:14.997721 sshd-session[5126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:45:15.003045 systemd-logind[1578]: New session 15 of user core. Oct 29 00:45:15.011760 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 29 00:45:15.140073 sshd[5129]: Connection closed by 10.0.0.1 port 44854 Oct 29 00:45:15.140414 sshd-session[5126]: pam_unix(sshd:session): session closed for user core Oct 29 00:45:15.144777 systemd[1]: sshd@14-10.0.0.95:22-10.0.0.1:44854.service: Deactivated successfully. Oct 29 00:45:15.146926 systemd[1]: session-15.scope: Deactivated successfully. Oct 29 00:45:15.147771 systemd-logind[1578]: Session 15 logged out. Waiting for processes to exit. Oct 29 00:45:15.149239 systemd-logind[1578]: Removed session 15. 
Oct 29 00:45:16.285813 kubelet[2761]: E1029 00:45:16.285708 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-jzb5c" podUID="225130ab-318f-4752-a3f6-b2cc6751e084" Oct 29 00:45:19.285807 kubelet[2761]: E1029 00:45:19.285680 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-557bfcb88d-x5jf7" podUID="13741636-ed02-4263-9179-b37fa6d45218" Oct 29 00:45:19.285807 kubelet[2761]: E1029 00:45:19.285784 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d776986cb-zqjs2" podUID="fc5d07a9-e55f-4ffc-bf62-2522098f23ea" Oct 29 00:45:20.155548 systemd[1]: Started 
sshd@15-10.0.0.95:22-10.0.0.1:50904.service - OpenSSH per-connection server daemon (10.0.0.1:50904). Oct 29 00:45:20.211767 sshd[5146]: Accepted publickey for core from 10.0.0.1 port 50904 ssh2: RSA SHA256:s8tPwnTXOeMVzisbNqqCPwj2+lnJNXB3KVszA1vES1U Oct 29 00:45:20.213272 sshd-session[5146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:45:20.218377 systemd-logind[1578]: New session 16 of user core. Oct 29 00:45:20.227721 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 29 00:45:20.353384 sshd[5149]: Connection closed by 10.0.0.1 port 50904 Oct 29 00:45:20.353735 sshd-session[5146]: pam_unix(sshd:session): session closed for user core Oct 29 00:45:20.358953 systemd[1]: sshd@15-10.0.0.95:22-10.0.0.1:50904.service: Deactivated successfully. Oct 29 00:45:20.361385 systemd[1]: session-16.scope: Deactivated successfully. Oct 29 00:45:20.362463 systemd-logind[1578]: Session 16 logged out. Waiting for processes to exit. Oct 29 00:45:20.363938 systemd-logind[1578]: Removed session 16. 
Oct 29 00:45:21.285969 kubelet[2761]: E1029 00:45:21.285415 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 29 00:45:21.297105 containerd[1613]: time="2025-10-29T00:45:21.297061849Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5a9874c8e012706d55abea68370e001a4851b8e4e3be9a66ff3f656fa47e379b\" id:\"94a505db6a7adad5ae0b6460f40e5e64fe3b5a6717a0305b3ad7e8c33e4f7495\" pid:5174 exited_at:{seconds:1761698721 nanos:296763435}"
Oct 29 00:45:22.285485 kubelet[2761]: E1029 00:45:22.285254 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d776986cb-mkzq4" podUID="a2ccb739-fac3-4c48-b3e6-829701cb5ed8"
Oct 29 00:45:24.285832 kubelet[2761]: E1029 00:45:24.285757 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-nsdz2" podUID="7c6565b5-13e1-473b-b977-3ab4cab19c9a"
Oct 29 00:45:25.366740 systemd[1]: Started sshd@16-10.0.0.95:22-10.0.0.1:50916.service - OpenSSH per-connection server daemon (10.0.0.1:50916).
Oct 29 00:45:25.436382 sshd[5188]: Accepted publickey for core from 10.0.0.1 port 50916 ssh2: RSA SHA256:s8tPwnTXOeMVzisbNqqCPwj2+lnJNXB3KVszA1vES1U
Oct 29 00:45:25.438210 sshd-session[5188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 29 00:45:25.442646 systemd-logind[1578]: New session 17 of user core.
Oct 29 00:45:25.445715 systemd[1]: Started session-17.scope - Session 17 of User core.
Oct 29 00:45:25.587170 sshd[5191]: Connection closed by 10.0.0.1 port 50916
Oct 29 00:45:25.589648 sshd-session[5188]: pam_unix(sshd:session): session closed for user core
Oct 29 00:45:25.597703 systemd[1]: sshd@16-10.0.0.95:22-10.0.0.1:50916.service: Deactivated successfully.
Oct 29 00:45:25.600552 systemd[1]: session-17.scope: Deactivated successfully.
Oct 29 00:45:25.603044 systemd-logind[1578]: Session 17 logged out. Waiting for processes to exit.
Oct 29 00:45:25.605785 systemd[1]: Started sshd@17-10.0.0.95:22-10.0.0.1:50926.service - OpenSSH per-connection server daemon (10.0.0.1:50926).
Oct 29 00:45:25.608113 systemd-logind[1578]: Removed session 17.
Oct 29 00:45:25.656695 sshd[5204]: Accepted publickey for core from 10.0.0.1 port 50926 ssh2: RSA SHA256:s8tPwnTXOeMVzisbNqqCPwj2+lnJNXB3KVszA1vES1U
Oct 29 00:45:25.658387 sshd-session[5204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 29 00:45:25.663085 systemd-logind[1578]: New session 18 of user core.
Oct 29 00:45:25.674735 systemd[1]: Started session-18.scope - Session 18 of User core.
Oct 29 00:45:25.962590 sshd[5207]: Connection closed by 10.0.0.1 port 50926
Oct 29 00:45:25.963215 sshd-session[5204]: pam_unix(sshd:session): session closed for user core
Oct 29 00:45:25.974625 systemd[1]: sshd@17-10.0.0.95:22-10.0.0.1:50926.service: Deactivated successfully.
Oct 29 00:45:25.976721 systemd[1]: session-18.scope: Deactivated successfully.
Oct 29 00:45:25.977672 systemd-logind[1578]: Session 18 logged out. Waiting for processes to exit.
Oct 29 00:45:25.980604 systemd[1]: Started sshd@18-10.0.0.95:22-10.0.0.1:50936.service - OpenSSH per-connection server daemon (10.0.0.1:50936).
Oct 29 00:45:25.981284 systemd-logind[1578]: Removed session 18.
Oct 29 00:45:26.056212 sshd[5219]: Accepted publickey for core from 10.0.0.1 port 50936 ssh2: RSA SHA256:s8tPwnTXOeMVzisbNqqCPwj2+lnJNXB3KVszA1vES1U
Oct 29 00:45:26.058125 sshd-session[5219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 29 00:45:26.063363 systemd-logind[1578]: New session 19 of user core.
Oct 29 00:45:26.077757 systemd[1]: Started session-19.scope - Session 19 of User core.
Oct 29 00:45:26.793781 sshd[5222]: Connection closed by 10.0.0.1 port 50936
Oct 29 00:45:26.794093 sshd-session[5219]: pam_unix(sshd:session): session closed for user core
Oct 29 00:45:26.807498 systemd[1]: sshd@18-10.0.0.95:22-10.0.0.1:50936.service: Deactivated successfully.
Oct 29 00:45:26.810512 systemd[1]: session-19.scope: Deactivated successfully.
Oct 29 00:45:26.811380 systemd-logind[1578]: Session 19 logged out. Waiting for processes to exit.
Oct 29 00:45:26.816921 systemd[1]: Started sshd@19-10.0.0.95:22-10.0.0.1:50950.service - OpenSSH per-connection server daemon (10.0.0.1:50950).
Oct 29 00:45:26.817773 systemd-logind[1578]: Removed session 19.
Oct 29 00:45:26.867513 sshd[5240]: Accepted publickey for core from 10.0.0.1 port 50950 ssh2: RSA SHA256:s8tPwnTXOeMVzisbNqqCPwj2+lnJNXB3KVszA1vES1U
Oct 29 00:45:26.869443 sshd-session[5240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 29 00:45:26.874103 systemd-logind[1578]: New session 20 of user core.
Oct 29 00:45:26.881728 systemd[1]: Started session-20.scope - Session 20 of User core.
Oct 29 00:45:27.108188 sshd[5243]: Connection closed by 10.0.0.1 port 50950
Oct 29 00:45:27.110051 sshd-session[5240]: pam_unix(sshd:session): session closed for user core
Oct 29 00:45:27.120801 systemd[1]: sshd@19-10.0.0.95:22-10.0.0.1:50950.service: Deactivated successfully.
Oct 29 00:45:27.123596 systemd[1]: session-20.scope: Deactivated successfully.
Oct 29 00:45:27.124689 systemd-logind[1578]: Session 20 logged out. Waiting for processes to exit.
Oct 29 00:45:27.128258 systemd[1]: Started sshd@20-10.0.0.95:22-10.0.0.1:50956.service - OpenSSH per-connection server daemon (10.0.0.1:50956).
Oct 29 00:45:27.128958 systemd-logind[1578]: Removed session 20.
Oct 29 00:45:27.183434 sshd[5254]: Accepted publickey for core from 10.0.0.1 port 50956 ssh2: RSA SHA256:s8tPwnTXOeMVzisbNqqCPwj2+lnJNXB3KVszA1vES1U
Oct 29 00:45:27.185145 sshd-session[5254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 29 00:45:27.190217 systemd-logind[1578]: New session 21 of user core.
Oct 29 00:45:27.207792 systemd[1]: Started session-21.scope - Session 21 of User core.
Oct 29 00:45:27.285604 kubelet[2761]: E1029 00:45:27.285550 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 29 00:45:27.286614 containerd[1613]: time="2025-10-29T00:45:27.286507577Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Oct 29 00:45:27.331441 sshd[5257]: Connection closed by 10.0.0.1 port 50956
Oct 29 00:45:27.331812 sshd-session[5254]: pam_unix(sshd:session): session closed for user core
Oct 29 00:45:27.335904 systemd[1]: sshd@20-10.0.0.95:22-10.0.0.1:50956.service: Deactivated successfully.
Oct 29 00:45:27.338192 systemd[1]: session-21.scope: Deactivated successfully.
Oct 29 00:45:27.339933 systemd-logind[1578]: Session 21 logged out. Waiting for processes to exit.
Oct 29 00:45:27.341259 systemd-logind[1578]: Removed session 21.
Oct 29 00:45:27.631723 containerd[1613]: time="2025-10-29T00:45:27.631675086Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 29 00:45:27.632715 containerd[1613]: time="2025-10-29T00:45:27.632684774Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Oct 29 00:45:27.632781 containerd[1613]: time="2025-10-29T00:45:27.632753797Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Oct 29 00:45:27.632943 kubelet[2761]: E1029 00:45:27.632894 2761 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Oct 29 00:45:27.632943 kubelet[2761]: E1029 00:45:27.632939 2761 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Oct 29 00:45:27.633029 kubelet[2761]: E1029 00:45:27.633017 2761 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-787c59bf7c-jznwc_calico-system(8c66d588-d751-4050-9adc-78c4164786df): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Oct 29 00:45:27.634614 containerd[1613]: time="2025-10-29T00:45:27.634550966Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Oct 29 00:45:28.131012 containerd[1613]: time="2025-10-29T00:45:28.130931885Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 29 00:45:28.132101 containerd[1613]: time="2025-10-29T00:45:28.132051673Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Oct 29 00:45:28.132101 containerd[1613]: time="2025-10-29T00:45:28.132080898Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Oct 29 00:45:28.132333 kubelet[2761]: E1029 00:45:28.132280 2761 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Oct 29 00:45:28.132333 kubelet[2761]: E1029 00:45:28.132329 2761 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Oct 29 00:45:28.132457 kubelet[2761]: E1029 00:45:28.132407 2761 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-787c59bf7c-jznwc_calico-system(8c66d588-d751-4050-9adc-78c4164786df): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Oct 29 00:45:28.132486 kubelet[2761]: E1029 00:45:28.132446 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-787c59bf7c-jznwc" podUID="8c66d588-d751-4050-9adc-78c4164786df"
Oct 29 00:45:28.286878 containerd[1613]: time="2025-10-29T00:45:28.286711228Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Oct 29 00:45:28.678716 containerd[1613]: time="2025-10-29T00:45:28.678656652Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 29 00:45:28.679810 containerd[1613]: time="2025-10-29T00:45:28.679757053Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Oct 29 00:45:28.679977 containerd[1613]: time="2025-10-29T00:45:28.679807509Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Oct 29 00:45:28.680039 kubelet[2761]: E1029 00:45:28.679986 2761 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Oct 29 00:45:28.680362 kubelet[2761]: E1029 00:45:28.680047 2761 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Oct 29 00:45:28.680362 kubelet[2761]: E1029 00:45:28.680154 2761 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-jzb5c_calico-system(225130ab-318f-4752-a3f6-b2cc6751e084): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Oct 29 00:45:28.680362 kubelet[2761]: E1029 00:45:28.680195 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-jzb5c" podUID="225130ab-318f-4752-a3f6-b2cc6751e084"
Oct 29 00:45:32.286283 containerd[1613]: time="2025-10-29T00:45:32.286144137Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Oct 29 00:45:32.347145 systemd[1]: Started sshd@21-10.0.0.95:22-10.0.0.1:35890.service - OpenSSH per-connection server daemon (10.0.0.1:35890).
Oct 29 00:45:32.419388 sshd[5280]: Accepted publickey for core from 10.0.0.1 port 35890 ssh2: RSA SHA256:s8tPwnTXOeMVzisbNqqCPwj2+lnJNXB3KVszA1vES1U
Oct 29 00:45:32.421284 sshd-session[5280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 29 00:45:32.425660 systemd-logind[1578]: New session 22 of user core.
Oct 29 00:45:32.435736 systemd[1]: Started session-22.scope - Session 22 of User core.
Oct 29 00:45:32.553302 sshd[5283]: Connection closed by 10.0.0.1 port 35890
Oct 29 00:45:32.553690 sshd-session[5280]: pam_unix(sshd:session): session closed for user core
Oct 29 00:45:32.558738 systemd[1]: sshd@21-10.0.0.95:22-10.0.0.1:35890.service: Deactivated successfully.
Oct 29 00:45:32.560906 systemd[1]: session-22.scope: Deactivated successfully.
Oct 29 00:45:32.561971 systemd-logind[1578]: Session 22 logged out. Waiting for processes to exit.
Oct 29 00:45:32.563446 systemd-logind[1578]: Removed session 22.
Oct 29 00:45:32.675700 containerd[1613]: time="2025-10-29T00:45:32.675649704Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 29 00:45:32.799869 containerd[1613]: time="2025-10-29T00:45:32.799802567Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Oct 29 00:45:32.799948 containerd[1613]: time="2025-10-29T00:45:32.799815853Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Oct 29 00:45:32.800164 kubelet[2761]: E1029 00:45:32.800122 2761 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Oct 29 00:45:32.800164 kubelet[2761]: E1029 00:45:32.800163 2761 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Oct 29 00:45:32.800672 kubelet[2761]: E1029 00:45:32.800240 2761 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-557bfcb88d-x5jf7_calico-system(13741636-ed02-4263-9179-b37fa6d45218): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Oct 29 00:45:32.800672 kubelet[2761]: E1029 00:45:32.800270 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-557bfcb88d-x5jf7" podUID="13741636-ed02-4263-9179-b37fa6d45218"
Oct 29 00:45:33.286045 containerd[1613]: time="2025-10-29T00:45:33.285791750Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Oct 29 00:45:33.601788 containerd[1613]: time="2025-10-29T00:45:33.601738530Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 29 00:45:33.602957 containerd[1613]: time="2025-10-29T00:45:33.602919539Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Oct 29 00:45:33.603012 containerd[1613]: time="2025-10-29T00:45:33.602932013Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Oct 29 00:45:33.603204 kubelet[2761]: E1029 00:45:33.603155 2761 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Oct 29 00:45:33.603204 kubelet[2761]: E1029 00:45:33.603197 2761 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Oct 29 00:45:33.603293 kubelet[2761]: E1029 00:45:33.603270 2761 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6d776986cb-zqjs2_calico-apiserver(fc5d07a9-e55f-4ffc-bf62-2522098f23ea): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Oct 29 00:45:33.603321 kubelet[2761]: E1029 00:45:33.603302 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d776986cb-zqjs2" podUID="fc5d07a9-e55f-4ffc-bf62-2522098f23ea"
Oct 29 00:45:36.287971 containerd[1613]: time="2025-10-29T00:45:36.287676746Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Oct 29 00:45:36.662611 containerd[1613]: time="2025-10-29T00:45:36.662522908Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 29 00:45:36.663609 containerd[1613]: time="2025-10-29T00:45:36.663522859Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Oct 29 00:45:36.663609 containerd[1613]: time="2025-10-29T00:45:36.663559338Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Oct 29 00:45:36.663810 kubelet[2761]: E1029 00:45:36.663736 2761 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Oct 29 00:45:36.663810 kubelet[2761]: E1029 00:45:36.663778 2761 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Oct 29 00:45:36.664204 kubelet[2761]: E1029 00:45:36.663854 2761 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6d776986cb-mkzq4_calico-apiserver(a2ccb739-fac3-4c48-b3e6-829701cb5ed8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Oct 29 00:45:36.664204 kubelet[2761]: E1029 00:45:36.663885 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d776986cb-mkzq4" podUID="a2ccb739-fac3-4c48-b3e6-829701cb5ed8"
Oct 29 00:45:37.572339 systemd[1]: Started sshd@22-10.0.0.95:22-10.0.0.1:35894.service - OpenSSH per-connection server daemon (10.0.0.1:35894).
Oct 29 00:45:37.624723 sshd[5298]: Accepted publickey for core from 10.0.0.1 port 35894 ssh2: RSA SHA256:s8tPwnTXOeMVzisbNqqCPwj2+lnJNXB3KVszA1vES1U
Oct 29 00:45:37.626921 sshd-session[5298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 29 00:45:37.632001 systemd-logind[1578]: New session 23 of user core.
Oct 29 00:45:37.640762 systemd[1]: Started session-23.scope - Session 23 of User core.
Oct 29 00:45:37.755435 sshd[5301]: Connection closed by 10.0.0.1 port 35894
Oct 29 00:45:37.755988 sshd-session[5298]: pam_unix(sshd:session): session closed for user core
Oct 29 00:45:37.761088 systemd[1]: sshd@22-10.0.0.95:22-10.0.0.1:35894.service: Deactivated successfully.
Oct 29 00:45:37.763390 systemd[1]: session-23.scope: Deactivated successfully.
Oct 29 00:45:37.764292 systemd-logind[1578]: Session 23 logged out. Waiting for processes to exit.
Oct 29 00:45:37.765509 systemd-logind[1578]: Removed session 23.
Oct 29 00:45:39.284745 kubelet[2761]: E1029 00:45:39.284643 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 29 00:45:39.287054 containerd[1613]: time="2025-10-29T00:45:39.287006038Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Oct 29 00:45:39.697232 containerd[1613]: time="2025-10-29T00:45:39.697178016Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 29 00:45:39.698296 containerd[1613]: time="2025-10-29T00:45:39.698258397Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Oct 29 00:45:39.698296 containerd[1613]: time="2025-10-29T00:45:39.698282343Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Oct 29 00:45:39.698468 kubelet[2761]: E1029 00:45:39.698431 2761 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Oct 29 00:45:39.698522 kubelet[2761]: E1029 00:45:39.698476 2761 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Oct 29 00:45:39.698617 kubelet[2761]: E1029 00:45:39.698555 2761 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-nsdz2_calico-system(7c6565b5-13e1-473b-b977-3ab4cab19c9a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Oct 29 00:45:39.699343 containerd[1613]: time="2025-10-29T00:45:39.699313179Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Oct 29 00:45:40.048739 containerd[1613]: time="2025-10-29T00:45:40.048679961Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 29 00:45:40.171066 containerd[1613]: time="2025-10-29T00:45:40.170994798Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Oct 29 00:45:40.171193 containerd[1613]: time="2025-10-29T00:45:40.171059512Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Oct 29 00:45:40.171247 kubelet[2761]: E1029 00:45:40.171220 2761 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Oct 29 00:45:40.171287 kubelet[2761]: E1029 00:45:40.171261 2761 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Oct 29 00:45:40.171359 kubelet[2761]: E1029 00:45:40.171331 2761 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-nsdz2_calico-system(7c6565b5-13e1-473b-b977-3ab4cab19c9a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Oct 29 00:45:40.171481 kubelet[2761]: E1029 00:45:40.171373 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-nsdz2" podUID="7c6565b5-13e1-473b-b977-3ab4cab19c9a"
Oct 29 00:45:40.285337 kubelet[2761]: E1029 00:45:40.284783 2761 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 29 00:45:40.285837 kubelet[2761]: E1029 00:45:40.285798 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-787c59bf7c-jznwc" podUID="8c66d588-d751-4050-9adc-78c4164786df"
Oct 29 00:45:41.288042 kubelet[2761]: E1029 00:45:41.287386 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-jzb5c" podUID="225130ab-318f-4752-a3f6-b2cc6751e084"
Oct 29 00:45:42.772740 systemd[1]: Started sshd@23-10.0.0.95:22-10.0.0.1:42590.service - OpenSSH per-connection server daemon (10.0.0.1:42590).
Oct 29 00:45:42.834035 sshd[5314]: Accepted publickey for core from 10.0.0.1 port 42590 ssh2: RSA SHA256:s8tPwnTXOeMVzisbNqqCPwj2+lnJNXB3KVszA1vES1U Oct 29 00:45:42.835278 sshd-session[5314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 00:45:42.839823 systemd-logind[1578]: New session 24 of user core. Oct 29 00:45:42.850733 systemd[1]: Started session-24.scope - Session 24 of User core. Oct 29 00:45:42.973171 sshd[5317]: Connection closed by 10.0.0.1 port 42590 Oct 29 00:45:42.973483 sshd-session[5314]: pam_unix(sshd:session): session closed for user core Oct 29 00:45:42.978142 systemd[1]: sshd@23-10.0.0.95:22-10.0.0.1:42590.service: Deactivated successfully. Oct 29 00:45:42.980265 systemd[1]: session-24.scope: Deactivated successfully. Oct 29 00:45:42.981303 systemd-logind[1578]: Session 24 logged out. Waiting for processes to exit. Oct 29 00:45:42.982628 systemd-logind[1578]: Removed session 24. Oct 29 00:45:44.285472 kubelet[2761]: E1029 00:45:44.285338 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d776986cb-zqjs2" podUID="fc5d07a9-e55f-4ffc-bf62-2522098f23ea" Oct 29 00:45:44.286435 kubelet[2761]: E1029 00:45:44.285987 2761 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-557bfcb88d-x5jf7" podUID="13741636-ed02-4263-9179-b37fa6d45218"