Nov 1 00:25:35.410386 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Oct 31 22:41:55 -00 2025
Nov 1 00:25:35.410411 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478
Nov 1 00:25:35.410423 kernel: BIOS-provided physical RAM map:
Nov 1 00:25:35.410429 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Nov 1 00:25:35.410435 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Nov 1 00:25:35.410442 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Nov 1 00:25:35.410449 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Nov 1 00:25:35.410457 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Nov 1 00:25:35.410465 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Nov 1 00:25:35.410472 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Nov 1 00:25:35.410482 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Nov 1 00:25:35.410489 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Nov 1 00:25:35.410498 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Nov 1 00:25:35.410505 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Nov 1 00:25:35.410513 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Nov 1 00:25:35.410531 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Nov 1 00:25:35.410541 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Nov 1 00:25:35.410548 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Nov 1 00:25:35.410555 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Nov 1 00:25:35.410562 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Nov 1 00:25:35.410569 kernel: NX (Execute Disable) protection: active
Nov 1 00:25:35.410576 kernel: APIC: Static calls initialized
Nov 1 00:25:35.410583 kernel: efi: EFI v2.7 by EDK II
Nov 1 00:25:35.410590 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198
Nov 1 00:25:35.410597 kernel: SMBIOS 2.8 present.
Nov 1 00:25:35.410604 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Nov 1 00:25:35.410611 kernel: Hypervisor detected: KVM
Nov 1 00:25:35.410620 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 1 00:25:35.410627 kernel: kvm-clock: using sched offset of 8223692610 cycles
Nov 1 00:25:35.410635 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 1 00:25:35.410642 kernel: tsc: Detected 2794.750 MHz processor
Nov 1 00:25:35.410649 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 1 00:25:35.410657 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 1 00:25:35.410664 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Nov 1 00:25:35.410671 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Nov 1 00:25:35.410678 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 1 00:25:35.410688 kernel: Using GB pages for direct mapping
Nov 1 00:25:35.410695 kernel: Secure boot disabled
Nov 1 00:25:35.410702 kernel: ACPI: Early table checksum verification disabled
Nov 1 00:25:35.410709 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Nov 1 00:25:35.410720 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Nov 1 00:25:35.410728 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:25:35.410735 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:25:35.410745 kernel: ACPI: FACS 0x000000009CBDD000 000040
Nov 1 00:25:35.410752 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:25:35.410763 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:25:35.410771 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:25:35.410778 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:25:35.410785 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Nov 1 00:25:35.410793 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Nov 1 00:25:35.410803 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Nov 1 00:25:35.410810 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Nov 1 00:25:35.410817 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Nov 1 00:25:35.410825 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Nov 1 00:25:35.410832 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Nov 1 00:25:35.410839 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Nov 1 00:25:35.410857 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Nov 1 00:25:35.410866 kernel: No NUMA configuration found
Nov 1 00:25:35.410876 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Nov 1 00:25:35.410887 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Nov 1 00:25:35.410895 kernel: Zone ranges:
Nov 1 00:25:35.410902 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 1 00:25:35.410910 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Nov 1 00:25:35.410917 kernel: Normal empty
Nov 1 00:25:35.410925 kernel: Movable zone start for each node
Nov 1 00:25:35.410932 kernel: Early memory node ranges
Nov 1 00:25:35.410939 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Nov 1 00:25:35.410946 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Nov 1 00:25:35.410954 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Nov 1 00:25:35.410964 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Nov 1 00:25:35.410971 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Nov 1 00:25:35.410978 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Nov 1 00:25:35.410986 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Nov 1 00:25:35.410993 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 1 00:25:35.411000 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Nov 1 00:25:35.411008 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Nov 1 00:25:35.411015 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 1 00:25:35.411022 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Nov 1 00:25:35.411032 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Nov 1 00:25:35.411040 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Nov 1 00:25:35.411047 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 1 00:25:35.411055 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 1 00:25:35.411062 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 1 00:25:35.411069 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 1 00:25:35.411077 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 1 00:25:35.411084 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 1 00:25:35.411091 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 1 00:25:35.411101 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 1 00:25:35.411109 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 1 00:25:35.411116 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 1 00:25:35.411123 kernel: TSC deadline timer available
Nov 1 00:25:35.411131 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Nov 1 00:25:35.411138 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 1 00:25:35.411145 kernel: kvm-guest: KVM setup pv remote TLB flush
Nov 1 00:25:35.411153 kernel: kvm-guest: setup PV sched yield
Nov 1 00:25:35.411160 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Nov 1 00:25:35.411170 kernel: Booting paravirtualized kernel on KVM
Nov 1 00:25:35.411178 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 1 00:25:35.411185 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Nov 1 00:25:35.411193 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u524288
Nov 1 00:25:35.411200 kernel: pcpu-alloc: s196712 r8192 d32664 u524288 alloc=1*2097152
Nov 1 00:25:35.411207 kernel: pcpu-alloc: [0] 0 1 2 3
Nov 1 00:25:35.411215 kernel: kvm-guest: PV spinlocks enabled
Nov 1 00:25:35.411222 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 1 00:25:35.411231 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478
Nov 1 00:25:35.411243 kernel: random: crng init done
Nov 1 00:25:35.411251 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 1 00:25:35.411265 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 1 00:25:35.411273 kernel: Fallback order for Node 0: 0
Nov 1 00:25:35.411281 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Nov 1 00:25:35.411288 kernel: Policy zone: DMA32
Nov 1 00:25:35.411296 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 1 00:25:35.411303 kernel: Memory: 2400600K/2567000K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42884K init, 2316K bss, 166140K reserved, 0K cma-reserved)
Nov 1 00:25:35.411313 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Nov 1 00:25:35.411321 kernel: ftrace: allocating 37980 entries in 149 pages
Nov 1 00:25:35.411328 kernel: ftrace: allocated 149 pages with 4 groups
Nov 1 00:25:35.411335 kernel: Dynamic Preempt: voluntary
Nov 1 00:25:35.411343 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 1 00:25:35.411359 kernel: rcu: RCU event tracing is enabled.
Nov 1 00:25:35.411370 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Nov 1 00:25:35.411377 kernel: Trampoline variant of Tasks RCU enabled.
Nov 1 00:25:35.411385 kernel: Rude variant of Tasks RCU enabled.
Nov 1 00:25:35.411393 kernel: Tracing variant of Tasks RCU enabled.
Nov 1 00:25:35.411401 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 1 00:25:35.411408 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Nov 1 00:25:35.411419 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Nov 1 00:25:35.411426 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 1 00:25:35.411434 kernel: Console: colour dummy device 80x25
Nov 1 00:25:35.411442 kernel: printk: console [ttyS0] enabled
Nov 1 00:25:35.411449 kernel: ACPI: Core revision 20230628
Nov 1 00:25:35.411460 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 1 00:25:35.411468 kernel: APIC: Switch to symmetric I/O mode setup
Nov 1 00:25:35.411475 kernel: x2apic enabled
Nov 1 00:25:35.411483 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 1 00:25:35.411491 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Nov 1 00:25:35.411498 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Nov 1 00:25:35.411506 kernel: kvm-guest: setup PV IPIs
Nov 1 00:25:35.411514 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 1 00:25:35.411537 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 1 00:25:35.411548 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Nov 1 00:25:35.411555 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 1 00:25:35.411563 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 1 00:25:35.411571 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 1 00:25:35.411579 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 1 00:25:35.411586 kernel: Spectre V2 : Mitigation: Retpolines
Nov 1 00:25:35.411594 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 1 00:25:35.411602 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 1 00:25:35.411609 kernel: active return thunk: retbleed_return_thunk
Nov 1 00:25:35.411620 kernel: RETBleed: Mitigation: untrained return thunk
Nov 1 00:25:35.411627 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 1 00:25:35.411635 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 1 00:25:35.411643 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 1 00:25:35.411654 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 1 00:25:35.411662 kernel: active return thunk: srso_return_thunk
Nov 1 00:25:35.411669 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 1 00:25:35.411677 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 1 00:25:35.411687 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 1 00:25:35.411695 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 1 00:25:35.411703 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 1 00:25:35.411710 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 1 00:25:35.411718 kernel: Freeing SMP alternatives memory: 32K
Nov 1 00:25:35.411726 kernel: pid_max: default: 32768 minimum: 301
Nov 1 00:25:35.411734 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 1 00:25:35.411741 kernel: landlock: Up and running.
Nov 1 00:25:35.411749 kernel: SELinux: Initializing.
Nov 1 00:25:35.411759 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 1 00:25:35.411767 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 1 00:25:35.411775 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 1 00:25:35.411783 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 1 00:25:35.411790 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 1 00:25:35.411798 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 1 00:25:35.411806 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 1 00:25:35.411814 kernel: ... version: 0
Nov 1 00:25:35.411821 kernel: ... bit width: 48
Nov 1 00:25:35.411831 kernel: ... generic registers: 6
Nov 1 00:25:35.411839 kernel: ... value mask: 0000ffffffffffff
Nov 1 00:25:35.411846 kernel: ... max period: 00007fffffffffff
Nov 1 00:25:35.411854 kernel: ... fixed-purpose events: 0
Nov 1 00:25:35.411862 kernel: ... event mask: 000000000000003f
Nov 1 00:25:35.411869 kernel: signal: max sigframe size: 1776
Nov 1 00:25:35.411877 kernel: rcu: Hierarchical SRCU implementation.
Nov 1 00:25:35.411885 kernel: rcu: Max phase no-delay instances is 400.
Nov 1 00:25:35.411892 kernel: smp: Bringing up secondary CPUs ...
Nov 1 00:25:35.411903 kernel: smpboot: x86: Booting SMP configuration:
Nov 1 00:25:35.411910 kernel: .... node #0, CPUs: #1 #2 #3
Nov 1 00:25:35.411918 kernel: smp: Brought up 1 node, 4 CPUs
Nov 1 00:25:35.411926 kernel: smpboot: Max logical packages: 1
Nov 1 00:25:35.411933 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Nov 1 00:25:35.411941 kernel: devtmpfs: initialized
Nov 1 00:25:35.411949 kernel: x86/mm: Memory block size: 128MB
Nov 1 00:25:35.411957 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Nov 1 00:25:35.411965 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Nov 1 00:25:35.411972 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Nov 1 00:25:35.411983 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Nov 1 00:25:35.411990 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Nov 1 00:25:35.411998 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 1 00:25:35.412006 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Nov 1 00:25:35.412014 kernel: pinctrl core: initialized pinctrl subsystem
Nov 1 00:25:35.412021 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 1 00:25:35.412029 kernel: audit: initializing netlink subsys (disabled)
Nov 1 00:25:35.412037 kernel: audit: type=2000 audit(1761956732.037:1): state=initialized audit_enabled=0 res=1
Nov 1 00:25:35.412047 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 1 00:25:35.412055 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 1 00:25:35.412063 kernel: cpuidle: using governor menu
Nov 1 00:25:35.412071 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 1 00:25:35.412079 kernel: dca service started, version 1.12.1
Nov 1 00:25:35.412086 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Nov 1 00:25:35.412094 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Nov 1 00:25:35.412102 kernel: PCI: Using configuration type 1 for base access
Nov 1 00:25:35.412110 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 1 00:25:35.412120 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 1 00:25:35.412128 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 1 00:25:35.412136 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 1 00:25:35.412143 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 1 00:25:35.412151 kernel: ACPI: Added _OSI(Module Device)
Nov 1 00:25:35.412159 kernel: ACPI: Added _OSI(Processor Device)
Nov 1 00:25:35.412167 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 1 00:25:35.412176 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 1 00:25:35.412183 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 1 00:25:35.412194 kernel: ACPI: Interpreter enabled
Nov 1 00:25:35.412201 kernel: ACPI: PM: (supports S0 S3 S5)
Nov 1 00:25:35.412209 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 1 00:25:35.412217 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 1 00:25:35.412225 kernel: PCI: Using E820 reservations for host bridge windows
Nov 1 00:25:35.412232 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 1 00:25:35.412240 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 1 00:25:35.412473 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 1 00:25:35.412631 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Nov 1 00:25:35.412769 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Nov 1 00:25:35.412780 kernel: PCI host bridge to bus 0000:00
Nov 1 00:25:35.412931 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 1 00:25:35.413050 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 1 00:25:35.413167 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 1 00:25:35.413294 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Nov 1 00:25:35.413416 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 1 00:25:35.413547 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Nov 1 00:25:35.413667 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 1 00:25:35.413822 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Nov 1 00:25:35.413970 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Nov 1 00:25:35.414107 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Nov 1 00:25:35.414243 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Nov 1 00:25:35.414380 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Nov 1 00:25:35.414512 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Nov 1 00:25:35.414657 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 1 00:25:35.414800 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Nov 1 00:25:35.414928 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Nov 1 00:25:35.415057 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Nov 1 00:25:35.415191 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Nov 1 00:25:35.415348 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Nov 1 00:25:35.415478 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Nov 1 00:25:35.415631 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Nov 1 00:25:35.415761 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Nov 1 00:25:35.415939 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Nov 1 00:25:35.416071 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Nov 1 00:25:35.416205 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Nov 1 00:25:35.416345 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Nov 1 00:25:35.416475 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Nov 1 00:25:35.416634 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Nov 1 00:25:35.416765 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 1 00:25:35.416910 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Nov 1 00:25:35.417039 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Nov 1 00:25:35.417173 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Nov 1 00:25:35.417325 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Nov 1 00:25:35.417465 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Nov 1 00:25:35.417476 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 1 00:25:35.417485 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 1 00:25:35.417493 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 1 00:25:35.417500 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 1 00:25:35.417508 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 1 00:25:35.417533 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 1 00:25:35.417541 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 1 00:25:35.417549 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 1 00:25:35.417556 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 1 00:25:35.417564 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 1 00:25:35.417572 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 1 00:25:35.417580 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 1 00:25:35.417587 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 1 00:25:35.417598 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 1 00:25:35.417606 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 1 00:25:35.417614 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 1 00:25:35.417622 kernel: iommu: Default domain type: Translated
Nov 1 00:25:35.417629 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 1 00:25:35.417637 kernel: efivars: Registered efivars operations
Nov 1 00:25:35.417645 kernel: PCI: Using ACPI for IRQ routing
Nov 1 00:25:35.417653 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 1 00:25:35.417660 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Nov 1 00:25:35.417668 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Nov 1 00:25:35.417678 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Nov 1 00:25:35.417686 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Nov 1 00:25:35.417818 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 1 00:25:35.417947 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 1 00:25:35.418075 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 1 00:25:35.418085 kernel: vgaarb: loaded
Nov 1 00:25:35.418093 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 1 00:25:35.418101 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 1 00:25:35.418113 kernel: clocksource: Switched to clocksource kvm-clock
Nov 1 00:25:35.418121 kernel: VFS: Disk quotas dquot_6.6.0
Nov 1 00:25:35.418129 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 1 00:25:35.418136 kernel: pnp: PnP ACPI init
Nov 1 00:25:35.418292 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Nov 1 00:25:35.418304 kernel: pnp: PnP ACPI: found 6 devices
Nov 1 00:25:35.418312 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 1 00:25:35.418320 kernel: NET: Registered PF_INET protocol family
Nov 1 00:25:35.418328 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 1 00:25:35.418339 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 1 00:25:35.418347 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 1 00:25:35.418355 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 1 00:25:35.418363 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 1 00:25:35.418371 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 1 00:25:35.418379 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 1 00:25:35.418387 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 1 00:25:35.418395 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 1 00:25:35.418405 kernel: NET: Registered PF_XDP protocol family
Nov 1 00:25:35.418580 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Nov 1 00:25:35.418710 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Nov 1 00:25:35.418828 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 1 00:25:35.418945 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 1 00:25:35.419061 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 1 00:25:35.419177 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Nov 1 00:25:35.419301 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Nov 1 00:25:35.419423 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Nov 1 00:25:35.419434 kernel: PCI: CLS 0 bytes, default 64
Nov 1 00:25:35.419442 kernel: Initialise system trusted keyrings
Nov 1 00:25:35.419449 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 1 00:25:35.419457 kernel: Key type asymmetric registered
Nov 1 00:25:35.419465 kernel: Asymmetric key parser 'x509' registered
Nov 1 00:25:35.419473 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Nov 1 00:25:35.419481 kernel: io scheduler mq-deadline registered
Nov 1 00:25:35.419488 kernel: io scheduler kyber registered
Nov 1 00:25:35.419499 kernel: io scheduler bfq registered
Nov 1 00:25:35.419507 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 1 00:25:35.419528 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 1 00:25:35.419536 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Nov 1 00:25:35.419544 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Nov 1 00:25:35.419552 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 1 00:25:35.419560 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 1 00:25:35.419568 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 1 00:25:35.419576 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 1 00:25:35.419587 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 1 00:25:35.419742 kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 1 00:25:35.419754 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 1 00:25:35.419874 kernel: rtc_cmos 00:04: registered as rtc0
Nov 1 00:25:35.419994 kernel: rtc_cmos 00:04: setting system clock to 2025-11-01T00:25:34 UTC (1761956734)
Nov 1 00:25:35.420005 kernel: hpet: Lost 2 RTC interrupts
Nov 1 00:25:35.420121 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Nov 1 00:25:35.420132 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 1 00:25:35.420143 kernel: efifb: probing for efifb
Nov 1 00:25:35.420151 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Nov 1 00:25:35.420159 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Nov 1 00:25:35.420167 kernel: efifb: scrolling: redraw
Nov 1 00:25:35.420174 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Nov 1 00:25:35.420182 kernel: Console: switching to colour frame buffer device 100x37
Nov 1 00:25:35.420209 kernel: fb0: EFI VGA frame buffer device
Nov 1 00:25:35.420219 kernel: pstore: Using crash dump compression: deflate
Nov 1 00:25:35.420229 kernel: pstore: Registered efi_pstore as persistent store backend
Nov 1 00:25:35.420239 kernel: NET: Registered PF_INET6 protocol family
Nov 1 00:25:35.420247 kernel: Segment Routing with IPv6
Nov 1 00:25:35.420264 kernel: In-situ OAM (IOAM) with IPv6
Nov 1 00:25:35.420272 kernel: NET: Registered PF_PACKET protocol family
Nov 1 00:25:35.420281 kernel: Key type dns_resolver registered
Nov 1 00:25:35.420289 kernel: IPI shorthand broadcast: enabled
Nov 1 00:25:35.420297 kernel: sched_clock: Marking stable (1918004950, 503965094)->(3618723095, -1196753051)
Nov 1 00:25:35.420305 kernel: registered taskstats version 1
Nov 1 00:25:35.420313 kernel: Loading compiled-in X.509 certificates
Nov 1 00:25:35.420324 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: cc4975b6f5d9e3149f7a95c8552b8f9120c3a1f4'
Nov 1 00:25:35.420332 kernel: Key type .fscrypt registered
Nov 1 00:25:35.420340 kernel: Key type fscrypt-provisioning registered
Nov 1 00:25:35.420348 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 1 00:25:35.420356 kernel: ima: Allocated hash algorithm: sha1
Nov 1 00:25:35.420364 kernel: ima: No architecture policies found
Nov 1 00:25:35.420372 kernel: clk: Disabling unused clocks
Nov 1 00:25:35.420380 kernel: Freeing unused kernel image (initmem) memory: 42884K
Nov 1 00:25:35.420388 kernel: Write protecting the kernel read-only data: 36864k
Nov 1 00:25:35.420399 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Nov 1 00:25:35.420407 kernel: Run /init as init process
Nov 1 00:25:35.420415 kernel: with arguments:
Nov 1 00:25:35.420423 kernel: /init
Nov 1 00:25:35.420432 kernel: with environment:
Nov 1 00:25:35.420442 kernel: HOME=/
Nov 1 00:25:35.420452 kernel: TERM=linux
Nov 1 00:25:35.420468 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 1 00:25:35.420484 systemd[1]: Detected virtualization kvm.
Nov 1 00:25:35.420495 systemd[1]: Detected architecture x86-64.
Nov 1 00:25:35.420505 systemd[1]: Running in initrd.
Nov 1 00:25:35.420533 systemd[1]: No hostname configured, using default hostname.
Nov 1 00:25:35.420544 systemd[1]: Hostname set to .
Nov 1 00:25:35.420558 systemd[1]: Initializing machine ID from VM UUID.
Nov 1 00:25:35.420569 systemd[1]: Queued start job for default target initrd.target.
Nov 1 00:25:35.420580 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 1 00:25:35.420591 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 1 00:25:35.420602 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 1 00:25:35.420613 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 1 00:25:35.420624 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 1 00:25:35.420636 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 1 00:25:35.420651 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 1 00:25:35.420662 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 1 00:25:35.420670 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 1 00:25:35.420679 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 1 00:25:35.420687 systemd[1]: Reached target paths.target - Path Units.
Nov 1 00:25:35.420696 systemd[1]: Reached target slices.target - Slice Units.
Nov 1 00:25:35.420704 systemd[1]: Reached target swap.target - Swaps.
Nov 1 00:25:35.420715 systemd[1]: Reached target timers.target - Timer Units.
Nov 1 00:25:35.420724 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 1 00:25:35.420733 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 1 00:25:35.420741 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 1 00:25:35.420750 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 1 00:25:35.420758 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 1 00:25:35.420767 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 1 00:25:35.420775 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 1 00:25:35.420786 systemd[1]: Reached target sockets.target - Socket Units.
Nov 1 00:25:35.420795 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 1 00:25:35.420804 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 1 00:25:35.420812 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 1 00:25:35.420821 systemd[1]: Starting systemd-fsck-usr.service...
Nov 1 00:25:35.420830 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 1 00:25:35.420838 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 1 00:25:35.420847 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 1 00:25:35.420856 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 1 00:25:35.420867 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 1 00:25:35.420895 systemd-journald[194]: Collecting audit messages is disabled.
Nov 1 00:25:35.420914 systemd-journald[194]: Journal started
Nov 1 00:25:35.420935 systemd-journald[194]: Runtime Journal (/run/log/journal/d46268157d3149c89c816d394915ada1) is 6.0M, max 48.3M, 42.2M free.
Nov 1 00:25:35.466560 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 1 00:25:35.468707 systemd-modules-load[195]: Inserted module 'overlay'
Nov 1 00:25:35.468815 systemd[1]: Finished systemd-fsck-usr.service.
Nov 1 00:25:35.483818 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 1 00:25:35.509702 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 1 00:25:35.515887 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 00:25:35.520866 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 1 00:25:35.527771 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 1 00:25:35.529456 systemd-modules-load[195]: Inserted module 'br_netfilter'
Nov 1 00:25:35.531143 kernel: Bridge firewalling registered
Nov 1 00:25:35.550874 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 1 00:25:35.565791 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 1 00:25:35.566226 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 1 00:25:35.567273 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 1 00:25:35.570415 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 1 00:25:35.576470 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 1 00:25:35.580202 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 1 00:25:35.594347 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 1 00:25:35.601446 dracut-cmdline[225]: dracut-dracut-053
Nov 1 00:25:35.602695 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 1 00:25:35.620298 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 1 00:25:35.629986 dracut-cmdline[225]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478
Nov 1 00:25:35.655620 systemd-resolved[235]: Positive Trust Anchors:
Nov 1 00:25:35.655642 systemd-resolved[235]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 1 00:25:35.655674 systemd-resolved[235]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 1 00:25:35.658493 systemd-resolved[235]: Defaulting to hostname 'linux'.
Nov 1 00:25:35.659787 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 1 00:25:35.672137 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 1 00:25:35.825540 kernel: SCSI subsystem initialized
Nov 1 00:25:35.835556 kernel: Loading iSCSI transport class v2.0-870.
Nov 1 00:25:35.846549 kernel: iscsi: registered transport (tcp)
Nov 1 00:25:35.877345 kernel: iscsi: registered transport (qla4xxx)
Nov 1 00:25:35.877370 kernel: QLogic iSCSI HBA Driver
Nov 1 00:25:35.932442 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 1 00:25:35.941804 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 1 00:25:35.986983 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 1 00:25:35.987052 kernel: device-mapper: uevent: version 1.0.3
Nov 1 00:25:35.989038 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Nov 1 00:25:36.039563 kernel: raid6: avx2x4 gen() 28353 MB/s
Nov 1 00:25:36.063568 kernel: raid6: avx2x2 gen() 30180 MB/s
Nov 1 00:25:36.086634 kernel: raid6: avx2x1 gen() 24454 MB/s
Nov 1 00:25:36.086685 kernel: raid6: using algorithm avx2x2 gen() 30180 MB/s
Nov 1 00:25:36.115083 kernel: raid6: .... xor() 19722 MB/s, rmw enabled
Nov 1 00:25:36.115135 kernel: raid6: using avx2x2 recovery algorithm
Nov 1 00:25:36.147567 kernel: xor: automatically using best checksumming function avx
Nov 1 00:25:36.316568 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 1 00:25:36.331763 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 1 00:25:36.345011 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 1 00:25:36.361852 systemd-udevd[415]: Using default interface naming scheme 'v255'.
Nov 1 00:25:36.368966 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 1 00:25:36.419721 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 1 00:25:36.435471 dracut-pre-trigger[418]: rd.md=0: removing MD RAID activation
Nov 1 00:25:36.473651 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 1 00:25:36.492786 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 1 00:25:36.565940 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 1 00:25:36.588248 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 1 00:25:36.624672 kernel: cryptd: max_cpu_qlen set to 1000
Nov 1 00:25:36.632058 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 1 00:25:36.669893 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 1 00:25:36.687553 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Nov 1 00:25:36.687811 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 1 00:25:36.688349 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 1 00:25:36.692255 kernel: AES CTR mode by8 optimization enabled
Nov 1 00:25:36.692284 kernel: libata version 3.00 loaded.
Nov 1 00:25:36.719477 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Nov 1 00:25:36.720337 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 1 00:25:36.730270 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 1 00:25:36.730318 kernel: GPT:9289727 != 19775487
Nov 1 00:25:36.730332 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 1 00:25:36.730345 kernel: GPT:9289727 != 19775487
Nov 1 00:25:36.730358 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 1 00:25:36.730371 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 1 00:25:36.730384 kernel: ahci 0000:00:1f.2: version 3.0
Nov 1 00:25:36.730675 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Nov 1 00:25:36.737547 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Nov 1 00:25:36.737742 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Nov 1 00:25:36.741745 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 1 00:25:36.760365 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 1 00:25:36.770191 kernel: scsi host0: ahci
Nov 1 00:25:36.770425 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (471)
Nov 1 00:25:36.770446 kernel: scsi host1: ahci
Nov 1 00:25:36.770703 kernel: BTRFS: device fsid 5d5360dd-ce7d-46d0-bc66-772f2084023b devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (472)
Nov 1 00:25:36.770716 kernel: scsi host2: ahci
Nov 1 00:25:36.774545 kernel: scsi host3: ahci
Nov 1 00:25:36.776570 kernel: scsi host4: ahci
Nov 1 00:25:36.779815 kernel: scsi host5: ahci
Nov 1 00:25:36.780005 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Nov 1 00:25:36.780023 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Nov 1 00:25:36.782497 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Nov 1 00:25:36.782555 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Nov 1 00:25:36.784025 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Nov 1 00:25:36.784212 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Nov 1 00:25:36.789282 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Nov 1 00:25:36.805272 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Nov 1 00:25:36.817486 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 1 00:25:36.823668 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Nov 1 00:25:36.825745 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Nov 1 00:25:36.843658 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 1 00:25:36.845566 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 1 00:25:36.845627 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 1 00:25:36.850196 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 1 00:25:36.853839 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 1 00:25:36.853928 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 00:25:36.858373 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 1 00:25:36.863361 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 1 00:25:36.885292 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 00:25:36.914712 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 1 00:25:36.934363 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 1 00:25:37.043053 disk-uuid[562]: Primary Header is updated.
Nov 1 00:25:37.043053 disk-uuid[562]: Secondary Entries is updated.
Nov 1 00:25:37.043053 disk-uuid[562]: Secondary Header is updated.
Nov 1 00:25:37.076548 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 1 00:25:37.081568 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 1 00:25:37.107226 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Nov 1 00:25:37.107297 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Nov 1 00:25:37.107309 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Nov 1 00:25:37.107319 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Nov 1 00:25:37.108820 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Nov 1 00:25:37.109534 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Nov 1 00:25:37.122730 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 1 00:25:37.138917 kernel: ata3.00: applying bridge limits
Nov 1 00:25:37.138957 kernel: ata3.00: configured for UDMA/100
Nov 1 00:25:37.144584 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Nov 1 00:25:37.203493 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 1 00:25:37.203846 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 1 00:25:37.220749 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Nov 1 00:25:38.122572 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 1 00:25:38.122644 disk-uuid[577]: The operation has completed successfully.
Nov 1 00:25:38.153623 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 1 00:25:38.153750 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 1 00:25:38.198872 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 1 00:25:38.203143 sh[593]: Success
Nov 1 00:25:38.216767 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Nov 1 00:25:38.248930 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 1 00:25:38.264049 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 1 00:25:38.266574 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 1 00:25:38.286496 kernel: BTRFS info (device dm-0): first mount of filesystem 5d5360dd-ce7d-46d0-bc66-772f2084023b
Nov 1 00:25:38.286569 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 1 00:25:38.286596 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Nov 1 00:25:38.288391 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 1 00:25:38.289759 kernel: BTRFS info (device dm-0): using free space tree
Nov 1 00:25:38.295650 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 1 00:25:38.296370 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 1 00:25:38.304688 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 1 00:25:38.307951 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 1 00:25:38.336847 kernel: BTRFS info (device vda6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571
Nov 1 00:25:38.336892 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 1 00:25:38.336907 kernel: BTRFS info (device vda6): using free space tree
Nov 1 00:25:38.341548 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 1 00:25:38.350350 systemd[1]: mnt-oem.mount: Deactivated successfully.
Nov 1 00:25:38.353720 kernel: BTRFS info (device vda6): last unmount of filesystem 92f9034d-7d56-482a-b71a-15e476525571
Nov 1 00:25:38.434308 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 1 00:25:38.455665 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 1 00:25:38.514054 systemd-networkd[771]: lo: Link UP
Nov 1 00:25:38.514063 systemd-networkd[771]: lo: Gained carrier
Nov 1 00:25:38.515764 systemd-networkd[771]: Enumeration completed
Nov 1 00:25:38.515880 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 1 00:25:38.516195 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 1 00:25:38.516199 systemd-networkd[771]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 1 00:25:38.525182 systemd-networkd[771]: eth0: Link UP
Nov 1 00:25:38.525186 systemd-networkd[771]: eth0: Gained carrier
Nov 1 00:25:38.525193 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 1 00:25:38.528492 systemd[1]: Reached target network.target - Network.
Nov 1 00:25:38.589629 systemd-networkd[771]: eth0: DHCPv4 address 10.0.0.119/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 1 00:25:38.658709 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 1 00:25:38.680682 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 1 00:25:38.763255 ignition[776]: Ignition 2.19.0
Nov 1 00:25:38.763266 ignition[776]: Stage: fetch-offline
Nov 1 00:25:38.763317 ignition[776]: no configs at "/usr/lib/ignition/base.d"
Nov 1 00:25:38.763333 ignition[776]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 1 00:25:38.763440 ignition[776]: parsed url from cmdline: ""
Nov 1 00:25:38.763445 ignition[776]: no config URL provided
Nov 1 00:25:38.763452 ignition[776]: reading system config file "/usr/lib/ignition/user.ign"
Nov 1 00:25:38.763464 ignition[776]: no config at "/usr/lib/ignition/user.ign"
Nov 1 00:25:38.763496 ignition[776]: op(1): [started] loading QEMU firmware config module
Nov 1 00:25:38.763503 ignition[776]: op(1): executing: "modprobe" "qemu_fw_cfg"
Nov 1 00:25:38.795534 ignition[776]: op(1): [finished] loading QEMU firmware config module
Nov 1 00:25:38.877102 ignition[776]: parsing config with SHA512: 1f5617db8c9762ae6b8e8ba76ad816e5072a5c0d81c184a10e3e76649556c8c4e8e1c76419dccfe442a64ed2022cedbe72ce8f5ee88b6666d0e9c396e1950068
Nov 1 00:25:38.881057 unknown[776]: fetched base config from "system"
Nov 1 00:25:38.881070 unknown[776]: fetched user config from "qemu"
Nov 1 00:25:38.881691 ignition[776]: fetch-offline: fetch-offline passed
Nov 1 00:25:38.884051 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 1 00:25:38.881770 ignition[776]: Ignition finished successfully
Nov 1 00:25:38.884486 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Nov 1 00:25:38.895806 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 1 00:25:38.908911 ignition[786]: Ignition 2.19.0
Nov 1 00:25:38.908922 ignition[786]: Stage: kargs
Nov 1 00:25:38.909095 ignition[786]: no configs at "/usr/lib/ignition/base.d"
Nov 1 00:25:38.909107 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 1 00:25:38.909933 ignition[786]: kargs: kargs passed
Nov 1 00:25:38.909978 ignition[786]: Ignition finished successfully
Nov 1 00:25:38.930726 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 1 00:25:38.935243 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 1 00:25:39.014196 ignition[794]: Ignition 2.19.0
Nov 1 00:25:39.014209 ignition[794]: Stage: disks
Nov 1 00:25:39.014479 ignition[794]: no configs at "/usr/lib/ignition/base.d"
Nov 1 00:25:39.014496 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 1 00:25:39.015407 ignition[794]: disks: disks passed
Nov 1 00:25:39.015456 ignition[794]: Ignition finished successfully
Nov 1 00:25:39.024609 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 1 00:25:39.027877 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 1 00:25:39.027977 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 1 00:25:39.077067 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 1 00:25:39.077922 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 1 00:25:39.087107 systemd[1]: Reached target basic.target - Basic System.
Nov 1 00:25:39.099682 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 1 00:25:39.212696 systemd-fsck[805]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Nov 1 00:25:39.700302 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 1 00:25:39.716700 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 1 00:25:39.876587 kernel: EXT4-fs (vda9): mounted filesystem cb9d31b8-5e00-461c-b45e-c304d1f8091c r/w with ordered data mode. Quota mode: none.
Nov 1 00:25:39.877337 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 1 00:25:39.880246 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 1 00:25:39.895672 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 1 00:25:39.913591 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (813)
Nov 1 00:25:39.913625 kernel: BTRFS info (device vda6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571
Nov 1 00:25:39.913637 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 1 00:25:39.913647 kernel: BTRFS info (device vda6): using free space tree
Nov 1 00:25:39.904323 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 1 00:25:39.919276 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 1 00:25:39.913986 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 1 00:25:39.914058 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 1 00:25:39.914097 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 1 00:25:39.921226 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 1 00:25:39.924119 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 1 00:25:39.941706 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 1 00:25:39.979402 initrd-setup-root[837]: cut: /sysroot/etc/passwd: No such file or directory
Nov 1 00:25:39.987507 initrd-setup-root[844]: cut: /sysroot/etc/group: No such file or directory
Nov 1 00:25:39.994068 initrd-setup-root[851]: cut: /sysroot/etc/shadow: No such file or directory
Nov 1 00:25:39.999323 initrd-setup-root[858]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 1 00:25:40.107016 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 1 00:25:40.117649 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 1 00:25:40.118659 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 1 00:25:40.133371 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 1 00:25:40.189305 kernel: BTRFS info (device vda6): last unmount of filesystem 92f9034d-7d56-482a-b71a-15e476525571
Nov 1 00:25:40.198542 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 1 00:25:40.301963 ignition[930]: INFO : Ignition 2.19.0
Nov 1 00:25:40.301963 ignition[930]: INFO : Stage: mount
Nov 1 00:25:40.305616 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 1 00:25:40.305616 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 1 00:25:40.305616 ignition[930]: INFO : mount: mount passed
Nov 1 00:25:40.305616 ignition[930]: INFO : Ignition finished successfully
Nov 1 00:25:40.305971 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 1 00:25:40.372639 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 1 00:25:40.382886 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 1 00:25:40.415266 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (940)
Nov 1 00:25:40.415332 kernel: BTRFS info (device vda6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571
Nov 1 00:25:40.415349 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 1 00:25:40.418432 kernel: BTRFS info (device vda6): using free space tree
Nov 1 00:25:40.422557 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 1 00:25:40.425657 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 1 00:25:40.463619 ignition[957]: INFO : Ignition 2.19.0
Nov 1 00:25:40.463619 ignition[957]: INFO : Stage: files
Nov 1 00:25:40.482155 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 1 00:25:40.482155 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 1 00:25:40.482155 ignition[957]: DEBUG : files: compiled without relabeling support, skipping
Nov 1 00:25:40.482155 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 1 00:25:40.482155 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 1 00:25:40.467633 systemd-networkd[771]: eth0: Gained IPv6LL
Nov 1 00:25:40.495286 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 1 00:25:40.495286 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 1 00:25:40.495286 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 1 00:25:40.495286 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Nov 1 00:25:40.495286 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Nov 1 00:25:40.495286 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Nov 1 00:25:40.495286 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Nov 1 00:25:40.484364 unknown[957]: wrote ssh authorized keys file for user: core
Nov 1 00:25:40.532996 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Nov 1 00:25:40.601107 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Nov 1 00:25:40.601107 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Nov 1 00:25:40.620160 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Nov 1 00:25:40.623383 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 1 00:25:40.626660 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 1 00:25:40.629823 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 1 00:25:40.633042 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 1 00:25:40.636295 ignition[957]: INFO : files:
createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 1 00:25:40.639631 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 1 00:25:40.643298 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 1 00:25:40.691819 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 1 00:25:40.694997 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 1 00:25:40.699640 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 1 00:25:40.699640 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 1 00:25:40.708207 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Nov 1 00:25:41.135379 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Nov 1 00:25:42.411499 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 1 00:25:42.411499 ignition[957]: INFO : files: op(c): [started] processing unit "containerd.service" Nov 1 00:25:42.668405 ignition[957]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Nov 1 00:25:42.719337 ignition[957]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Nov 1 00:25:42.719337 ignition[957]: INFO : files: op(c): [finished] processing unit "containerd.service" Nov 1 00:25:42.719337 ignition[957]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Nov 1 00:25:42.719337 ignition[957]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 1 00:25:42.719337 ignition[957]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 1 00:25:42.719337 ignition[957]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Nov 1 00:25:42.719337 ignition[957]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Nov 1 00:25:42.719337 ignition[957]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 1 00:25:42.719337 ignition[957]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 1 00:25:42.719337 ignition[957]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Nov 1 00:25:42.719337 ignition[957]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service" Nov 1 00:25:42.879223 ignition[957]: 
INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service" Nov 1 00:25:42.879223 ignition[957]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service" Nov 1 00:25:42.879223 ignition[957]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service" Nov 1 00:25:42.879223 ignition[957]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service" Nov 1 00:25:42.879223 ignition[957]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service" Nov 1 00:25:42.879223 ignition[957]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 1 00:25:42.879223 ignition[957]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 1 00:25:42.879223 ignition[957]: INFO : files: files passed Nov 1 00:25:42.879223 ignition[957]: INFO : Ignition finished successfully Nov 1 00:25:42.856655 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 1 00:25:42.885905 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 1 00:25:42.890447 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 1 00:25:42.894423 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 1 00:25:42.929234 initrd-setup-root-after-ignition[985]: grep: /sysroot/oem/oem-release: No such file or directory Nov 1 00:25:42.894566 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 1 00:25:42.957730 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 1 00:25:42.957730 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 1 00:25:42.907798 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 1 00:25:42.965214 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 1 00:25:42.917571 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 1 00:25:42.928747 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 1 00:25:42.973902 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 1 00:25:42.974087 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 1 00:25:43.009974 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 1 00:25:43.013672 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 1 00:25:43.017004 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 1 00:25:43.025743 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 1 00:25:43.049320 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 1 00:25:43.055453 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 1 00:25:43.088931 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 1 00:25:43.089135 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 1 00:25:43.089411 systemd[1]: Stopped target timers.target - Timer Units. 
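Taken together, the files stage above creates the core user, fetches helm, writes the config files and the sysext symlink, installs units, and flips presets. A condensed Butane sketch that would produce a similar operation sequence (keys, URLs, and contents abridged and illustrative; Butane transpiles this into the Ignition JSON being executed here):

    variant: flatcar
    version: 1.0.0
    passwd:
      users:
        - name: core
          ssh_authorized_keys:
            - ssh-ed25519 AAAA...   # placeholder key
    storage:
      files:
        - path: /opt/helm-v3.17.0-linux-amd64.tar.gz
          contents:
            source: https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz
      links:
        - path: /etc/extensions/kubernetes.raw
          target: /opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw
    systemd:
      units:
        - name: prepare-helm.service
          enabled: true   # corresponds to "setting preset to enabled" above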
Nov 1 00:25:43.170131 ignition[1012]: INFO : Ignition 2.19.0 Nov 1 00:25:43.170131 ignition[1012]: INFO : Stage: umount Nov 1 00:25:43.170131 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 1 00:25:43.170131 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 1 00:25:43.170131 ignition[1012]: INFO : umount: umount passed Nov 1 00:25:43.170131 ignition[1012]: INFO : Ignition finished successfully Nov 1 00:25:43.089943 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 1 00:25:43.090070 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 1 00:25:43.090507 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 1 00:25:43.091111 systemd[1]: Stopped target basic.target - Basic System. Nov 1 00:25:43.091379 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 1 00:25:43.091930 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 1 00:25:43.092240 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 1 00:25:43.092514 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 1 00:25:43.092788 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 1 00:25:43.093076 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 1 00:25:43.093348 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 1 00:25:43.093891 systemd[1]: Stopped target swap.target - Swaps. Nov 1 00:25:43.094149 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 1 00:25:43.094262 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 1 00:25:43.094999 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 1 00:25:43.095298 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 1 00:25:43.095547 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 1 00:25:43.095681 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 1 00:25:43.095822 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 1 00:25:43.095943 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 1 00:25:43.096379 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 1 00:25:43.096491 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 1 00:25:43.097010 systemd[1]: Stopped target paths.target - Path Units. Nov 1 00:25:43.097216 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 1 00:25:43.097333 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 1 00:25:43.097550 systemd[1]: Stopped target slices.target - Slice Units. Nov 1 00:25:43.097803 systemd[1]: Stopped target sockets.target - Socket Units. Nov 1 00:25:43.098089 systemd[1]: iscsid.socket: Deactivated successfully. Nov 1 00:25:43.098189 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 1 00:25:43.098394 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 1 00:25:43.098485 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 1 00:25:43.098940 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 1 00:25:43.099062 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. 
Nov 1 00:25:43.099231 systemd[1]: ignition-files.service: Deactivated successfully. Nov 1 00:25:43.099338 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 1 00:25:43.100380 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 1 00:25:43.101538 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 1 00:25:43.101962 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 1 00:25:43.102078 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 1 00:25:43.102329 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 1 00:25:43.102428 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 1 00:25:43.105588 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 1 00:25:43.105699 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 1 00:25:43.121802 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 1 00:25:43.121942 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 1 00:25:43.122462 systemd[1]: Stopped target network.target - Network. Nov 1 00:25:43.122899 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 1 00:25:43.122953 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 1 00:25:43.123216 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 1 00:25:43.123263 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 1 00:25:43.123484 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 1 00:25:43.123554 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 1 00:25:43.123759 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 1 00:25:43.123814 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 1 00:25:43.629409 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). Nov 1 00:25:43.124211 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 1 00:25:43.124443 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 1 00:25:43.129808 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 1 00:25:43.165648 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 1 00:25:43.165792 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 1 00:25:43.165835 systemd-networkd[771]: eth0: DHCPv6 lease lost Nov 1 00:25:43.170102 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 1 00:25:43.170299 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 1 00:25:43.173823 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 1 00:25:43.173946 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 1 00:25:43.177213 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 1 00:25:43.177291 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 1 00:25:43.179820 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 1 00:25:43.179878 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 1 00:25:43.191607 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 1 00:25:43.193977 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 1 00:25:43.194053 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 1 00:25:43.197880 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Nov 1 00:25:43.197936 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 1 00:25:43.201497 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 1 00:25:43.201569 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 1 00:25:43.203452 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 1 00:25:43.203503 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 1 00:25:43.207436 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 1 00:25:43.232486 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 1 00:25:43.232646 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 1 00:25:43.236314 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 1 00:25:43.236501 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 1 00:25:43.269942 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 1 00:25:43.270072 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 1 00:25:43.272787 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 1 00:25:43.272832 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 1 00:25:43.289191 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 1 00:25:43.289258 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 1 00:25:43.293318 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 1 00:25:43.293373 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 1 00:25:43.297432 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 1 00:25:43.297491 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 1 00:25:43.368953 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 1 00:25:43.372715 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 1 00:25:43.372799 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 1 00:25:43.376811 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 1 00:25:43.376866 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 1 00:25:43.381083 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 1 00:25:43.381139 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 1 00:25:43.383479 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 1 00:25:43.383550 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 00:25:43.401702 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 1 00:25:43.401820 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 1 00:25:43.439050 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 1 00:25:43.454982 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 1 00:25:43.507307 systemd[1]: Switching root. 
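"Switching root." is the initrd handing control to the assembled root filesystem; conceptually it is the same operation systemd exposes on the command line:

    # Conceptual equivalent of the hand-off logged above (illustrative):
    systemctl switch-root /sysroot /usr/lib/systemd/systemd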
Nov 1 00:25:43.875076 systemd-journald[194]: Journal stopped Nov 1 00:25:45.937814 kernel: SELinux: policy capability network_peer_controls=1 Nov 1 00:25:45.937898 kernel: SELinux: policy capability open_perms=1 Nov 1 00:25:45.937919 kernel: SELinux: policy capability extended_socket_class=1 Nov 1 00:25:45.937934 kernel: SELinux: policy capability always_check_network=0 Nov 1 00:25:45.937948 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 1 00:25:45.937972 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 1 00:25:45.937987 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 1 00:25:45.938003 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 1 00:25:45.938027 kernel: audit: type=1403 audit(1761956744.771:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 1 00:25:45.938048 systemd[1]: Successfully loaded SELinux policy in 100.325ms. Nov 1 00:25:45.938075 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.696ms. Nov 1 00:25:45.938098 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 1 00:25:45.938121 systemd[1]: Detected virtualization kvm. Nov 1 00:25:45.938138 systemd[1]: Detected architecture x86-64. Nov 1 00:25:45.938154 systemd[1]: Detected first boot. Nov 1 00:25:45.938171 systemd[1]: Initializing machine ID from VM UUID. Nov 1 00:25:45.938187 zram_generator::config[1074]: No configuration found. Nov 1 00:25:45.938205 systemd[1]: Populated /etc with preset unit settings. Nov 1 00:25:45.938223 systemd[1]: Queued start job for default target multi-user.target. Nov 1 00:25:45.938244 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Nov 1 00:25:45.938262 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 1 00:25:45.938280 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 1 00:25:45.938297 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 1 00:25:45.938313 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 1 00:25:45.938330 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 1 00:25:45.938346 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 1 00:25:45.938363 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 1 00:25:45.938380 systemd[1]: Created slice user.slice - User and Session Slice. Nov 1 00:25:45.938401 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 1 00:25:45.938417 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 1 00:25:45.938435 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 1 00:25:45.938452 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 1 00:25:45.938476 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 1 00:25:45.938492 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
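"Initializing machine ID from VM UUID" means the first-boot identity is derived from the hypervisor-provided DMI UUID instead of being generated randomly, so the machine ID stays stable across reinstalls of the same VM. The pieces involved can be inspected directly (paths are standard, values host-specific):

    cat /sys/class/dmi/id/product_uuid   # UUID QEMU/KVM exposes to the guest
    cat /etc/machine-id                  # identity persisted after first boot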
Nov 1 00:25:45.938507 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 1 00:25:45.938545 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 1 00:25:45.938563 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 1 00:25:45.938583 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 1 00:25:45.938599 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 1 00:25:45.938615 systemd[1]: Reached target slices.target - Slice Units. Nov 1 00:25:45.938630 systemd[1]: Reached target swap.target - Swaps. Nov 1 00:25:45.938646 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 1 00:25:45.938661 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 1 00:25:45.938678 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 1 00:25:45.938694 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Nov 1 00:25:45.938713 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 1 00:25:45.938729 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 1 00:25:45.938745 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 1 00:25:45.938761 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 1 00:25:45.938783 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 1 00:25:45.938799 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 1 00:25:45.938817 systemd[1]: Mounting media.mount - External Media Directory... Nov 1 00:25:45.938833 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:25:45.938849 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 1 00:25:45.938868 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 1 00:25:45.938884 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 1 00:25:45.938900 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 1 00:25:45.938936 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 1 00:25:45.938974 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 1 00:25:45.939010 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 1 00:25:45.939028 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 1 00:25:45.939045 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 1 00:25:45.939066 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 1 00:25:45.939083 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 1 00:25:45.939100 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 1 00:25:45.939118 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 1 00:25:45.939134 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Nov 1 00:25:45.939152 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) 
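The BPF/cgroup firewalling warning refers to per-unit IP access lists carried by systemd-journald.service; on setups without cgroup/BPF firewall support they are logged once and not enforced. The directive in question looks like this excerpt (illustrative, check the unit shipped on the image):

    [Service]
    IPAddressDeny=any   # per-unit BPF firewall; inert without BPF/cgroup support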
Nov 1 00:25:45.939169 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 1 00:25:45.939185 kernel: loop: module loaded Nov 1 00:25:45.939229 systemd-journald[1151]: Collecting audit messages is disabled. Nov 1 00:25:45.939268 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 1 00:25:45.939287 systemd-journald[1151]: Journal started Nov 1 00:25:45.939315 systemd-journald[1151]: Runtime Journal (/run/log/journal/d46268157d3149c89c816d394915ada1) is 6.0M, max 48.3M, 42.2M free. Nov 1 00:25:45.942636 kernel: fuse: init (API version 7.39) Nov 1 00:25:45.942691 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 1 00:25:45.953720 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 1 00:25:45.983659 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 1 00:25:45.988974 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:25:45.994553 systemd[1]: Started systemd-journald.service - Journal Service. Nov 1 00:25:45.997513 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 1 00:25:46.009564 kernel: ACPI: bus type drm_connector registered Nov 1 00:25:46.011380 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 1 00:25:46.013436 systemd[1]: Mounted media.mount - External Media Directory. Nov 1 00:25:46.015276 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 1 00:25:46.017356 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 1 00:25:46.019486 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 1 00:25:46.021656 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 1 00:25:46.024193 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 1 00:25:46.024459 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 1 00:25:46.043662 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:25:46.043928 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 1 00:25:46.046266 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 1 00:25:46.046701 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 1 00:25:46.048934 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:25:46.049217 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 1 00:25:46.051710 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 1 00:25:46.051991 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 1 00:25:46.054213 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:25:46.054500 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 1 00:25:46.056922 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 1 00:25:46.059253 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 1 00:25:46.062040 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 1 00:25:46.151130 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 1 00:25:46.169648 systemd[1]: Reached target network-pre.target - Preparation for Network. 
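The modprobe@configfs/dm_mod/drm/efi_pstore/fuse/loop services above are all instances of a single template unit that simply modprobes its instance name. An abridged sketch of that template (the upstream unit carries a few more directives):

    # /usr/lib/systemd/system/modprobe@.service (abridged, illustrative)
    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no

    [Service]
    Type=oneshot
    ExecStart=-/sbin/modprobe -abq %i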
Nov 1 00:25:46.187641 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 1 00:25:46.193452 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 1 00:25:46.195461 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 1 00:25:46.198027 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 1 00:25:46.201786 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 1 00:25:46.204133 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:25:46.205832 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 1 00:25:46.209042 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 1 00:25:46.212315 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 1 00:25:46.215200 systemd-journald[1151]: Time spent on flushing to /var/log/journal/d46268157d3149c89c816d394915ada1 is 18.679ms for 979 entries. Nov 1 00:25:46.215200 systemd-journald[1151]: System Journal (/var/log/journal/d46268157d3149c89c816d394915ada1) is 8.0M, max 195.6M, 187.6M free. Nov 1 00:25:47.050727 systemd-journald[1151]: Received client request to flush runtime journal. Nov 1 00:25:46.228720 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 1 00:25:46.234092 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Nov 1 00:25:46.239591 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 1 00:25:46.241917 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 1 00:25:46.272173 udevadm[1211]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Nov 1 00:25:46.334775 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 1 00:25:46.364183 systemd-tmpfiles[1210]: ACLs are not supported, ignoring. Nov 1 00:25:46.364202 systemd-tmpfiles[1210]: ACLs are not supported, ignoring. Nov 1 00:25:46.373451 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 1 00:25:46.469268 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 1 00:25:46.496660 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 1 00:25:46.570674 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 1 00:25:46.583837 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 1 00:25:46.601151 systemd-tmpfiles[1224]: ACLs are not supported, ignoring. Nov 1 00:25:46.601167 systemd-tmpfiles[1224]: ACLs are not supported, ignoring. Nov 1 00:25:46.607004 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 1 00:25:46.839674 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 1 00:25:46.862425 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 1 00:25:47.053400 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. 
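The flush above moves logging from the 6.0M runtime journal in /run to the 8.0M persistent system journal under /var/log/journal. Both sides of the hand-off can be checked with standard commands:

    journalctl --disk-usage   # space consumed by persisted journal files
    journalctl --flush        # same trigger systemd-journal-flush.service uses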
Nov 1 00:25:47.485298 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 1 00:25:47.499676 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 1 00:25:47.525895 systemd-udevd[1238]: Using default interface naming scheme 'v255'. Nov 1 00:25:47.542585 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 1 00:25:47.562828 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 1 00:25:47.582814 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 1 00:25:47.603013 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1244) Nov 1 00:25:47.602053 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Nov 1 00:25:47.742565 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Nov 1 00:25:47.752567 kernel: ACPI: button: Power Button [PWRF] Nov 1 00:25:47.758545 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Nov 1 00:25:47.764207 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Nov 1 00:25:47.765086 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Nov 1 00:25:47.765285 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Nov 1 00:25:47.811160 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Nov 1 00:25:47.814783 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 00:25:47.817001 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 1 00:25:47.862625 kernel: mousedev: PS/2 mouse device common for all mice Nov 1 00:25:47.859415 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 1 00:25:47.859861 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 00:25:47.926123 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 00:25:47.949573 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 1 00:25:47.962069 kernel: kvm_amd: TSC scaling supported Nov 1 00:25:47.962103 kernel: kvm_amd: Nested Virtualization enabled Nov 1 00:25:47.962131 kernel: kvm_amd: Nested Paging enabled Nov 1 00:25:47.963124 kernel: kvm_amd: LBR virtualization supported Nov 1 00:25:47.964207 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Nov 1 00:25:47.965559 kernel: kvm_amd: Virtual GIF supported Nov 1 00:25:47.979241 systemd-networkd[1249]: lo: Link UP Nov 1 00:25:47.979251 systemd-networkd[1249]: lo: Gained carrier Nov 1 00:25:47.980961 systemd-networkd[1249]: Enumeration completed Nov 1 00:25:47.981374 systemd-networkd[1249]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 1 00:25:47.981378 systemd-networkd[1249]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 00:25:47.982041 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 1 00:25:47.982211 systemd-networkd[1249]: eth0: Link UP Nov 1 00:25:47.982216 systemd-networkd[1249]: eth0: Gained carrier Nov 1 00:25:47.982228 systemd-networkd[1249]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 1 00:25:47.989547 kernel: EDAC MC: Ver: 3.0.0 Nov 1 00:25:47.990850 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
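eth0 matched the catch-all zz-default.network shipped on the image; the zz- prefix makes it sort last so any more specific .network unit wins. Its effective behavior is conceptually similar to this excerpt (illustrative, see /usr/lib/systemd/network/zz-default.network for the real file):

    [Match]
    Name=*

    [Network]
    DHCP=yes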
Nov 1 00:25:47.995578 systemd-networkd[1249]: eth0: DHCPv4 address 10.0.0.119/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 1 00:25:48.003711 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 00:25:48.015897 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Nov 1 00:25:48.025677 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Nov 1 00:25:48.037482 lvm[1290]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 1 00:25:48.092675 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Nov 1 00:25:48.094965 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 1 00:25:48.108646 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Nov 1 00:25:48.113792 lvm[1293]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 1 00:25:48.153624 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Nov 1 00:25:48.155958 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 1 00:25:48.158108 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 1 00:25:48.158139 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 1 00:25:48.159870 systemd[1]: Reached target machines.target - Containers. Nov 1 00:25:48.162641 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Nov 1 00:25:48.176772 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 1 00:25:48.180470 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 1 00:25:48.182389 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 1 00:25:48.183398 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 1 00:25:48.186939 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Nov 1 00:25:48.191741 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 1 00:25:48.195325 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 1 00:25:48.200218 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 1 00:25:48.379560 kernel: loop0: detected capacity change from 0 to 224512 Nov 1 00:25:48.393555 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 1 00:25:48.425552 kernel: loop1: detected capacity change from 0 to 142488 Nov 1 00:25:48.566542 kernel: loop2: detected capacity change from 0 to 140768 Nov 1 00:25:48.615781 kernel: loop3: detected capacity change from 0 to 224512 Nov 1 00:25:48.629542 kernel: loop4: detected capacity change from 0 to 142488 Nov 1 00:25:48.644550 kernel: loop5: detected capacity change from 0 to 140768 Nov 1 00:25:48.653813 (sd-merge)[1311]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Nov 1 00:25:48.654634 (sd-merge)[1311]: Merged extensions into '/usr'. Nov 1 00:25:48.658937 systemd[1]: Reloading requested from client PID 1301 ('systemd-sysext') (unit systemd-sysext.service)... 
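The (sd-merge) lines show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, and kubernetes extension images onto /usr, which is what triggers the reload that follows. The merge can be inspected or redone with the standard tool:

    systemd-sysext status    # which hierarchies currently have extensions merged
    systemd-sysext refresh   # re-merge after changing /etc/extensions contents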
Nov 1 00:25:48.658953 systemd[1]: Reloading... Nov 1 00:25:48.761658 zram_generator::config[1340]: No configuration found. Nov 1 00:25:48.799485 ldconfig[1298]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 1 00:25:48.894962 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:25:48.963178 systemd[1]: Reloading finished in 303 ms. Nov 1 00:25:48.985970 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 1 00:25:49.002641 systemd[1]: Starting ensure-sysext.service... Nov 1 00:25:49.005116 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 1 00:25:49.008923 systemd[1]: Reloading requested from client PID 1382 ('systemctl') (unit ensure-sysext.service)... Nov 1 00:25:49.008940 systemd[1]: Reloading... Nov 1 00:25:49.054758 systemd-tmpfiles[1383]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 1 00:25:49.055179 systemd-tmpfiles[1383]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 1 00:25:49.056235 systemd-tmpfiles[1383]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 1 00:25:49.056730 systemd-tmpfiles[1383]: ACLs are not supported, ignoring. Nov 1 00:25:49.056871 systemd-tmpfiles[1383]: ACLs are not supported, ignoring. Nov 1 00:25:49.062738 systemd-tmpfiles[1383]: Detected autofs mount point /boot during canonicalization of boot. Nov 1 00:25:49.062826 systemd-tmpfiles[1383]: Skipping /boot Nov 1 00:25:49.077400 systemd-tmpfiles[1383]: Detected autofs mount point /boot during canonicalization of boot. Nov 1 00:25:49.077415 systemd-tmpfiles[1383]: Skipping /boot Nov 1 00:25:49.101548 zram_generator::config[1416]: No configuration found. Nov 1 00:25:49.216078 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:25:49.234657 systemd-networkd[1249]: eth0: Gained IPv6LL Nov 1 00:25:49.295037 systemd[1]: Reloading finished in 285 ms. Nov 1 00:25:49.316327 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 1 00:25:49.375174 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 1 00:25:49.386992 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 1 00:25:49.454907 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 1 00:25:49.459302 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 1 00:25:49.464799 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 1 00:25:49.476615 augenrules[1479]: No rules Nov 1 00:25:49.477750 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 1 00:25:49.481232 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 1 00:25:49.495127 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 1 00:25:49.499072 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
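The "Duplicate line for path" warnings mean two tmpfiles.d fragments declare the same path; systemd-tmpfiles keeps the first definition it parses and drops the rest, so these are informational. A hypothetical pair that would reproduce the warning:

    # /usr/lib/tmpfiles.d/10-base.conf  (parsed first, wins)
    d /var/log/journal 0755 root root -
    # /usr/lib/tmpfiles.d/20-extra.conf (duplicate, ignored with a warning)
    d /var/log/journal 2755 root systemd-journal -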
Nov 1 00:25:49.499414 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 1 00:25:49.503581 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 1 00:25:49.525157 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 1 00:25:49.530804 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 1 00:25:49.532904 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 1 00:25:49.533100 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:25:49.535026 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 1 00:25:49.537838 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 1 00:25:49.541256 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:25:49.541618 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 1 00:25:49.544477 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 1 00:25:49.547276 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:25:49.547499 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 1 00:25:49.570425 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:25:49.570693 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 1 00:25:49.582114 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:25:49.582450 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 1 00:25:49.593917 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 1 00:25:49.597305 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 1 00:25:49.601945 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 1 00:25:49.603660 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 1 00:25:49.605331 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 1 00:25:49.607002 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 1 00:25:49.607257 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:25:49.609191 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:25:49.609592 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 1 00:25:49.612101 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:25:49.612321 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 1 00:25:49.616271 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:25:49.616538 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Nov 1 00:25:49.622025 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:25:49.622256 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 1 00:25:49.637785 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 1 00:25:49.640824 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 1 00:25:49.643647 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 1 00:25:49.646691 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 1 00:25:49.648452 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 1 00:25:49.648705 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 1 00:25:49.648840 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:25:49.650103 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:25:49.650358 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 1 00:25:49.651544 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:25:49.651786 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 1 00:25:49.652674 systemd[1]: Finished ensure-sysext.service. Nov 1 00:25:49.657961 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:25:49.658211 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 1 00:25:49.660512 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 1 00:25:49.660760 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 1 00:25:49.667160 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:25:49.667289 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 1 00:25:49.676760 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 1 00:25:49.682891 systemd-resolved[1476]: Positive Trust Anchors: Nov 1 00:25:49.682909 systemd-resolved[1476]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 1 00:25:49.682941 systemd-resolved[1476]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 1 00:25:49.683891 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 1 00:25:49.696887 systemd-resolved[1476]: Defaulting to hostname 'linux'. Nov 1 00:25:49.699339 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
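The Positive/Negative Trust Anchors dump is systemd-resolved loading its built-in DNSSEC root key (the ". IN DS 20326 ..." record) plus the locally-served zones it will never forward upstream. Once the service is up, its view can be queried with the standard client:

    resolvectl status             # per-link DNS servers and DNSSEC state
    resolvectl query example.com  # resolve through the local stub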
Nov 1 00:25:49.701597 systemd[1]: Reached target network.target - Network. Nov 1 00:25:49.721934 systemd[1]: Reached target network-online.target - Network is Online. Nov 1 00:25:49.723784 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 1 00:25:49.779680 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 1 00:25:50.595987 systemd-resolved[1476]: Clock change detected. Flushing caches. Nov 1 00:25:50.596077 systemd-timesyncd[1534]: Contacted time server 10.0.0.1:123 (10.0.0.1). Nov 1 00:25:50.596122 systemd-timesyncd[1534]: Initial clock synchronization to Sat 2025-11-01 00:25:50.595920 UTC. Nov 1 00:25:50.597790 systemd[1]: Reached target sysinit.target - System Initialization. Nov 1 00:25:50.599748 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 1 00:25:50.601940 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 1 00:25:50.604104 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 1 00:25:50.606255 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 1 00:25:50.606295 systemd[1]: Reached target paths.target - Path Units. Nov 1 00:25:50.607896 systemd[1]: Reached target time-set.target - System Time Set. Nov 1 00:25:50.609847 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 1 00:25:50.611779 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 1 00:25:50.613919 systemd[1]: Reached target timers.target - Timer Units. Nov 1 00:25:50.616183 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 1 00:25:50.620551 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 1 00:25:50.623649 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 1 00:25:50.630998 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 1 00:25:50.631964 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Nov 1 00:25:50.634409 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 1 00:25:50.637843 systemd[1]: Reached target sockets.target - Socket Units. Nov 1 00:25:50.639570 systemd[1]: Reached target basic.target - Basic System. Nov 1 00:25:50.641460 systemd[1]: System is tainted: cgroupsv1 Nov 1 00:25:50.641515 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 1 00:25:50.641547 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 1 00:25:50.642993 systemd[1]: Starting containerd.service - containerd container runtime... Nov 1 00:25:50.646012 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Nov 1 00:25:50.649370 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 1 00:25:50.653197 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 1 00:25:50.657240 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 1 00:25:50.659207 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). 
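systemd-timesyncd reached 10.0.0.1:123, most likely the NTP server advertised in the DHCP lease, and stepped the clock, which is why systemd-resolved reports "Clock change detected" and subsequent timestamps jump forward by nearly a second. Pinning a server statically uses the usual config file (illustrative; here the address arrived via DHCP):

    # /etc/systemd/timesyncd.conf
    [Time]
    NTP=10.0.0.1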
Nov 1 00:25:50.664121 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:25:50.667239 jq[1546]: false Nov 1 00:25:50.670169 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 1 00:25:50.675198 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 1 00:25:50.677841 extend-filesystems[1547]: Found loop3 Nov 1 00:25:50.684714 extend-filesystems[1547]: Found loop4 Nov 1 00:25:50.684714 extend-filesystems[1547]: Found loop5 Nov 1 00:25:50.684714 extend-filesystems[1547]: Found sr0 Nov 1 00:25:50.684714 extend-filesystems[1547]: Found vda Nov 1 00:25:50.684714 extend-filesystems[1547]: Found vda1 Nov 1 00:25:50.684714 extend-filesystems[1547]: Found vda2 Nov 1 00:25:50.684714 extend-filesystems[1547]: Found vda3 Nov 1 00:25:50.684714 extend-filesystems[1547]: Found usr Nov 1 00:25:50.684714 extend-filesystems[1547]: Found vda4 Nov 1 00:25:50.684714 extend-filesystems[1547]: Found vda6 Nov 1 00:25:50.684714 extend-filesystems[1547]: Found vda7 Nov 1 00:25:50.684714 extend-filesystems[1547]: Found vda9 Nov 1 00:25:50.684714 extend-filesystems[1547]: Checking size of /dev/vda9 Nov 1 00:25:50.685073 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 1 00:25:50.692160 dbus-daemon[1544]: [system] SELinux support is enabled Nov 1 00:25:50.696643 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 1 00:25:50.714346 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1254) Nov 1 00:25:50.714136 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 1 00:25:50.717141 extend-filesystems[1547]: Resized partition /dev/vda9 Nov 1 00:25:50.729571 extend-filesystems[1568]: resize2fs 1.47.1 (20-May-2024) Nov 1 00:25:50.752258 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 1 00:25:50.754772 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 1 00:25:50.759202 systemd[1]: Starting update-engine.service - Update Engine... Nov 1 00:25:50.783408 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 1 00:25:50.784327 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Nov 1 00:25:50.787551 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 1 00:25:50.793558 jq[1583]: true Nov 1 00:25:50.797464 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 1 00:25:50.797828 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 1 00:25:50.803819 systemd[1]: motdgen.service: Deactivated successfully. Nov 1 00:25:50.804170 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 1 00:25:50.806666 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 1 00:25:50.809979 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 1 00:25:50.810305 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 1 00:25:50.824612 jq[1591]: true Nov 1 00:25:50.830471 (ntainerd)[1592]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 1 00:25:50.837896 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 1 00:25:50.838312 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
Nov 1 00:25:50.881897 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 1 00:25:50.882007 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 1 00:25:50.882055 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 1 00:25:50.883527 update_engine[1580]: I20251101 00:25:50.882681 1580 main.cc:92] Flatcar Update Engine starting Nov 1 00:25:50.884014 update_engine[1580]: I20251101 00:25:50.883975 1580 update_check_scheduler.cc:74] Next update check in 11m18s Nov 1 00:25:50.884662 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 1 00:25:50.884689 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 1 00:25:50.886827 systemd[1]: Started update-engine.service - Update Engine. Nov 1 00:25:50.889320 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 1 00:25:50.902151 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 1 00:25:51.013415 locksmithd[1623]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 1 00:25:51.049515 systemd-logind[1577]: Watching system buttons on /dev/input/event1 (Power Button) Nov 1 00:25:51.049545 systemd-logind[1577]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 1 00:25:51.050063 systemd-logind[1577]: New seat seat0. Nov 1 00:25:51.051810 systemd[1]: Started systemd-logind.service - User Login Management. Nov 1 00:25:51.059551 tar[1590]: linux-amd64/LICENSE Nov 1 00:25:51.059551 tar[1590]: linux-amd64/helm Nov 1 00:25:51.059893 sshd_keygen[1581]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 1 00:25:51.085100 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 1 00:25:51.121349 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 1 00:25:51.132484 systemd[1]: issuegen.service: Deactivated successfully. Nov 1 00:25:51.132912 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 1 00:25:51.141915 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 1 00:25:51.253052 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Nov 1 00:25:51.261890 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 1 00:25:51.283493 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 1 00:25:51.302693 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 1 00:25:52.168757 containerd[1592]: time="2025-11-01T00:25:52.168553292Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Nov 1 00:25:51.304776 systemd[1]: Reached target getty.target - Login Prompts. Nov 1 00:25:52.195714 extend-filesystems[1568]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 1 00:25:52.195714 extend-filesystems[1568]: old_desc_blocks = 1, new_desc_blocks = 1 Nov 1 00:25:52.195714 extend-filesystems[1568]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. 
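The resize figures check out: ext4 is using 4 KiB blocks here, so the filesystem grows from 553472 blocks (2162 MiB, about 2.1 GiB) to 1864699 blocks (7283 MiB, about 7.1 GiB):

  echo "$(( 553472  * 4 / 1024 )) MiB"   # old size: 2162 MiB
  echo "$(( 1864699 * 4 / 1024 )) MiB"   # new size: 7283 MiB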
Nov 1 00:25:52.201912 extend-filesystems[1547]: Resized filesystem in /dev/vda9 Nov 1 00:25:52.205201 containerd[1592]: time="2025-11-01T00:25:52.203218069Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:25:52.206202 containerd[1592]: time="2025-11-01T00:25:52.206169763Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:25:52.206202 containerd[1592]: time="2025-11-01T00:25:52.206201944Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 1 00:25:52.206264 containerd[1592]: time="2025-11-01T00:25:52.206218064Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 1 00:25:52.206438 containerd[1592]: time="2025-11-01T00:25:52.206413590Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Nov 1 00:25:52.206438 containerd[1592]: time="2025-11-01T00:25:52.206433438Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Nov 1 00:25:52.206520 containerd[1592]: time="2025-11-01T00:25:52.206503940Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:25:52.206544 containerd[1592]: time="2025-11-01T00:25:52.206519779Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:25:52.206806 containerd[1592]: time="2025-11-01T00:25:52.206777423Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:25:52.206806 containerd[1592]: time="2025-11-01T00:25:52.206793583Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 1 00:25:52.206865 containerd[1592]: time="2025-11-01T00:25:52.206806617Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:25:52.206865 containerd[1592]: time="2025-11-01T00:25:52.206816496Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 1 00:25:52.207516 containerd[1592]: time="2025-11-01T00:25:52.206931521Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:25:52.207516 containerd[1592]: time="2025-11-01T00:25:52.207232636Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:25:52.207516 containerd[1592]: time="2025-11-01T00:25:52.207412894Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:25:52.207516 containerd[1592]: time="2025-11-01T00:25:52.207425698Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 1 00:25:52.207626 containerd[1592]: time="2025-11-01T00:25:52.207519354Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 1 00:25:52.207626 containerd[1592]: time="2025-11-01T00:25:52.207583825Z" level=info msg="metadata content store policy set" policy=shared Nov 1 00:25:52.207526 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 1 00:25:52.209416 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 1 00:25:52.473357 tar[1590]: linux-amd64/README.md Nov 1 00:25:52.491740 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 1 00:25:52.778910 bash[1624]: Updated "/home/core/.ssh/authorized_keys" Nov 1 00:25:52.781342 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 1 00:25:52.788488 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Nov 1 00:25:52.934409 containerd[1592]: time="2025-11-01T00:25:52.934325826Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 1 00:25:52.934409 containerd[1592]: time="2025-11-01T00:25:52.934405605Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 1 00:25:52.934409 containerd[1592]: time="2025-11-01T00:25:52.934424090Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Nov 1 00:25:52.934611 containerd[1592]: time="2025-11-01T00:25:52.934440901Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Nov 1 00:25:52.934611 containerd[1592]: time="2025-11-01T00:25:52.934458094Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 1 00:25:52.934688 containerd[1592]: time="2025-11-01T00:25:52.934665723Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 1 00:25:52.935137 containerd[1592]: time="2025-11-01T00:25:52.935074349Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 1 00:25:52.935354 containerd[1592]: time="2025-11-01T00:25:52.935320491Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Nov 1 00:25:52.935354 containerd[1592]: time="2025-11-01T00:25:52.935347201Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Nov 1 00:25:52.935405 containerd[1592]: time="2025-11-01T00:25:52.935363040Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Nov 1 00:25:52.935405 containerd[1592]: time="2025-11-01T00:25:52.935379631Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 1 00:25:52.935405 containerd[1592]: time="2025-11-01T00:25:52.935394359Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Nov 1 00:25:52.935480 containerd[1592]: time="2025-11-01T00:25:52.935416721Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 1 00:25:52.935480 containerd[1592]: time="2025-11-01T00:25:52.935435877Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 1 00:25:52.935480 containerd[1592]: time="2025-11-01T00:25:52.935454452Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 1 00:25:52.935480 containerd[1592]: time="2025-11-01T00:25:52.935468167Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 1 00:25:52.935560 containerd[1592]: time="2025-11-01T00:25:52.935484067Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 1 00:25:52.935560 containerd[1592]: time="2025-11-01T00:25:52.935499456Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 1 00:25:52.935560 containerd[1592]: time="2025-11-01T00:25:52.935523161Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 1 00:25:52.935560 containerd[1592]: time="2025-11-01T00:25:52.935539541Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 1 00:25:52.935560 containerd[1592]: time="2025-11-01T00:25:52.935553377Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 1 00:25:52.935664 containerd[1592]: time="2025-11-01T00:25:52.935568796Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 1 00:25:52.935664 containerd[1592]: time="2025-11-01T00:25:52.935583674Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 1 00:25:52.935664 containerd[1592]: time="2025-11-01T00:25:52.935606567Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 1 00:25:52.935664 containerd[1592]: time="2025-11-01T00:25:52.935620233Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 1 00:25:52.935664 containerd[1592]: time="2025-11-01T00:25:52.935637745Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 1 00:25:52.935664 containerd[1592]: time="2025-11-01T00:25:52.935658064Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Nov 1 00:25:52.935795 containerd[1592]: time="2025-11-01T00:25:52.935683982Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Nov 1 00:25:52.935795 containerd[1592]: time="2025-11-01T00:25:52.935699912Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 1 00:25:52.935795 containerd[1592]: time="2025-11-01T00:25:52.935714069Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Nov 1 00:25:52.935795 containerd[1592]: time="2025-11-01T00:25:52.935728446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Nov 1 00:25:52.935795 containerd[1592]: time="2025-11-01T00:25:52.935749705Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 1 00:25:52.935795 containerd[1592]: time="2025-11-01T00:25:52.935778319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Nov 1 00:25:52.935795 containerd[1592]: time="2025-11-01T00:25:52.935793267Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 1 00:25:52.935926 containerd[1592]: time="2025-11-01T00:25:52.935806512Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 1 00:25:52.935926 containerd[1592]: time="2025-11-01T00:25:52.935868729Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 1 00:25:52.935926 containerd[1592]: time="2025-11-01T00:25:52.935888435Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 1 00:25:52.935926 containerd[1592]: time="2025-11-01T00:25:52.935900538Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 1 00:25:52.935926 containerd[1592]: time="2025-11-01T00:25:52.935914715Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 1 00:25:52.935926 containerd[1592]: time="2025-11-01T00:25:52.935926968Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 1 00:25:52.936068 containerd[1592]: time="2025-11-01T00:25:52.935963396Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 1 00:25:52.936068 containerd[1592]: time="2025-11-01T00:25:52.935983614Z" level=info msg="NRI interface is disabled by configuration." Nov 1 00:25:52.936068 containerd[1592]: time="2025-11-01T00:25:52.935994985Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Nov 1 00:25:52.936438 containerd[1592]: time="2025-11-01T00:25:52.936353758Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 1 00:25:52.936438 containerd[1592]: time="2025-11-01T00:25:52.936429340Z" level=info msg="Connect containerd service" Nov 1 00:25:52.936618 containerd[1592]: time="2025-11-01T00:25:52.936472721Z" level=info msg="using legacy CRI server" Nov 1 00:25:52.936618 containerd[1592]: time="2025-11-01T00:25:52.936482028Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 1 00:25:52.938423 containerd[1592]: time="2025-11-01T00:25:52.937650600Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 1 00:25:52.938759 containerd[1592]: time="2025-11-01T00:25:52.938719243Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 1 00:25:52.940457 
containerd[1592]: time="2025-11-01T00:25:52.939919103Z" level=info msg="Start subscribing containerd event" Nov 1 00:25:52.940457 containerd[1592]: time="2025-11-01T00:25:52.939983013Z" level=info msg="Start recovering state" Nov 1 00:25:52.942487 containerd[1592]: time="2025-11-01T00:25:52.942005415Z" level=info msg="Start event monitor" Nov 1 00:25:52.942487 containerd[1592]: time="2025-11-01T00:25:52.942046201Z" level=info msg="Start snapshots syncer" Nov 1 00:25:52.942487 containerd[1592]: time="2025-11-01T00:25:52.942059045Z" level=info msg="Start cni network conf syncer for default" Nov 1 00:25:52.942487 containerd[1592]: time="2025-11-01T00:25:52.942066620Z" level=info msg="Start streaming server" Nov 1 00:25:52.942487 containerd[1592]: time="2025-11-01T00:25:52.942266044Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 1 00:25:52.942487 containerd[1592]: time="2025-11-01T00:25:52.942372152Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 1 00:25:52.942487 containerd[1592]: time="2025-11-01T00:25:52.942435792Z" level=info msg="containerd successfully booted in 1.474818s" Nov 1 00:25:52.942580 systemd[1]: Started containerd.service - containerd container runtime. Nov 1 00:25:54.129702 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:25:54.132508 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 1 00:25:54.135485 systemd[1]: Startup finished in 11.905s (kernel) + 8.647s (userspace) = 20.553s. Nov 1 00:25:54.140234 (kubelet)[1677]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 1 00:25:54.776545 kubelet[1677]: E1101 00:25:54.776394 1677 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:25:54.781251 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:25:54.781600 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:26:00.028696 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 1 00:26:00.042383 systemd[1]: Started sshd@0-10.0.0.119:22-10.0.0.1:38238.service - OpenSSH per-connection server daemon (10.0.0.1:38238). Nov 1 00:26:00.082541 sshd[1691]: Accepted publickey for core from 10.0.0.1 port 38238 ssh2: RSA SHA256:jhdmrhZXRbwOcBbBZQf03C3Vb9eSOfvUjDz7MOKA8jE Nov 1 00:26:00.084883 sshd[1691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:26:00.094117 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 1 00:26:00.115485 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 1 00:26:00.117906 systemd-logind[1577]: New session 1 of user core. Nov 1 00:26:00.131407 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 1 00:26:00.139686 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 1 00:26:00.143383 (systemd)[1697]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:26:00.265338 systemd[1697]: Queued start job for default target default.target. Nov 1 00:26:00.265782 systemd[1697]: Created slice app.slice - User Application Slice. 
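Two threads in the stretch above deserve unpacking. First, containerd's plugin probe keeps only the snapshotters the host can actually back: aufs has no module on this 6.6 kernel, and the btrfs/zfs snapshotters refuse to run because /var/lib/containerd sits on ext4, so overlayfs wins by elimination. The CRI config dump then shows SystemdCgroup:false (consistent with the cgroupfs driver and the cgroup v1 taint logged earlier) and a pause:3.8 sandbox image. All of these are settable in /etc/containerd/config.toml (a sketch assuming containerd 1.7's v2 config layout, which matches the version in this log):

  sudo mkdir -p /etc/containerd
  cat <<'EOF' | sudo tee /etc/containerd/config.toml
  version = 2
  [plugins."io.containerd.grpc.v1.cri"]
    sandbox_image = "registry.k8s.io/pause:3.10"
  [plugins."io.containerd.grpc.v1.cri".containerd]
    snapshotter = "overlayfs"
  EOF
  sudo systemctl restart containerd

Second, the kubelet dies seconds after starting because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-provisioned node that file is written by kubeadm init or kubeadm join, so the failure will simply repeat until the node is bootstrapped. Purely to show the shape of what it is looking for (illustrative values, not this host's eventual config):

  cat <<'EOF' | sudo tee /var/lib/kubelet/config.yaml
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  cgroupDriver: cgroupfs                    # matches the driver in this log
  staticPodPath: /etc/kubernetes/manifests  # where control-plane pods land
  EOF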
Nov 1 00:26:00.265814 systemd[1697]: Reached target paths.target - Paths. Nov 1 00:26:00.265831 systemd[1697]: Reached target timers.target - Timers. Nov 1 00:26:00.278129 systemd[1697]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 1 00:26:00.285865 systemd[1697]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 1 00:26:00.285985 systemd[1697]: Reached target sockets.target - Sockets. Nov 1 00:26:00.286003 systemd[1697]: Reached target basic.target - Basic System. Nov 1 00:26:00.286113 systemd[1697]: Reached target default.target - Main User Target. Nov 1 00:26:00.286167 systemd[1697]: Startup finished in 135ms. Nov 1 00:26:00.286775 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 1 00:26:00.288847 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 1 00:26:00.351334 systemd[1]: Started sshd@1-10.0.0.119:22-10.0.0.1:38250.service - OpenSSH per-connection server daemon (10.0.0.1:38250). Nov 1 00:26:00.385737 sshd[1709]: Accepted publickey for core from 10.0.0.1 port 38250 ssh2: RSA SHA256:jhdmrhZXRbwOcBbBZQf03C3Vb9eSOfvUjDz7MOKA8jE Nov 1 00:26:00.387995 sshd[1709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:26:00.393015 systemd-logind[1577]: New session 2 of user core. Nov 1 00:26:00.402338 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 1 00:26:00.457573 sshd[1709]: pam_unix(sshd:session): session closed for user core Nov 1 00:26:00.467258 systemd[1]: Started sshd@2-10.0.0.119:22-10.0.0.1:38260.service - OpenSSH per-connection server daemon (10.0.0.1:38260). Nov 1 00:26:00.467724 systemd[1]: sshd@1-10.0.0.119:22-10.0.0.1:38250.service: Deactivated successfully. Nov 1 00:26:00.470208 systemd-logind[1577]: Session 2 logged out. Waiting for processes to exit. Nov 1 00:26:00.471699 systemd[1]: session-2.scope: Deactivated successfully. Nov 1 00:26:00.472441 systemd-logind[1577]: Removed session 2. Nov 1 00:26:00.498039 sshd[1714]: Accepted publickey for core from 10.0.0.1 port 38260 ssh2: RSA SHA256:jhdmrhZXRbwOcBbBZQf03C3Vb9eSOfvUjDz7MOKA8jE Nov 1 00:26:00.499718 sshd[1714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:26:00.504637 systemd-logind[1577]: New session 3 of user core. Nov 1 00:26:00.518497 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 1 00:26:00.570419 sshd[1714]: pam_unix(sshd:session): session closed for user core Nov 1 00:26:00.581472 systemd[1]: Started sshd@3-10.0.0.119:22-10.0.0.1:38268.service - OpenSSH per-connection server daemon (10.0.0.1:38268). Nov 1 00:26:00.582180 systemd[1]: sshd@2-10.0.0.119:22-10.0.0.1:38260.service: Deactivated successfully. Nov 1 00:26:00.586463 systemd[1]: session-3.scope: Deactivated successfully. Nov 1 00:26:00.587484 systemd-logind[1577]: Session 3 logged out. Waiting for processes to exit. Nov 1 00:26:00.588486 systemd-logind[1577]: Removed session 3. Nov 1 00:26:00.614750 sshd[1722]: Accepted publickey for core from 10.0.0.1 port 38268 ssh2: RSA SHA256:jhdmrhZXRbwOcBbBZQf03C3Vb9eSOfvUjDz7MOKA8jE Nov 1 00:26:00.616533 sshd[1722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:26:00.621051 systemd-logind[1577]: New session 4 of user core. Nov 1 00:26:00.630383 systemd[1]: Started session-4.scope - Session 4 of User core. 
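Each SSH login above shows the PAM-to-systemd handoff: sshd authenticates the key, pam_systemd asks PID 1 for a session scope (session-1.scope) inside user-500.slice, and the first login also starts user@500.service, the per-user manager whose own target tree ("Queued start job for default target") is what the systemd[1697] lines narrate. The resulting layout can be inspected with (a sketch):

  loginctl list-sessions             # one row per session-N.scope
  systemctl status user@500.service  # the per-user manager started above
  systemd-cgls /user.slice           # slices, scopes, and the user manager as a tree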
Nov 1 00:26:00.687251 sshd[1722]: pam_unix(sshd:session): session closed for user core Nov 1 00:26:00.695487 systemd[1]: Started sshd@4-10.0.0.119:22-10.0.0.1:38278.service - OpenSSH per-connection server daemon (10.0.0.1:38278). Nov 1 00:26:00.696220 systemd[1]: sshd@3-10.0.0.119:22-10.0.0.1:38268.service: Deactivated successfully. Nov 1 00:26:00.700158 systemd-logind[1577]: Session 4 logged out. Waiting for processes to exit. Nov 1 00:26:00.701321 systemd[1]: session-4.scope: Deactivated successfully. Nov 1 00:26:00.704107 systemd-logind[1577]: Removed session 4. Nov 1 00:26:00.727206 sshd[1730]: Accepted publickey for core from 10.0.0.1 port 38278 ssh2: RSA SHA256:jhdmrhZXRbwOcBbBZQf03C3Vb9eSOfvUjDz7MOKA8jE Nov 1 00:26:00.729068 sshd[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:26:00.733812 systemd-logind[1577]: New session 5 of user core. Nov 1 00:26:00.743457 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 1 00:26:00.802583 sudo[1737]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 1 00:26:00.802986 sudo[1737]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 00:26:00.820648 sudo[1737]: pam_unix(sudo:session): session closed for user root Nov 1 00:26:00.822708 sshd[1730]: pam_unix(sshd:session): session closed for user core Nov 1 00:26:00.831268 systemd[1]: Started sshd@5-10.0.0.119:22-10.0.0.1:38288.service - OpenSSH per-connection server daemon (10.0.0.1:38288). Nov 1 00:26:00.831802 systemd[1]: sshd@4-10.0.0.119:22-10.0.0.1:38278.service: Deactivated successfully. Nov 1 00:26:00.833735 systemd[1]: session-5.scope: Deactivated successfully. Nov 1 00:26:00.834496 systemd-logind[1577]: Session 5 logged out. Waiting for processes to exit. Nov 1 00:26:00.836132 systemd-logind[1577]: Removed session 5. Nov 1 00:26:00.862340 sshd[1740]: Accepted publickey for core from 10.0.0.1 port 38288 ssh2: RSA SHA256:jhdmrhZXRbwOcBbBZQf03C3Vb9eSOfvUjDz7MOKA8jE Nov 1 00:26:00.864158 sshd[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:26:00.868596 systemd-logind[1577]: New session 6 of user core. Nov 1 00:26:00.879298 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 1 00:26:00.934496 sudo[1747]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 1 00:26:00.934840 sudo[1747]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 00:26:00.938663 sudo[1747]: pam_unix(sudo:session): session closed for user root Nov 1 00:26:00.945160 sudo[1746]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 1 00:26:00.945530 sudo[1746]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 00:26:00.968243 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Nov 1 00:26:00.970402 auditctl[1750]: No rules Nov 1 00:26:00.971867 systemd[1]: audit-rules.service: Deactivated successfully. Nov 1 00:26:00.972278 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 1 00:26:00.974291 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 1 00:26:01.008858 augenrules[1769]: No rules Nov 1 00:26:01.010929 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
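The audit-rules dance above is the augenrules pipeline: the sudo commands delete two rule fragments from /etc/audit/rules.d/, and restarting audit-rules.service recompiles whatever remains, which ends up empty ("No rules"). The manual equivalent:

  sudo augenrules --load   # concatenate /etc/audit/rules.d/*.rules and load them
  sudo auditctl -l         # list the active kernel rules; prints "No rules" here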
Nov 1 00:26:01.013002 sudo[1746]: pam_unix(sudo:session): session closed for user root Nov 1 00:26:01.015375 sshd[1740]: pam_unix(sshd:session): session closed for user core Nov 1 00:26:01.025305 systemd[1]: Started sshd@6-10.0.0.119:22-10.0.0.1:38290.service - OpenSSH per-connection server daemon (10.0.0.1:38290). Nov 1 00:26:01.025880 systemd[1]: sshd@5-10.0.0.119:22-10.0.0.1:38288.service: Deactivated successfully. Nov 1 00:26:01.029581 systemd[1]: session-6.scope: Deactivated successfully. Nov 1 00:26:01.030814 systemd-logind[1577]: Session 6 logged out. Waiting for processes to exit. Nov 1 00:26:01.032045 systemd-logind[1577]: Removed session 6. Nov 1 00:26:01.056322 sshd[1775]: Accepted publickey for core from 10.0.0.1 port 38290 ssh2: RSA SHA256:jhdmrhZXRbwOcBbBZQf03C3Vb9eSOfvUjDz7MOKA8jE Nov 1 00:26:01.057948 sshd[1775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:26:01.062268 systemd-logind[1577]: New session 7 of user core. Nov 1 00:26:01.076323 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 1 00:26:01.130750 sudo[1782]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 1 00:26:01.131160 sudo[1782]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 00:26:01.661231 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 1 00:26:01.661501 (dockerd)[1800]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 1 00:26:02.355883 dockerd[1800]: time="2025-11-01T00:26:02.355789057Z" level=info msg="Starting up" Nov 1 00:26:03.309504 dockerd[1800]: time="2025-11-01T00:26:03.309449931Z" level=info msg="Loading containers: start." Nov 1 00:26:03.445049 kernel: Initializing XFRM netlink socket Nov 1 00:26:03.529270 systemd-networkd[1249]: docker0: Link UP Nov 1 00:26:03.553623 dockerd[1800]: time="2025-11-01T00:26:03.553583180Z" level=info msg="Loading containers: done." Nov 1 00:26:03.585973 dockerd[1800]: time="2025-11-01T00:26:03.585836195Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 1 00:26:03.586156 dockerd[1800]: time="2025-11-01T00:26:03.585974815Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 1 00:26:03.586199 dockerd[1800]: time="2025-11-01T00:26:03.586156485Z" level=info msg="Daemon has completed initialization" Nov 1 00:26:03.628191 dockerd[1800]: time="2025-11-01T00:26:03.628116831Z" level=info msg="API listen on /run/docker.sock" Nov 1 00:26:03.628342 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 1 00:26:04.698232 containerd[1592]: time="2025-11-01T00:26:04.698185509Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Nov 1 00:26:05.031680 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 1 00:26:05.046210 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:26:05.275844 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
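Docker's "Not using native diff" line is informational, not an error: with CONFIG_OVERLAY_FS_REDIRECT_DIR enabled in the kernel, overlay2's fast native diffing is unsafe, so image builds fall back to the slower generic differ while running containers are unaffected. It can be confirmed from docker info (field names as rendered by the Docker 26.1.0 in this log):

  docker info --format '{{.Driver}}'             # overlay2
  docker info --format '{{json .DriverStatus}}'  # includes ["Native Overlay Diff","false"]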
Nov 1 00:26:05.282325 (kubelet)[1961]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 1 00:26:05.348341 kubelet[1961]: E1101 00:26:05.348274 1961 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:26:05.355342 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:26:05.355701 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:26:05.842970 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3272428973.mount: Deactivated successfully. Nov 1 00:26:07.374956 containerd[1592]: time="2025-11-01T00:26:07.374860725Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:26:07.386517 containerd[1592]: time="2025-11-01T00:26:07.386416632Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837916" Nov 1 00:26:07.389336 containerd[1592]: time="2025-11-01T00:26:07.389287315Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:26:07.392942 containerd[1592]: time="2025-11-01T00:26:07.392912161Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:26:07.394241 containerd[1592]: time="2025-11-01T00:26:07.394207951Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 2.695981384s" Nov 1 00:26:07.394295 containerd[1592]: time="2025-11-01T00:26:07.394251232Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Nov 1 00:26:07.395070 containerd[1592]: time="2025-11-01T00:26:07.395037697Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Nov 1 00:26:08.722015 containerd[1592]: time="2025-11-01T00:26:08.721942342Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:26:08.722753 containerd[1592]: time="2025-11-01T00:26:08.722707948Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787027" Nov 1 00:26:08.724057 containerd[1592]: time="2025-11-01T00:26:08.723999049Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:26:08.726954 containerd[1592]: time="2025-11-01T00:26:08.726917671Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:26:08.728246 containerd[1592]: time="2025-11-01T00:26:08.728183515Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 1.333116243s" Nov 1 00:26:08.728246 containerd[1592]: time="2025-11-01T00:26:08.728227758Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Nov 1 00:26:08.728732 containerd[1592]: time="2025-11-01T00:26:08.728661551Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Nov 1 00:26:10.430480 containerd[1592]: time="2025-11-01T00:26:10.430391604Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:26:10.431312 containerd[1592]: time="2025-11-01T00:26:10.431238562Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176289" Nov 1 00:26:10.432672 containerd[1592]: time="2025-11-01T00:26:10.432635021Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:26:10.436000 containerd[1592]: time="2025-11-01T00:26:10.435928907Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:26:10.437376 containerd[1592]: time="2025-11-01T00:26:10.437334462Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 1.708633237s" Nov 1 00:26:10.437433 containerd[1592]: time="2025-11-01T00:26:10.437374347Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Nov 1 00:26:10.438241 containerd[1592]: time="2025-11-01T00:26:10.438217688Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Nov 1 00:26:12.681052 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount889452527.mount: Deactivated successfully. 
Nov 1 00:26:13.824185 containerd[1592]: time="2025-11-01T00:26:13.824075411Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:26:13.830300 containerd[1592]: time="2025-11-01T00:26:13.830169246Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924206" Nov 1 00:26:13.831563 containerd[1592]: time="2025-11-01T00:26:13.831521732Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:26:13.834041 containerd[1592]: time="2025-11-01T00:26:13.833969121Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:26:13.834789 containerd[1592]: time="2025-11-01T00:26:13.834734486Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 3.396485249s" Nov 1 00:26:13.834789 containerd[1592]: time="2025-11-01T00:26:13.834774421Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Nov 1 00:26:13.835493 containerd[1592]: time="2025-11-01T00:26:13.835459886Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Nov 1 00:26:14.361668 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2757473589.mount: Deactivated successfully. Nov 1 00:26:15.389712 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 1 00:26:15.459811 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:26:15.738206 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:26:15.752714 (kubelet)[2110]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 1 00:26:15.941355 kubelet[2110]: E1101 00:26:15.941264 2110 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:26:15.945831 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:26:15.946170 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
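"Scheduled restart job, restart counter is at 2" is systemd's Restart= machinery: every kubelet attempt dies on the missing config file, and the attempts land roughly ten seconds apart (00:25:54, 00:26:05, 00:26:15), consistent with the Restart=always / RestartSec=10 pairing that kubeadm-style kubelet units conventionally carry. A sketch of that shape as a drop-in (illustrative, not this host's verbatim unit):

  cat <<'EOF' | sudo tee /etc/systemd/system/kubelet.service.d/10-restart.conf
  [Service]
  Restart=always
  RestartSec=10
  EOF
  sudo systemctl daemon-reload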
Nov 1 00:26:16.973855 containerd[1592]: time="2025-11-01T00:26:16.973759818Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:26:16.987711 containerd[1592]: time="2025-11-01T00:26:16.987607192Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Nov 1 00:26:16.989985 containerd[1592]: time="2025-11-01T00:26:16.989858824Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:26:16.999801 containerd[1592]: time="2025-11-01T00:26:16.999672765Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:26:17.001559 containerd[1592]: time="2025-11-01T00:26:17.001500241Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 3.165996844s" Nov 1 00:26:17.001559 containerd[1592]: time="2025-11-01T00:26:17.001546799Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Nov 1 00:26:17.002962 containerd[1592]: time="2025-11-01T00:26:17.002903973Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 1 00:26:17.678706 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount914960463.mount: Deactivated successfully. 
Nov 1 00:26:17.686832 containerd[1592]: time="2025-11-01T00:26:17.686765604Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:26:17.687976 containerd[1592]: time="2025-11-01T00:26:17.687889421Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Nov 1 00:26:17.694368 containerd[1592]: time="2025-11-01T00:26:17.694279793Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:26:17.700823 containerd[1592]: time="2025-11-01T00:26:17.700755384Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:26:17.702375 containerd[1592]: time="2025-11-01T00:26:17.702329726Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 699.390126ms" Nov 1 00:26:17.702375 containerd[1592]: time="2025-11-01T00:26:17.702367788Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 1 00:26:17.703291 containerd[1592]: time="2025-11-01T00:26:17.703248800Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Nov 1 00:26:21.185992 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1688331988.mount: Deactivated successfully. Nov 1 00:26:24.205442 containerd[1592]: time="2025-11-01T00:26:24.205345897Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:26:24.206995 containerd[1592]: time="2025-11-01T00:26:24.206175096Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Nov 1 00:26:24.207701 containerd[1592]: time="2025-11-01T00:26:24.207628530Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:26:24.211554 containerd[1592]: time="2025-11-01T00:26:24.211474408Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:26:24.213234 containerd[1592]: time="2025-11-01T00:26:24.213182292Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 6.509894268s" Nov 1 00:26:24.213234 containerd[1592]: time="2025-11-01T00:26:24.213232019Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Nov 1 00:26:26.139695 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
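The two pulls above bracket the size spectrum nicely: pause:3.10 is ~320 KB and arrives in 699 ms, while etcd:3.5.16-0 is ~57 MB and takes 6.5 s. The pause image matters out of all proportion to its size: one sleeping pause process per pod holds the pod's namespaces open so the real containers can come and go. Both now sit in containerd's k8s.io namespace:

  sudo ctr -n k8s.io images ls | grep -E 'pause|etcd'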
Nov 1 00:26:26.153262 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:26:26.324933 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:26:26.331516 (kubelet)[2212]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 1 00:26:26.390003 kubelet[2212]: E1101 00:26:26.389743 2212 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:26:26.395152 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:26:26.395464 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:26:26.823082 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:26:26.836229 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:26:26.868763 systemd[1]: Reloading requested from client PID 2230 ('systemctl') (unit session-7.scope)... Nov 1 00:26:26.868782 systemd[1]: Reloading... Nov 1 00:26:26.962163 zram_generator::config[2272]: No configuration found. Nov 1 00:26:28.325719 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:26:28.402737 systemd[1]: Reloading finished in 1533 ms. Nov 1 00:26:28.450607 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 1 00:26:28.450760 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 1 00:26:28.451216 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:26:28.454316 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:26:28.680429 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:26:28.686244 (kubelet)[2330]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 1 00:26:28.795880 kubelet[2330]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:26:28.795880 kubelet[2330]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 00:26:28.795880 kubelet[2330]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
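This time the kubelet starts with real arguments, and its first words are deprecation warnings: --container-runtime-endpoint and --volume-plugin-dir are supposed to move into the --config file. The v1beta1 schema carries matching fields, so the migration is mechanical (a sketch: the socket path comes from the containerd config dump earlier, the plugin dir from the Flexvolume line below, and the field names are my reading of kubelet.config.k8s.io/v1beta1):

  cat <<'EOF' | sudo tee -a /var/lib/kubelet/config.yaml
  containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
  volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
  EOF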
Nov 1 00:26:28.796349 kubelet[2330]: I1101 00:26:28.795952 2330 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 00:26:29.069477 kubelet[2330]: I1101 00:26:29.069333 2330 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 1 00:26:29.069477 kubelet[2330]: I1101 00:26:29.069374 2330 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 00:26:29.069700 kubelet[2330]: I1101 00:26:29.069669 2330 server.go:954] "Client rotation is on, will bootstrap in background" Nov 1 00:26:29.117627 kubelet[2330]: I1101 00:26:29.117532 2330 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 00:26:29.127567 kubelet[2330]: E1101 00:26:29.127520 2330 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.119:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.119:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:26:29.193875 kubelet[2330]: E1101 00:26:29.193829 2330 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 00:26:29.193875 kubelet[2330]: I1101 00:26:29.193875 2330 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 1 00:26:29.199816 kubelet[2330]: I1101 00:26:29.199761 2330 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 1 00:26:29.232198 kubelet[2330]: I1101 00:26:29.232113 2330 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 00:26:29.232467 kubelet[2330]: I1101 00:26:29.232196 2330 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Nov 1 00:26:29.232571 kubelet[2330]: I1101 00:26:29.232480 2330 topology_manager.go:138] "Creating topology manager with none policy" Nov 1 00:26:29.232571 kubelet[2330]: I1101 00:26:29.232494 2330 container_manager_linux.go:304] "Creating device plugin manager" Nov 1 00:26:29.232777 kubelet[2330]: I1101 00:26:29.232744 2330 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:26:30.689075 kubelet[2330]: I1101 00:26:30.688985 2330 kubelet.go:446] "Attempting to sync node with API server" Nov 1 00:26:30.689075 kubelet[2330]: I1101 00:26:30.689078 2330 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 00:26:30.689728 kubelet[2330]: I1101 00:26:30.689123 2330 kubelet.go:352] "Adding apiserver pod source" Nov 1 00:26:30.689728 kubelet[2330]: I1101 00:26:30.689145 2330 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 00:26:30.696313 kubelet[2330]: W1101 00:26:30.696132 2330 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.119:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused Nov 1 00:26:30.696313 kubelet[2330]: E1101 00:26:30.696224 2330 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.119:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.119:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:26:30.697151 kubelet[2330]: I1101 00:26:30.697117 2330 kuberuntime_manager.go:269] 
"Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 1 00:26:30.697490 kubelet[2330]: W1101 00:26:30.697435 2330 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.119:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused Nov 1 00:26:30.697546 kubelet[2330]: E1101 00:26:30.697495 2330 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.119:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.119:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:26:30.697639 kubelet[2330]: I1101 00:26:30.697615 2330 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 1 00:26:30.697738 kubelet[2330]: W1101 00:26:30.697712 2330 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 1 00:26:30.716831 kubelet[2330]: I1101 00:26:30.716789 2330 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 1 00:26:30.716831 kubelet[2330]: I1101 00:26:30.716835 2330 server.go:1287] "Started kubelet" Nov 1 00:26:30.716953 kubelet[2330]: I1101 00:26:30.716917 2330 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 00:26:30.718041 kubelet[2330]: I1101 00:26:30.718005 2330 server.go:479] "Adding debug handlers to kubelet server" Nov 1 00:26:30.718447 kubelet[2330]: I1101 00:26:30.718416 2330 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 00:26:30.720123 kubelet[2330]: I1101 00:26:30.719494 2330 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 00:26:30.720485 kubelet[2330]: I1101 00:26:30.720451 2330 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 00:26:30.722348 kubelet[2330]: I1101 00:26:30.722326 2330 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 00:26:30.725005 kubelet[2330]: E1101 00:26:30.724626 2330 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 00:26:30.725005 kubelet[2330]: I1101 00:26:30.724674 2330 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 1 00:26:30.725005 kubelet[2330]: I1101 00:26:30.724869 2330 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 1 00:26:30.725005 kubelet[2330]: I1101 00:26:30.724923 2330 reconciler.go:26] "Reconciler: start to sync state" Nov 1 00:26:30.725426 kubelet[2330]: W1101 00:26:30.725374 2330 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.119:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused Nov 1 00:26:30.725467 kubelet[2330]: E1101 00:26:30.725424 2330 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.119:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.119:6443: connect: connection refused" 
logger="UnhandledError" Nov 1 00:26:30.725685 kubelet[2330]: I1101 00:26:30.725648 2330 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 00:26:30.745741 kubelet[2330]: E1101 00:26:30.745621 2330 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.119:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.119:6443: connect: connection refused" interval="200ms" Nov 1 00:26:30.746082 kubelet[2330]: I1101 00:26:30.746059 2330 factory.go:221] Registration of the containerd container factory successfully Nov 1 00:26:30.746082 kubelet[2330]: I1101 00:26:30.746075 2330 factory.go:221] Registration of the systemd container factory successfully Nov 1 00:26:30.749199 kubelet[2330]: E1101 00:26:30.748380 2330 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 00:26:30.763434 kubelet[2330]: I1101 00:26:30.763370 2330 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 1 00:26:30.765196 kubelet[2330]: I1101 00:26:30.765167 2330 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 1 00:26:30.765244 kubelet[2330]: I1101 00:26:30.765210 2330 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 1 00:26:30.765291 kubelet[2330]: I1101 00:26:30.765243 2330 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 1 00:26:30.765291 kubelet[2330]: I1101 00:26:30.765251 2330 kubelet.go:2382] "Starting kubelet main sync loop" Nov 1 00:26:30.765337 kubelet[2330]: E1101 00:26:30.765315 2330 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 00:26:30.829478 kubelet[2330]: E1101 00:26:30.829434 2330 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 00:26:30.830120 kubelet[2330]: W1101 00:26:30.830089 2330 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.119:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused Nov 1 00:26:30.830182 kubelet[2330]: E1101 00:26:30.830133 2330 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.119:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.119:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:26:30.835813 kubelet[2330]: E1101 00:26:30.830054 2330 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.119:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.119:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1873ba5dac4344c1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-01 00:26:30.716810433 +0000 UTC 
m=+2.025818709,LastTimestamp:2025-11-01 00:26:30.716810433 +0000 UTC m=+2.025818709,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 1 00:26:30.845095 kubelet[2330]: I1101 00:26:30.845072 2330 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 00:26:30.845095 kubelet[2330]: I1101 00:26:30.845088 2330 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 00:26:30.845201 kubelet[2330]: I1101 00:26:30.845111 2330 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:26:30.866344 kubelet[2330]: E1101 00:26:30.866290 2330 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 1 00:26:30.911617 kubelet[2330]: I1101 00:26:30.911514 2330 policy_none.go:49] "None policy: Start" Nov 1 00:26:30.911617 kubelet[2330]: I1101 00:26:30.911592 2330 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 1 00:26:30.911617 kubelet[2330]: I1101 00:26:30.911630 2330 state_mem.go:35] "Initializing new in-memory state store" Nov 1 00:26:30.929966 kubelet[2330]: E1101 00:26:30.929875 2330 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 00:26:30.946678 kubelet[2330]: E1101 00:26:30.946538 2330 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.119:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.119:6443: connect: connection refused" interval="400ms" Nov 1 00:26:30.948030 kubelet[2330]: I1101 00:26:30.946909 2330 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 1 00:26:30.948030 kubelet[2330]: I1101 00:26:30.947158 2330 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 00:26:30.948030 kubelet[2330]: I1101 00:26:30.947176 2330 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 00:26:30.948143 kubelet[2330]: I1101 00:26:30.948090 2330 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 00:26:30.949232 kubelet[2330]: E1101 00:26:30.949187 2330 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 1 00:26:30.949232 kubelet[2330]: E1101 00:26:30.949239 2330 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 1 00:26:31.049567 kubelet[2330]: I1101 00:26:31.049507 2330 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 00:26:31.050085 kubelet[2330]: E1101 00:26:31.050052 2330 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.119:6443/api/v1/nodes\": dial tcp 10.0.0.119:6443: connect: connection refused" node="localhost" Nov 1 00:26:31.072683 kubelet[2330]: E1101 00:26:31.072630 2330 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:26:31.073649 kubelet[2330]: E1101 00:26:31.073615 2330 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:26:31.076398 kubelet[2330]: E1101 00:26:31.076362 2330 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:26:31.226966 kubelet[2330]: I1101 00:26:31.226797 2330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:26:31.226966 kubelet[2330]: I1101 00:26:31.226854 2330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:26:31.226966 kubelet[2330]: I1101 00:26:31.226881 2330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:26:31.226966 kubelet[2330]: I1101 00:26:31.226933 2330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e74c89f2f0a02ffcae2e791cafcbd486-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e74c89f2f0a02ffcae2e791cafcbd486\") " pod="kube-system/kube-apiserver-localhost" Nov 1 00:26:31.226966 kubelet[2330]: I1101 00:26:31.226952 2330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e74c89f2f0a02ffcae2e791cafcbd486-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e74c89f2f0a02ffcae2e791cafcbd486\") " pod="kube-system/kube-apiserver-localhost" Nov 1 00:26:31.227212 kubelet[2330]: I1101 00:26:31.226972 2330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod 
\"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:26:31.227212 kubelet[2330]: I1101 00:26:31.226991 2330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:26:31.227212 kubelet[2330]: I1101 00:26:31.227011 2330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Nov 1 00:26:31.227212 kubelet[2330]: I1101 00:26:31.227142 2330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e74c89f2f0a02ffcae2e791cafcbd486-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e74c89f2f0a02ffcae2e791cafcbd486\") " pod="kube-system/kube-apiserver-localhost" Nov 1 00:26:31.252043 kubelet[2330]: I1101 00:26:31.252003 2330 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 00:26:31.252441 kubelet[2330]: E1101 00:26:31.252401 2330 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.119:6443/api/v1/nodes\": dial tcp 10.0.0.119:6443: connect: connection refused" node="localhost" Nov 1 00:26:31.257272 kubelet[2330]: E1101 00:26:31.257237 2330 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.119:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.119:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:26:31.347647 kubelet[2330]: E1101 00:26:31.347583 2330 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.119:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.119:6443: connect: connection refused" interval="800ms" Nov 1 00:26:31.373825 kubelet[2330]: E1101 00:26:31.373791 2330 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:31.374066 kubelet[2330]: E1101 00:26:31.374034 2330 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:31.374674 containerd[1592]: time="2025-11-01T00:26:31.374489550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e74c89f2f0a02ffcae2e791cafcbd486,Namespace:kube-system,Attempt:0,}" Nov 1 00:26:31.374674 containerd[1592]: time="2025-11-01T00:26:31.374522663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,}" Nov 1 00:26:31.377643 kubelet[2330]: E1101 00:26:31.377613 2330 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:31.377938 containerd[1592]: time="2025-11-01T00:26:31.377884070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,}" Nov 1 00:26:31.581071 kubelet[2330]: W1101 00:26:31.580909 2330 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.119:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused Nov 1 00:26:31.581071 kubelet[2330]: E1101 00:26:31.580979 2330 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.119:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.119:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:26:31.654313 kubelet[2330]: I1101 00:26:31.654266 2330 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 00:26:31.654703 kubelet[2330]: E1101 00:26:31.654661 2330 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.119:6443/api/v1/nodes\": dial tcp 10.0.0.119:6443: connect: connection refused" node="localhost" Nov 1 00:26:31.764418 kubelet[2330]: W1101 00:26:31.764361 2330 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.119:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused Nov 1 00:26:31.764822 kubelet[2330]: E1101 00:26:31.764427 2330 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.119:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.119:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:26:32.020766 kubelet[2330]: W1101 00:26:32.020675 2330 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.119:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused Nov 1 00:26:32.020865 kubelet[2330]: E1101 00:26:32.020780 2330 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.119:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.119:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:26:32.149377 kubelet[2330]: E1101 00:26:32.149265 2330 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.119:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.119:6443: connect: connection refused" interval="1.6s" Nov 1 00:26:32.327691 kubelet[2330]: W1101 00:26:32.327511 2330 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.119:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused Nov 1 00:26:32.327691 kubelet[2330]: E1101 00:26:32.327580 2330 
reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.119:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.119:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:26:32.373545 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4162355856.mount: Deactivated successfully. Nov 1 00:26:32.380307 containerd[1592]: time="2025-11-01T00:26:32.380260851Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:26:32.381407 containerd[1592]: time="2025-11-01T00:26:32.381351673Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:26:32.382614 containerd[1592]: time="2025-11-01T00:26:32.382551620Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 1 00:26:32.383695 containerd[1592]: time="2025-11-01T00:26:32.383658973Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:26:32.384695 containerd[1592]: time="2025-11-01T00:26:32.384648129Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Nov 1 00:26:32.385710 containerd[1592]: time="2025-11-01T00:26:32.385669958Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 1 00:26:32.386875 containerd[1592]: time="2025-11-01T00:26:32.386824740Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:26:32.393470 containerd[1592]: time="2025-11-01T00:26:32.393037429Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:26:32.394380 containerd[1592]: time="2025-11-01T00:26:32.394330023Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.016386589s" Nov 1 00:26:32.395930 containerd[1592]: time="2025-11-01T00:26:32.395882304Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.02130684s" Nov 1 00:26:32.397472 containerd[1592]: time="2025-11-01T00:26:32.397441096Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.022874601s" Nov 1 00:26:32.457275 kubelet[2330]: I1101 00:26:32.457231 2330 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 00:26:32.457846 kubelet[2330]: E1101 00:26:32.457710 2330 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.119:6443/api/v1/nodes\": dial tcp 10.0.0.119:6443: connect: connection refused" node="localhost" Nov 1 00:26:32.688596 containerd[1592]: time="2025-11-01T00:26:32.688131730Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:26:32.688596 containerd[1592]: time="2025-11-01T00:26:32.688224406Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:26:32.688596 containerd[1592]: time="2025-11-01T00:26:32.688259754Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:26:32.689322 containerd[1592]: time="2025-11-01T00:26:32.689151134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:26:32.689322 containerd[1592]: time="2025-11-01T00:26:32.688944880Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:26:32.689322 containerd[1592]: time="2025-11-01T00:26:32.689057756Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:26:32.689322 containerd[1592]: time="2025-11-01T00:26:32.689076942Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:26:32.689322 containerd[1592]: time="2025-11-01T00:26:32.689212219Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:26:32.697424 containerd[1592]: time="2025-11-01T00:26:32.696897777Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:26:32.697424 containerd[1592]: time="2025-11-01T00:26:32.696968752Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:26:32.697424 containerd[1592]: time="2025-11-01T00:26:32.697060788Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:26:32.697424 containerd[1592]: time="2025-11-01T00:26:32.697170137Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:26:32.879115 containerd[1592]: time="2025-11-01T00:26:32.878957699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,} returns sandbox id \"35821c74a6d43b31cfcd5a45c35955615abb72bc4cdaf1e512a11bfd7bc14eb3\"" Nov 1 00:26:32.881152 kubelet[2330]: E1101 00:26:32.880478 2330 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:32.882878 containerd[1592]: time="2025-11-01T00:26:32.882845273Z" level=info msg="CreateContainer within sandbox \"35821c74a6d43b31cfcd5a45c35955615abb72bc4cdaf1e512a11bfd7bc14eb3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 1 00:26:32.884193 containerd[1592]: time="2025-11-01T00:26:32.884152566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,} returns sandbox id \"761e82a0aa5ae9dab91b9fea730ebed93daf0100952a3d039e49f2a1d1ea811d\"" Nov 1 00:26:32.884856 kubelet[2330]: E1101 00:26:32.884722 2330 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:32.886709 containerd[1592]: time="2025-11-01T00:26:32.886670658Z" level=info msg="CreateContainer within sandbox \"761e82a0aa5ae9dab91b9fea730ebed93daf0100952a3d039e49f2a1d1ea811d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 1 00:26:32.911853 containerd[1592]: time="2025-11-01T00:26:32.911765556Z" level=info msg="CreateContainer within sandbox \"35821c74a6d43b31cfcd5a45c35955615abb72bc4cdaf1e512a11bfd7bc14eb3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"056e19d765b5e42fd1228f8714ae69119800abcd1a47965fc56f9b11b893f0b4\"" Nov 1 00:26:32.912774 containerd[1592]: time="2025-11-01T00:26:32.912741869Z" level=info msg="StartContainer for \"056e19d765b5e42fd1228f8714ae69119800abcd1a47965fc56f9b11b893f0b4\"" Nov 1 00:26:32.968056 containerd[1592]: time="2025-11-01T00:26:32.967828767Z" level=info msg="CreateContainer within sandbox \"761e82a0aa5ae9dab91b9fea730ebed93daf0100952a3d039e49f2a1d1ea811d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b43efc51247115b2fde6805f9ee71884bf9b54e573bfb257446b9a37ec7f7f2b\"" Nov 1 00:26:32.968724 containerd[1592]: time="2025-11-01T00:26:32.968343768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e74c89f2f0a02ffcae2e791cafcbd486,Namespace:kube-system,Attempt:0,} returns sandbox id \"cabe1169473c303ed23b69fbd652fe2aaaebe0919975e710891e014e6cd1b240\"" Nov 1 00:26:32.968724 containerd[1592]: time="2025-11-01T00:26:32.968552567Z" level=info msg="StartContainer for \"b43efc51247115b2fde6805f9ee71884bf9b54e573bfb257446b9a37ec7f7f2b\"" Nov 1 00:26:32.970086 kubelet[2330]: E1101 00:26:32.970047 2330 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:32.974246 containerd[1592]: time="2025-11-01T00:26:32.974196270Z" level=info msg="CreateContainer within sandbox \"cabe1169473c303ed23b69fbd652fe2aaaebe0919975e710891e014e6cd1b240\" for container 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 1 00:26:32.996461 containerd[1592]: time="2025-11-01T00:26:32.996334881Z" level=info msg="CreateContainer within sandbox \"cabe1169473c303ed23b69fbd652fe2aaaebe0919975e710891e014e6cd1b240\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"93588bc8974ab31a009c7be1584c2c9fd2e7393132e0ebfd383f606a78b9abe7\"" Nov 1 00:26:32.997041 containerd[1592]: time="2025-11-01T00:26:32.996982536Z" level=info msg="StartContainer for \"93588bc8974ab31a009c7be1584c2c9fd2e7393132e0ebfd383f606a78b9abe7\"" Nov 1 00:26:33.029549 containerd[1592]: time="2025-11-01T00:26:33.029503792Z" level=info msg="StartContainer for \"056e19d765b5e42fd1228f8714ae69119800abcd1a47965fc56f9b11b893f0b4\" returns successfully" Nov 1 00:26:33.064617 containerd[1592]: time="2025-11-01T00:26:33.063937822Z" level=info msg="StartContainer for \"b43efc51247115b2fde6805f9ee71884bf9b54e573bfb257446b9a37ec7f7f2b\" returns successfully" Nov 1 00:26:33.090374 containerd[1592]: time="2025-11-01T00:26:33.090321526Z" level=info msg="StartContainer for \"93588bc8974ab31a009c7be1584c2c9fd2e7393132e0ebfd383f606a78b9abe7\" returns successfully" Nov 1 00:26:33.844791 kubelet[2330]: E1101 00:26:33.844708 2330 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:26:33.845178 kubelet[2330]: E1101 00:26:33.845094 2330 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:33.848642 kubelet[2330]: E1101 00:26:33.848570 2330 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:26:33.848796 kubelet[2330]: E1101 00:26:33.848760 2330 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:33.851889 kubelet[2330]: E1101 00:26:33.851103 2330 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:26:33.851889 kubelet[2330]: E1101 00:26:33.851203 2330 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:34.060192 kubelet[2330]: I1101 00:26:34.059855 2330 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 00:26:34.682067 kubelet[2330]: E1101 00:26:34.681986 2330 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Nov 1 00:26:34.698851 kubelet[2330]: I1101 00:26:34.698801 2330 apiserver.go:52] "Watching apiserver" Nov 1 00:26:34.725762 kubelet[2330]: I1101 00:26:34.725714 2330 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 1 00:26:34.853679 kubelet[2330]: E1101 00:26:34.853633 2330 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:26:34.853812 kubelet[2330]: E1101 00:26:34.853791 2330 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:34.854103 kubelet[2330]: E1101 00:26:34.854079 2330 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:26:34.854192 kubelet[2330]: E1101 00:26:34.854175 2330 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:35.094625 kubelet[2330]: I1101 00:26:35.094290 2330 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 1 00:26:35.126590 kubelet[2330]: I1101 00:26:35.126536 2330 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 1 00:26:35.464469 kubelet[2330]: E1101 00:26:35.464419 2330 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 1 00:26:35.464469 kubelet[2330]: I1101 00:26:35.464455 2330 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 1 00:26:35.466033 kubelet[2330]: E1101 00:26:35.465981 2330 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Nov 1 00:26:35.466033 kubelet[2330]: I1101 00:26:35.466003 2330 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 1 00:26:35.467149 kubelet[2330]: E1101 00:26:35.467125 2330 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 1 00:26:36.590271 update_engine[1580]: I20251101 00:26:36.590157 1580 update_attempter.cc:509] Updating boot flags... Nov 1 00:26:36.625126 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2610) Nov 1 00:26:36.677222 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2610) Nov 1 00:26:37.402466 kubelet[2330]: I1101 00:26:37.402422 2330 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 1 00:26:37.664212 kubelet[2330]: E1101 00:26:37.664066 2330 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:37.859124 kubelet[2330]: E1101 00:26:37.859073 2330 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:40.238801 systemd[1]: Reloading requested from client PID 2619 ('systemctl') (unit session-7.scope)... Nov 1 00:26:40.238817 systemd[1]: Reloading... Nov 1 00:26:40.325061 zram_generator::config[2661]: No configuration found. Nov 1 00:26:40.448416 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:26:40.534069 systemd[1]: Reloading finished in 294 ms. 
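The "Failed creating a mirror pod ... no PriorityClass with name system-node-critical was found" errors above are transient: static pods reference the system-node-critical PriorityClass, which the API server bootstraps shortly after it first comes up, so the mirror pods are rejected until that object exists. A minimal client-go sketch of creating such a PriorityClass by hand, for illustration only: the object is normally bootstrapped automatically, 2000001000 is its conventional value, and the kubeconfig path here is an assumption.

package main

import (
	"context"

	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path; adjust for the cluster at hand.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// system-node-critical is normally created by API server bootstrapping;
	// this merely shows what the missing object looks like.
	pc := &schedulingv1.PriorityClass{
		ObjectMeta: metav1.ObjectMeta{Name: "system-node-critical"},
		Value:      2000001000,
	}
	if _, err := cs.SchedulingV1().PriorityClasses().Create(
		context.TODO(), pc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}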
Nov 1 00:26:40.570774 kubelet[2330]: I1101 00:26:40.570722 2330 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 00:26:40.570781 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:26:40.587606 systemd[1]: kubelet.service: Deactivated successfully. Nov 1 00:26:40.588134 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:26:40.609365 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:26:40.799837 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:26:40.806266 (kubelet)[2713]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 1 00:26:40.843400 kubelet[2713]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:26:40.843400 kubelet[2713]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 00:26:40.843400 kubelet[2713]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:26:40.843909 kubelet[2713]: I1101 00:26:40.843467 2713 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 00:26:40.850772 kubelet[2713]: I1101 00:26:40.850732 2713 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 1 00:26:40.850772 kubelet[2713]: I1101 00:26:40.850756 2713 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 00:26:40.851104 kubelet[2713]: I1101 00:26:40.851077 2713 server.go:954] "Client rotation is on, will bootstrap in background" Nov 1 00:26:40.852315 kubelet[2713]: I1101 00:26:40.852296 2713 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 1 00:26:40.854958 kubelet[2713]: I1101 00:26:40.854810 2713 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 00:26:40.858134 kubelet[2713]: E1101 00:26:40.858067 2713 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 00:26:40.858189 kubelet[2713]: I1101 00:26:40.858135 2713 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 1 00:26:40.864440 kubelet[2713]: I1101 00:26:40.864403 2713 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 1 00:26:40.865259 kubelet[2713]: I1101 00:26:40.865217 2713 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 00:26:40.865416 kubelet[2713]: I1101 00:26:40.865253 2713 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Nov 1 00:26:40.865499 kubelet[2713]: I1101 00:26:40.865435 2713 topology_manager.go:138] "Creating topology manager with none policy" Nov 1 00:26:40.865499 kubelet[2713]: I1101 00:26:40.865447 2713 container_manager_linux.go:304] "Creating device plugin manager" Nov 1 00:26:40.865552 kubelet[2713]: I1101 00:26:40.865517 2713 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:26:40.865752 kubelet[2713]: I1101 00:26:40.865729 2713 kubelet.go:446] "Attempting to sync node with API server" Nov 1 00:26:40.865781 kubelet[2713]: I1101 00:26:40.865760 2713 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 00:26:40.865802 kubelet[2713]: I1101 00:26:40.865782 2713 kubelet.go:352] "Adding apiserver pod source" Nov 1 00:26:40.865802 kubelet[2713]: I1101 00:26:40.865796 2713 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 00:26:40.869069 kubelet[2713]: I1101 00:26:40.866546 2713 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 1 00:26:40.869069 kubelet[2713]: I1101 00:26:40.867261 2713 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 1 00:26:40.869069 kubelet[2713]: I1101 00:26:40.867900 2713 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 1 00:26:40.869069 kubelet[2713]: I1101 00:26:40.867930 2713 server.go:1287] "Started kubelet" Nov 1 00:26:40.869069 kubelet[2713]: I1101 00:26:40.868148 2713 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 00:26:40.869069 kubelet[2713]: I1101 00:26:40.868330 2713 ratelimit.go:55] "Setting rate limiting 
for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 00:26:40.869069 kubelet[2713]: I1101 00:26:40.868828 2713 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 00:26:40.871968 kubelet[2713]: I1101 00:26:40.869661 2713 server.go:479] "Adding debug handlers to kubelet server" Nov 1 00:26:40.871968 kubelet[2713]: I1101 00:26:40.871771 2713 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 00:26:40.879704 kubelet[2713]: I1101 00:26:40.879627 2713 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 00:26:40.880321 kubelet[2713]: E1101 00:26:40.880295 2713 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 00:26:40.881221 kubelet[2713]: I1101 00:26:40.881190 2713 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 1 00:26:40.881481 kubelet[2713]: I1101 00:26:40.881463 2713 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 1 00:26:40.881718 kubelet[2713]: I1101 00:26:40.881704 2713 reconciler.go:26] "Reconciler: start to sync state" Nov 1 00:26:40.885616 kubelet[2713]: I1101 00:26:40.885526 2713 factory.go:221] Registration of the containerd container factory successfully Nov 1 00:26:40.885616 kubelet[2713]: I1101 00:26:40.885553 2713 factory.go:221] Registration of the systemd container factory successfully Nov 1 00:26:40.885732 kubelet[2713]: I1101 00:26:40.885679 2713 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 00:26:40.893686 kubelet[2713]: I1101 00:26:40.892407 2713 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 1 00:26:40.894284 kubelet[2713]: I1101 00:26:40.894246 2713 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 1 00:26:40.894326 kubelet[2713]: I1101 00:26:40.894301 2713 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 1 00:26:40.894374 kubelet[2713]: I1101 00:26:40.894332 2713 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 1 00:26:40.894374 kubelet[2713]: I1101 00:26:40.894341 2713 kubelet.go:2382] "Starting kubelet main sync loop" Nov 1 00:26:40.894558 kubelet[2713]: E1101 00:26:40.894505 2713 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 00:26:40.937468 kubelet[2713]: I1101 00:26:40.937436 2713 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 00:26:40.937468 kubelet[2713]: I1101 00:26:40.937472 2713 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 00:26:40.937616 kubelet[2713]: I1101 00:26:40.937495 2713 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:26:40.937706 kubelet[2713]: I1101 00:26:40.937690 2713 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 1 00:26:40.937734 kubelet[2713]: I1101 00:26:40.937706 2713 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 1 00:26:40.937734 kubelet[2713]: I1101 00:26:40.937729 2713 policy_none.go:49] "None policy: Start" Nov 1 00:26:40.937776 kubelet[2713]: I1101 00:26:40.937743 2713 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 1 00:26:40.937776 kubelet[2713]: I1101 00:26:40.937757 2713 state_mem.go:35] "Initializing new in-memory state store" Nov 1 00:26:40.937888 kubelet[2713]: I1101 00:26:40.937875 2713 state_mem.go:75] "Updated machine memory state" Nov 1 00:26:40.939529 kubelet[2713]: I1101 00:26:40.939507 2713 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 1 00:26:40.939747 kubelet[2713]: I1101 00:26:40.939726 2713 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 00:26:40.939773 kubelet[2713]: I1101 00:26:40.939745 2713 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 00:26:40.940309 kubelet[2713]: I1101 00:26:40.940287 2713 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 00:26:40.941867 kubelet[2713]: E1101 00:26:40.941785 2713 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 1 00:26:40.995909 kubelet[2713]: I1101 00:26:40.995846 2713 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 1 00:26:40.996290 kubelet[2713]: I1101 00:26:40.996242 2713 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 1 00:26:40.997887 kubelet[2713]: I1101 00:26:40.997853 2713 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 1 00:26:41.019935 kubelet[2713]: E1101 00:26:41.019664 2713 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Nov 1 00:26:41.046474 kubelet[2713]: I1101 00:26:41.046428 2713 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 00:26:41.084492 kubelet[2713]: I1101 00:26:41.084264 2713 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Nov 1 00:26:41.084492 kubelet[2713]: I1101 00:26:41.084369 2713 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 1 00:26:41.183382 kubelet[2713]: I1101 00:26:41.183336 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:26:41.183515 kubelet[2713]: I1101 00:26:41.183389 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:26:41.183515 kubelet[2713]: I1101 00:26:41.183418 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Nov 1 00:26:41.183515 kubelet[2713]: I1101 00:26:41.183437 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e74c89f2f0a02ffcae2e791cafcbd486-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e74c89f2f0a02ffcae2e791cafcbd486\") " pod="kube-system/kube-apiserver-localhost" Nov 1 00:26:41.183515 kubelet[2713]: I1101 00:26:41.183458 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e74c89f2f0a02ffcae2e791cafcbd486-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e74c89f2f0a02ffcae2e791cafcbd486\") " pod="kube-system/kube-apiserver-localhost" Nov 1 00:26:41.183515 kubelet[2713]: I1101 00:26:41.183479 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " 
pod="kube-system/kube-controller-manager-localhost" Nov 1 00:26:41.183735 kubelet[2713]: I1101 00:26:41.183501 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e74c89f2f0a02ffcae2e791cafcbd486-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e74c89f2f0a02ffcae2e791cafcbd486\") " pod="kube-system/kube-apiserver-localhost" Nov 1 00:26:41.183735 kubelet[2713]: I1101 00:26:41.183527 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:26:41.183735 kubelet[2713]: I1101 00:26:41.183549 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:26:41.320046 kubelet[2713]: E1101 00:26:41.319904 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:41.320046 kubelet[2713]: E1101 00:26:41.319910 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:41.320046 kubelet[2713]: E1101 00:26:41.319951 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:41.866093 kubelet[2713]: I1101 00:26:41.866040 2713 apiserver.go:52] "Watching apiserver" Nov 1 00:26:41.882162 kubelet[2713]: I1101 00:26:41.882122 2713 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 1 00:26:41.904445 kubelet[2713]: I1101 00:26:41.904393 2713 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 1 00:26:41.905097 kubelet[2713]: E1101 00:26:41.904691 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:41.907087 kubelet[2713]: I1101 00:26:41.905543 2713 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 1 00:26:41.920140 kubelet[2713]: E1101 00:26:41.919196 2713 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Nov 1 00:26:41.920140 kubelet[2713]: E1101 00:26:41.919460 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:41.921253 kubelet[2713]: E1101 00:26:41.920828 2713 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 1 00:26:41.921253 kubelet[2713]: E1101 00:26:41.920957 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:41.946339 kubelet[2713]: I1101 00:26:41.946251 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=0.946222098 podStartE2EDuration="946.222098ms" podCreationTimestamp="2025-11-01 00:26:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:26:41.944919302 +0000 UTC m=+1.134639811" watchObservedRunningTime="2025-11-01 00:26:41.946222098 +0000 UTC m=+1.135942617" Nov 1 00:26:41.966003 kubelet[2713]: I1101 00:26:41.965926 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=0.965893906 podStartE2EDuration="965.893906ms" podCreationTimestamp="2025-11-01 00:26:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:26:41.954970689 +0000 UTC m=+1.144691198" watchObservedRunningTime="2025-11-01 00:26:41.965893906 +0000 UTC m=+1.155614415" Nov 1 00:26:42.906003 kubelet[2713]: E1101 00:26:42.905948 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:42.906646 kubelet[2713]: E1101 00:26:42.906182 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:43.907723 kubelet[2713]: E1101 00:26:43.907687 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:44.732130 kubelet[2713]: I1101 00:26:44.732079 2713 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 1 00:26:44.732483 containerd[1592]: time="2025-11-01T00:26:44.732433864Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
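The pair of messages above ("Updating runtime config through cri with podcidr" followed by containerd's "No cni config template is specified") marks the point where the kubelet, having registered the node, pushes the pod CIDR down to the runtime over CRI; containerd accepts it and then waits for a CNI config to be dropped in. A sketch of the underlying UpdateRuntimeConfig call, assuming the k8s.io/cri-api generated client and the default containerd socket path; the CIDR is taken from the log above.

package main

import (
	"context"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed socket path; kubelet here talks to containerd's CRI endpoint.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	// Push the node's pod CIDR to the runtime, as the log entry records.
	_, err = rt.UpdateRuntimeConfig(context.TODO(), &runtimeapi.UpdateRuntimeConfigRequest{
		RuntimeConfig: &runtimeapi.RuntimeConfig{
			NetworkConfig: &runtimeapi.NetworkConfig{PodCidr: "192.168.0.0/24"},
		},
	})
	if err != nil {
		panic(err)
	}
}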
Nov 1 00:26:44.732973 kubelet[2713]: I1101 00:26:44.732639 2713 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 1 00:26:45.581866 kubelet[2713]: I1101 00:26:45.580968 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=8.580892192 podStartE2EDuration="8.580892192s" podCreationTimestamp="2025-11-01 00:26:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:26:41.9658801 +0000 UTC m=+1.155600619" watchObservedRunningTime="2025-11-01 00:26:45.580892192 +0000 UTC m=+4.770612701" Nov 1 00:26:45.610803 kubelet[2713]: I1101 00:26:45.610733 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62mrn\" (UniqueName: \"kubernetes.io/projected/b93647fb-d1c3-4289-9981-b5d6e4637199-kube-api-access-62mrn\") pod \"kube-proxy-kvfxb\" (UID: \"b93647fb-d1c3-4289-9981-b5d6e4637199\") " pod="kube-system/kube-proxy-kvfxb" Nov 1 00:26:45.610803 kubelet[2713]: I1101 00:26:45.610790 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b93647fb-d1c3-4289-9981-b5d6e4637199-xtables-lock\") pod \"kube-proxy-kvfxb\" (UID: \"b93647fb-d1c3-4289-9981-b5d6e4637199\") " pod="kube-system/kube-proxy-kvfxb" Nov 1 00:26:45.610803 kubelet[2713]: I1101 00:26:45.610815 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b93647fb-d1c3-4289-9981-b5d6e4637199-kube-proxy\") pod \"kube-proxy-kvfxb\" (UID: \"b93647fb-d1c3-4289-9981-b5d6e4637199\") " pod="kube-system/kube-proxy-kvfxb" Nov 1 00:26:45.611010 kubelet[2713]: I1101 00:26:45.610835 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b93647fb-d1c3-4289-9981-b5d6e4637199-lib-modules\") pod \"kube-proxy-kvfxb\" (UID: \"b93647fb-d1c3-4289-9981-b5d6e4637199\") " pod="kube-system/kube-proxy-kvfxb" Nov 1 00:26:45.712013 kubelet[2713]: I1101 00:26:45.711955 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1e281e4d-4a97-4008-af2e-0fa54ce5b228-var-lib-calico\") pod \"tigera-operator-7dcd859c48-tz9nq\" (UID: \"1e281e4d-4a97-4008-af2e-0fa54ce5b228\") " pod="tigera-operator/tigera-operator-7dcd859c48-tz9nq" Nov 1 00:26:45.712013 kubelet[2713]: I1101 00:26:45.712055 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkzqn\" (UniqueName: \"kubernetes.io/projected/1e281e4d-4a97-4008-af2e-0fa54ce5b228-kube-api-access-fkzqn\") pod \"tigera-operator-7dcd859c48-tz9nq\" (UID: \"1e281e4d-4a97-4008-af2e-0fa54ce5b228\") " pod="tigera-operator/tigera-operator-7dcd859c48-tz9nq" Nov 1 00:26:45.892718 kubelet[2713]: E1101 00:26:45.892681 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:45.893395 containerd[1592]: time="2025-11-01T00:26:45.893339377Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-kvfxb,Uid:b93647fb-d1c3-4289-9981-b5d6e4637199,Namespace:kube-system,Attempt:0,}" Nov 1 00:26:45.923841 containerd[1592]: time="2025-11-01T00:26:45.923723199Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:26:45.923841 containerd[1592]: time="2025-11-01T00:26:45.923797099Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:26:45.923841 containerd[1592]: time="2025-11-01T00:26:45.923810424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:26:45.924099 containerd[1592]: time="2025-11-01T00:26:45.923907717Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:26:45.968329 containerd[1592]: time="2025-11-01T00:26:45.968282867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kvfxb,Uid:b93647fb-d1c3-4289-9981-b5d6e4637199,Namespace:kube-system,Attempt:0,} returns sandbox id \"3939ead3b620ced8d3fb5a3364ded0a72831c2640362e1f7d9194ece5ca07dcc\"" Nov 1 00:26:45.969254 kubelet[2713]: E1101 00:26:45.969226 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:45.971370 containerd[1592]: time="2025-11-01T00:26:45.971333619Z" level=info msg="CreateContainer within sandbox \"3939ead3b620ced8d3fb5a3364ded0a72831c2640362e1f7d9194ece5ca07dcc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 1 00:26:45.984402 containerd[1592]: time="2025-11-01T00:26:45.984358482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-tz9nq,Uid:1e281e4d-4a97-4008-af2e-0fa54ce5b228,Namespace:tigera-operator,Attempt:0,}" Nov 1 00:26:46.037812 containerd[1592]: time="2025-11-01T00:26:46.037470272Z" level=info msg="CreateContainer within sandbox \"3939ead3b620ced8d3fb5a3364ded0a72831c2640362e1f7d9194ece5ca07dcc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2d3a984fec1d1784524fa56c983d7749cc636c6dd1605aa5a7b1f604873f2541\"" Nov 1 00:26:46.038444 containerd[1592]: time="2025-11-01T00:26:46.038421318Z" level=info msg="StartContainer for \"2d3a984fec1d1784524fa56c983d7749cc636c6dd1605aa5a7b1f604873f2541\"" Nov 1 00:26:46.059960 containerd[1592]: time="2025-11-01T00:26:46.059459458Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:26:46.059960 containerd[1592]: time="2025-11-01T00:26:46.059542405Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:26:46.059960 containerd[1592]: time="2025-11-01T00:26:46.059559126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:26:46.060158 containerd[1592]: time="2025-11-01T00:26:46.059910971Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:26:46.107429 kubelet[2713]: E1101 00:26:46.107313 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:46.121722 containerd[1592]: time="2025-11-01T00:26:46.121593778Z" level=info msg="StartContainer for \"2d3a984fec1d1784524fa56c983d7749cc636c6dd1605aa5a7b1f604873f2541\" returns successfully" Nov 1 00:26:46.134600 containerd[1592]: time="2025-11-01T00:26:46.134533459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-tz9nq,Uid:1e281e4d-4a97-4008-af2e-0fa54ce5b228,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"bde71ec6e1ba58be29276a2098a5fb9497a97afb8e4cf27ea6d4f60f43f8ff5d\"" Nov 1 00:26:46.138467 containerd[1592]: time="2025-11-01T00:26:46.138348252Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 1 00:26:46.272098 kubelet[2713]: E1101 00:26:46.271835 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:46.916401 kubelet[2713]: E1101 00:26:46.916088 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:46.916401 kubelet[2713]: E1101 00:26:46.916192 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:46.916401 kubelet[2713]: E1101 00:26:46.916235 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:46.984187 kubelet[2713]: I1101 00:26:46.983843 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kvfxb" podStartSLOduration=1.983822078 podStartE2EDuration="1.983822078s" podCreationTimestamp="2025-11-01 00:26:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:26:46.983522432 +0000 UTC m=+6.173242951" watchObservedRunningTime="2025-11-01 00:26:46.983822078 +0000 UTC m=+6.173542587" Nov 1 00:26:47.917625 kubelet[2713]: E1101 00:26:47.917585 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:48.314792 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2546571299.mount: Deactivated successfully. 
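The pod_startup_latency_tracker records scattered through this stream are plain timestamp arithmetic: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling) from that. Pods with a zeroed pull window, like kube-proxy-kvfxb above, therefore report the two as equal, while tigera-operator below shows 5.936s end-to-end but only 2.097s SLO. A minimal Go check of the kube-proxy figures, assuming only that the logged timestamps are Go time.Time String() output:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching Go's default time.Time String() format used in the log.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	// Timestamps copied from the kube-proxy-kvfxb record above.
	created, err := time.Parse(layout, "2025-11-01 00:26:45 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2025-11-01 00:26:46.983522432 +0000 UTC")
	if err != nil {
		panic(err)
	}

	// With no pull window (firstStartedPulling/lastFinishedPulling are the
	// zero time), SLO duration and end-to-end duration coincide.
	e2e := running.Sub(created)
	fmt.Printf("podStartE2EDuration ≈ %s\n", e2e) // ≈ 1.98s, matching the record
}
```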
Nov 1 00:26:49.968576 containerd[1592]: time="2025-11-01T00:26:49.968525850Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:26:49.969550 containerd[1592]: time="2025-11-01T00:26:49.969520075Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Nov 1 00:26:49.970986 containerd[1592]: time="2025-11-01T00:26:49.970938339Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:26:49.973375 containerd[1592]: time="2025-11-01T00:26:49.973331472Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:26:49.973995 containerd[1592]: time="2025-11-01T00:26:49.973941292Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 3.835548638s" Nov 1 00:26:49.973995 containerd[1592]: time="2025-11-01T00:26:49.973974885Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 1 00:26:49.976219 containerd[1592]: time="2025-11-01T00:26:49.976193700Z" level=info msg="CreateContainer within sandbox \"bde71ec6e1ba58be29276a2098a5fb9497a97afb8e4cf27ea6d4f60f43f8ff5d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 1 00:26:50.186499 containerd[1592]: time="2025-11-01T00:26:50.186430615Z" level=info msg="CreateContainer within sandbox \"bde71ec6e1ba58be29276a2098a5fb9497a97afb8e4cf27ea6d4f60f43f8ff5d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a1602f5de1f079a395ac00379a99cc0955e3788436516c063e9cb96801e7a89b\"" Nov 1 00:26:50.186861 containerd[1592]: time="2025-11-01T00:26:50.186839235Z" level=info msg="StartContainer for \"a1602f5de1f079a395ac00379a99cc0955e3788436516c063e9cb96801e7a89b\"" Nov 1 00:26:50.243631 containerd[1592]: time="2025-11-01T00:26:50.243476328Z" level=info msg="StartContainer for \"a1602f5de1f079a395ac00379a99cc0955e3788436516c063e9cb96801e7a89b\" returns successfully" Nov 1 00:26:50.873568 kubelet[2713]: E1101 00:26:50.873534 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:50.923990 kubelet[2713]: E1101 00:26:50.923924 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:50.936649 kubelet[2713]: I1101 00:26:50.936582 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-tz9nq" podStartSLOduration=2.097616154 podStartE2EDuration="5.936558637s" podCreationTimestamp="2025-11-01 00:26:45 +0000 UTC" firstStartedPulling="2025-11-01 00:26:46.136007251 +0000 UTC m=+5.325727760" lastFinishedPulling="2025-11-01 00:26:49.974949734 +0000 UTC m=+9.164670243" observedRunningTime="2025-11-01 
00:26:50.936335987 +0000 UTC m=+10.126056496" watchObservedRunningTime="2025-11-01 00:26:50.936558637 +0000 UTC m=+10.126279146" Nov 1 00:26:56.225567 sudo[1782]: pam_unix(sudo:session): session closed for user root Nov 1 00:26:56.230153 sshd[1775]: pam_unix(sshd:session): session closed for user core Nov 1 00:26:56.234003 systemd-logind[1577]: Session 7 logged out. Waiting for processes to exit. Nov 1 00:26:56.241535 systemd[1]: sshd@6-10.0.0.119:22-10.0.0.1:38290.service: Deactivated successfully. Nov 1 00:26:56.255612 systemd[1]: session-7.scope: Deactivated successfully. Nov 1 00:26:56.261283 systemd-logind[1577]: Removed session 7. Nov 1 00:27:01.613812 kubelet[2713]: I1101 00:27:01.613748 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whdnr\" (UniqueName: \"kubernetes.io/projected/776281a2-2131-4678-827a-d808e61c265a-kube-api-access-whdnr\") pod \"calico-typha-6fd868b66f-pq5qj\" (UID: \"776281a2-2131-4678-827a-d808e61c265a\") " pod="calico-system/calico-typha-6fd868b66f-pq5qj" Nov 1 00:27:01.613812 kubelet[2713]: I1101 00:27:01.613794 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/776281a2-2131-4678-827a-d808e61c265a-typha-certs\") pod \"calico-typha-6fd868b66f-pq5qj\" (UID: \"776281a2-2131-4678-827a-d808e61c265a\") " pod="calico-system/calico-typha-6fd868b66f-pq5qj" Nov 1 00:27:01.613812 kubelet[2713]: I1101 00:27:01.613819 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/776281a2-2131-4678-827a-d808e61c265a-tigera-ca-bundle\") pod \"calico-typha-6fd868b66f-pq5qj\" (UID: \"776281a2-2131-4678-827a-d808e61c265a\") " pod="calico-system/calico-typha-6fd868b66f-pq5qj" Nov 1 00:27:01.815791 kubelet[2713]: I1101 00:27:01.815722 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/cee13d65-d9ef-475c-87f7-1d136935c50f-var-lib-calico\") pod \"calico-node-rxll9\" (UID: \"cee13d65-d9ef-475c-87f7-1d136935c50f\") " pod="calico-system/calico-node-rxll9" Nov 1 00:27:01.816400 kubelet[2713]: I1101 00:27:01.816031 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cee13d65-d9ef-475c-87f7-1d136935c50f-xtables-lock\") pod \"calico-node-rxll9\" (UID: \"cee13d65-d9ef-475c-87f7-1d136935c50f\") " pod="calico-system/calico-node-rxll9" Nov 1 00:27:01.816400 kubelet[2713]: I1101 00:27:01.816059 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/cee13d65-d9ef-475c-87f7-1d136935c50f-cni-log-dir\") pod \"calico-node-rxll9\" (UID: \"cee13d65-d9ef-475c-87f7-1d136935c50f\") " pod="calico-system/calico-node-rxll9" Nov 1 00:27:01.816400 kubelet[2713]: I1101 00:27:01.816077 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/cee13d65-d9ef-475c-87f7-1d136935c50f-flexvol-driver-host\") pod \"calico-node-rxll9\" (UID: \"cee13d65-d9ef-475c-87f7-1d136935c50f\") " pod="calico-system/calico-node-rxll9" Nov 1 00:27:01.816400 kubelet[2713]: I1101 00:27:01.816101 2713 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cee13d65-d9ef-475c-87f7-1d136935c50f-tigera-ca-bundle\") pod \"calico-node-rxll9\" (UID: \"cee13d65-d9ef-475c-87f7-1d136935c50f\") " pod="calico-system/calico-node-rxll9" Nov 1 00:27:01.816400 kubelet[2713]: I1101 00:27:01.816153 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mx2j6\" (UniqueName: \"kubernetes.io/projected/cee13d65-d9ef-475c-87f7-1d136935c50f-kube-api-access-mx2j6\") pod \"calico-node-rxll9\" (UID: \"cee13d65-d9ef-475c-87f7-1d136935c50f\") " pod="calico-system/calico-node-rxll9" Nov 1 00:27:01.816582 kubelet[2713]: I1101 00:27:01.816175 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/cee13d65-d9ef-475c-87f7-1d136935c50f-cni-bin-dir\") pod \"calico-node-rxll9\" (UID: \"cee13d65-d9ef-475c-87f7-1d136935c50f\") " pod="calico-system/calico-node-rxll9" Nov 1 00:27:01.816582 kubelet[2713]: I1101 00:27:01.816194 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/cee13d65-d9ef-475c-87f7-1d136935c50f-node-certs\") pod \"calico-node-rxll9\" (UID: \"cee13d65-d9ef-475c-87f7-1d136935c50f\") " pod="calico-system/calico-node-rxll9" Nov 1 00:27:01.816582 kubelet[2713]: I1101 00:27:01.816226 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/cee13d65-d9ef-475c-87f7-1d136935c50f-var-run-calico\") pod \"calico-node-rxll9\" (UID: \"cee13d65-d9ef-475c-87f7-1d136935c50f\") " pod="calico-system/calico-node-rxll9" Nov 1 00:27:01.816582 kubelet[2713]: I1101 00:27:01.816246 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cee13d65-d9ef-475c-87f7-1d136935c50f-lib-modules\") pod \"calico-node-rxll9\" (UID: \"cee13d65-d9ef-475c-87f7-1d136935c50f\") " pod="calico-system/calico-node-rxll9" Nov 1 00:27:01.816582 kubelet[2713]: I1101 00:27:01.816296 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/cee13d65-d9ef-475c-87f7-1d136935c50f-cni-net-dir\") pod \"calico-node-rxll9\" (UID: \"cee13d65-d9ef-475c-87f7-1d136935c50f\") " pod="calico-system/calico-node-rxll9" Nov 1 00:27:01.816734 kubelet[2713]: I1101 00:27:01.816317 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/cee13d65-d9ef-475c-87f7-1d136935c50f-policysync\") pod \"calico-node-rxll9\" (UID: \"cee13d65-d9ef-475c-87f7-1d136935c50f\") " pod="calico-system/calico-node-rxll9" Nov 1 00:27:01.916932 kubelet[2713]: E1101 00:27:01.916804 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:27:01.923732 kubelet[2713]: E1101 00:27:01.923698 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:27:01.924223 kubelet[2713]: W1101 00:27:01.924093 2713 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:27:01.924269 containerd[1592]: time="2025-11-01T00:27:01.923970430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6fd868b66f-pq5qj,Uid:776281a2-2131-4678-827a-d808e61c265a,Namespace:calico-system,Attempt:0,}" Nov 1 00:27:01.927055 kubelet[2713]: E1101 00:27:01.924508 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:27:01.955663 containerd[1592]: time="2025-11-01T00:27:01.955551460Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:27:01.955663 containerd[1592]: time="2025-11-01T00:27:01.955618847Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:27:01.955663 containerd[1592]: time="2025-11-01T00:27:01.955634927Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:27:01.956702 containerd[1592]: time="2025-11-01T00:27:01.956613837Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:27:02.006078 kubelet[2713]: E1101 00:27:02.003832 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l9nqw" podUID="675112ea-20ac-4b20-b92c-b74dc58b95cd"
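The driver-call.go:262 / driver-call.go:149 / plugins.go:695 triplet is a single failure reported from three layers: the FlexVolume probe cannot find /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, the driver call therefore produces empty output, and decoding zero bytes of JSON fails with "unexpected end of JSON input", which aborts plugin probing. Both error strings can be reproduced in a few lines of Go (the binary name and the struct are illustrative only, not kubelet's real types):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus stands in for the FlexVolume status reply; the real schema
// does not matter here because there is no output to decode at all.
type driverStatus struct {
	Status string `json:"status"`
}

func main() {
	// Layer 1: the driver binary is missing from $PATH / the plugin dir.
	if _, err := exec.LookPath("uds-driver-that-does-not-exist"); err != nil {
		fmt.Println("driver call failed:", err) // "... executable file not found in $PATH"
	}

	// Layer 2: with no process run, the captured output is empty, and
	// unmarshalling zero bytes produces the exact error in the log.
	var st driverStatus
	if err := json.Unmarshal([]byte(""), &st); err != nil {
		fmt.Println("failed to unmarshal output:", err) // "unexpected end of JSON input"
	}
}
```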
Nov 1 00:27:02.030361 kubelet[2713]: I1101 00:27:02.026158 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgnb4\" (UniqueName: \"kubernetes.io/projected/675112ea-20ac-4b20-b92c-b74dc58b95cd-kube-api-access-pgnb4\") pod \"csi-node-driver-l9nqw\" (UID: \"675112ea-20ac-4b20-b92c-b74dc58b95cd\") " pod="calico-system/csi-node-driver-l9nqw" Nov 1 00:27:02.030361 kubelet[2713]: I1101 00:27:02.026586 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/675112ea-20ac-4b20-b92c-b74dc58b95cd-kubelet-dir\") pod \"csi-node-driver-l9nqw\" (UID: \"675112ea-20ac-4b20-b92c-b74dc58b95cd\") " pod="calico-system/csi-node-driver-l9nqw" Nov 1 00:27:02.030616 kubelet[2713]: I1101 00:27:02.026902 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/675112ea-20ac-4b20-b92c-b74dc58b95cd-registration-dir\") pod \"csi-node-driver-l9nqw\" (UID: \"675112ea-20ac-4b20-b92c-b74dc58b95cd\") " pod="calico-system/csi-node-driver-l9nqw"
Nov 1 00:27:02.030616 kubelet[2713]: I1101 00:27:02.027289 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/675112ea-20ac-4b20-b92c-b74dc58b95cd-varrun\") pod \"csi-node-driver-l9nqw\" (UID: \"675112ea-20ac-4b20-b92c-b74dc58b95cd\") " pod="calico-system/csi-node-driver-l9nqw"
Nov 1 00:27:02.033940 kubelet[2713]: I1101 00:27:02.029738 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/675112ea-20ac-4b20-b92c-b74dc58b95cd-socket-dir\") pod \"csi-node-driver-l9nqw\" (UID: \"675112ea-20ac-4b20-b92c-b74dc58b95cd\") " pod="calico-system/csi-node-driver-l9nqw"
Nov 1 00:27:02.044887 containerd[1592]: time="2025-11-01T00:27:02.044832911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6fd868b66f-pq5qj,Uid:776281a2-2131-4678-827a-d808e61c265a,Namespace:calico-system,Attempt:0,} returns sandbox id \"01dd88bd25c789119686049167b626c1002cc5e48cbd701ba063ebaeeae6ebac\"" Nov 1 00:27:02.046267 kubelet[2713]: E1101 00:27:02.045954 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:27:02.047220 containerd[1592]: time="2025-11-01T00:27:02.047169332Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 1 00:27:02.113370 kubelet[2713]: E1101 00:27:02.113307 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:27:02.114096 containerd[1592]: time="2025-11-01T00:27:02.114049312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rxll9,Uid:cee13d65-d9ef-475c-87f7-1d136935c50f,Namespace:calico-system,Attempt:0,}"
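Pulled images are identified three ways in these records: repo tag (quay.io/tigera/operator:v1.38.7), repo digest (the @sha256:… reference), and the content-addressed image id; the earlier "Pulled image" record also logs enough to estimate transfer rate. A short Go sketch using the byte count and duration copied from that record (the strings.Cut split is just one way to take a digest reference apart):

```go
package main

import (
	"fmt"
	"strings"
	"time"
)

func main() {
	// Repo digest reference as logged for the tigera-operator image.
	ref := "quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e"
	repo, digest, _ := strings.Cut(ref, "@")
	fmt.Println("repo:  ", repo)
	fmt.Println("digest:", digest)

	// Size and wall time from the "Pulled image" record: 25057686 bytes
	// in 3.835548638s, i.e. roughly 6.5 MB/s.
	elapsed, err := time.ParseDuration("3.835548638s")
	if err != nil {
		panic(err)
	}
	const bytesPulled = 25057686
	rate := float64(bytesPulled) / elapsed.Seconds()
	fmt.Printf("pull rate ≈ %.1f MB/s\n", rate/1e6)
}
```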
Error: unexpected end of JSON input" Nov 1 00:27:02.141477 kubelet[2713]: E1101 00:27:02.141313 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:27:02.141477 kubelet[2713]: W1101 00:27:02.141327 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:27:02.141477 kubelet[2713]: E1101 00:27:02.141343 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:27:02.141747 kubelet[2713]: E1101 00:27:02.141733 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:27:02.141837 kubelet[2713]: W1101 00:27:02.141822 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:27:02.141963 kubelet[2713]: E1101 00:27:02.141887 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:27:02.146337 containerd[1592]: time="2025-11-01T00:27:02.146125915Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:27:02.146337 containerd[1592]: time="2025-11-01T00:27:02.146203791Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:27:02.146337 containerd[1592]: time="2025-11-01T00:27:02.146235039Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:27:02.146539 containerd[1592]: time="2025-11-01T00:27:02.146354354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:27:02.151220 kubelet[2713]: E1101 00:27:02.151177 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:27:02.151326 kubelet[2713]: W1101 00:27:02.151201 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:27:02.151326 kubelet[2713]: E1101 00:27:02.151252 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
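The repeated driver-call.go / plugins.go failures above are the kubelet probing its FlexVolume plugin directory: it finds a vendor directory nodeagent~uds, tries to run the driver binary uds with the argument init, gets "executable file not found in $PATH" and therefore empty output, and that empty output is what fails JSON unmarshalling ("unexpected end of JSON input"). A minimal sketch of the kind of reply the probe expects follows, assuming the documented FlexVolume call convention (driver invoked as `<driver> init`, JSON status object printed to stdout); this is illustrative only, not a reconstruction of the real nodeagent driver.

```go
// flexvol_stub.go: a minimal sketch of a FlexVolume driver that answers the
// kubelet's init probe. Assumption: per the FlexVolume protocol, the driver
// must print a JSON status object; an empty reply is exactly what produces
// the "unexpected end of JSON input" errors in the log above.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the JSON shape the kubelet unmarshals in driver-call.go.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) < 2 {
		os.Exit(1)
	}
	var st driverStatus
	switch os.Args[1] {
	case "init":
		// Report success and declare that this driver needs no attach/detach.
		st = driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}}
	default:
		// All other FlexVolume calls are left unimplemented in this stub.
		st = driverStatus{Status: "Not supported", Message: "stub driver"}
	}
	out, _ := json.Marshal(st)
	fmt.Println(string(out))
}
```

Built and installed as /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the executable bit set, a driver of this shape would at least let the probe parse a valid reply instead of erroring on empty output.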
Nov 1 00:27:02.196400 containerd[1592]: time="2025-11-01T00:27:02.196235623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rxll9,Uid:cee13d65-d9ef-475c-87f7-1d136935c50f,Namespace:calico-system,Attempt:0,} returns sandbox id \"63d3d1a173c1f1f7819527291d2dc4797f0d744f303a72102bc0f522f9fc4d6d\""
Nov 1 00:27:02.197170 kubelet[2713]: E1101 00:27:02.197126 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:27:03.894863 kubelet[2713]: E1101 00:27:03.894810 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l9nqw" podUID="675112ea-20ac-4b20-b92c-b74dc58b95cd"
Nov 1 00:27:04.755626 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1999308929.mount: Deactivated successfully.
Nov 1 00:27:05.316104 containerd[1592]: time="2025-11-01T00:27:05.316001703Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:27:05.318043 containerd[1592]: time="2025-11-01T00:27:05.317656192Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628"
Nov 1 00:27:05.319071 containerd[1592]: time="2025-11-01T00:27:05.318993494Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:27:05.321893 containerd[1592]: time="2025-11-01T00:27:05.321838930Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:27:05.322428 containerd[1592]: time="2025-11-01T00:27:05.322386148Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 3.275182711s"
Nov 1 00:27:05.322428 containerd[1592]: time="2025-11-01T00:27:05.322415504Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\""
Nov 1 00:27:05.331358 containerd[1592]: time="2025-11-01T00:27:05.330253910Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Nov 1 00:27:05.353069 containerd[1592]: time="2025-11-01T00:27:05.353007571Z" level=info msg="CreateContainer within sandbox \"01dd88bd25c789119686049167b626c1002cc5e48cbd701ba063ebaeeae6ebac\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Nov 1 00:27:05.368545 containerd[1592]: time="2025-11-01T00:27:05.368486092Z" level=info msg="CreateContainer within sandbox \"01dd88bd25c789119686049167b626c1002cc5e48cbd701ba063ebaeeae6ebac\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"3e46ea550ff080d02a54099913147461b3b28e887eb0639b48ec47d2491a97c2\""
Nov 1 00:27:05.374051 containerd[1592]: time="2025-11-01T00:27:05.372038446Z" level=info msg="StartContainer for \"3e46ea550ff080d02a54099913147461b3b28e887eb0639b48ec47d2491a97c2\""
Nov 1 00:27:05.453092 containerd[1592]: time="2025-11-01T00:27:05.453042308Z" level=info msg="StartContainer for \"3e46ea550ff080d02a54099913147461b3b28e887eb0639b48ec47d2491a97c2\" returns successfully"
Nov 1 00:27:05.909311 kubelet[2713]: E1101 00:27:05.909242 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l9nqw" podUID="675112ea-20ac-4b20-b92c-b74dc58b95cd"
Nov 1 00:27:05.969225 kubelet[2713]: E1101 00:27:05.969173 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:27:05.994500 kubelet[2713]: I1101 00:27:05.994428 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6fd868b66f-pq5qj" podStartSLOduration=1.71002641 podStartE2EDuration="4.993278844s" podCreationTimestamp="2025-11-01 00:27:01 +0000 UTC" firstStartedPulling="2025-11-01 00:27:02.046778789 +0000 UTC m=+21.236499298" lastFinishedPulling="2025-11-01 00:27:05.330031213 +0000 UTC m=+24.519751732" observedRunningTime="2025-11-01 00:27:05.990566398 +0000 UTC m=+25.180286917" watchObservedRunningTime="2025-11-01 00:27:05.993278844 +0000 UTC m=+25.182999353"
Nov 1 00:27:06.050607 kubelet[2713]: E1101 00:27:06.050538 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:27:06.050607 kubelet[2713]: W1101 00:27:06.050571 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:27:06.053908 kubelet[2713]: E1101 00:27:06.053793 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[The FlexVolume probe failure above repeats with new timestamps through 00:27:06.071; the duplicate entries are omitted.]
Nov 1 00:27:06.971353 kubelet[2713]: I1101 00:27:06.971304 2713 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 1 00:27:06.973076 kubelet[2713]: E1101 00:27:06.973053 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:27:07.064295 kubelet[2713]: E1101 00:27:07.064259 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:27:07.064295 kubelet[2713]: W1101 00:27:07.064284 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:27:07.064482 kubelet[2713]: E1101 00:27:07.064310 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[The FlexVolume probe failure above repeats with new timestamps through 00:27:07.080; the duplicate entries are omitted, and the containerd entries interleaved with that run are kept below.]
Nov 1 00:27:07.074627 containerd[1592]: time="2025-11-01T00:27:07.074444335Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:27:07.076064 containerd[1592]: time="2025-11-01T00:27:07.076007741Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754"
Nov 1 00:27:07.081546 containerd[1592]: time="2025-11-01T00:27:07.081494758Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:27:07.084401 containerd[1592]: time="2025-11-01T00:27:07.084358878Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:27:07.085158 containerd[1592]: time="2025-11-01T00:27:07.085111241Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.754815893s"
Nov 1 00:27:07.085158 containerd[1592]: time="2025-11-01T00:27:07.085150335Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\""
Nov 1 00:27:07.087488 containerd[1592]: time="2025-11-01T00:27:07.087458480Z" level=info msg="CreateContainer within sandbox \"63d3d1a173c1f1f7819527291d2dc4797f0d744f303a72102bc0f522f9fc4d6d\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Nov 1 00:27:07.117879 containerd[1592]: time="2025-11-01T00:27:07.117820310Z" level=info msg="CreateContainer within sandbox \"63d3d1a173c1f1f7819527291d2dc4797f0d744f303a72102bc0f522f9fc4d6d\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d7045e7bc307d567af4f63e3ad67073e3b8f39d25874dea124e020178480081c\""
Nov 1 00:27:07.118513 containerd[1592]: time="2025-11-01T00:27:07.118486141Z" level=info msg="StartContainer for \"d7045e7bc307d567af4f63e3ad67073e3b8f39d25874dea124e020178480081c\""
Nov 1 00:27:07.187805 containerd[1592]: time="2025-11-01T00:27:07.187358548Z" level=info msg="StartContainer for \"d7045e7bc307d567af4f63e3ad67073e3b8f39d25874dea124e020178480081c\" returns successfully"
Nov 1 00:27:07.224467 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d7045e7bc307d567af4f63e3ad67073e3b8f39d25874dea124e020178480081c-rootfs.mount: Deactivated successfully.
Nov 1 00:27:07.263379 containerd[1592]: time="2025-11-01T00:27:07.263293741Z" level=info msg="shim disconnected" id=d7045e7bc307d567af4f63e3ad67073e3b8f39d25874dea124e020178480081c namespace=k8s.io Nov 1 00:27:07.263379 containerd[1592]: time="2025-11-01T00:27:07.263363663Z" level=warning msg="cleaning up after shim disconnected" id=d7045e7bc307d567af4f63e3ad67073e3b8f39d25874dea124e020178480081c namespace=k8s.io Nov 1 00:27:07.263379 containerd[1592]: time="2025-11-01T00:27:07.263375786Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 1 00:27:07.895361 kubelet[2713]: E1101 00:27:07.895273 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l9nqw" podUID="675112ea-20ac-4b20-b92c-b74dc58b95cd" Nov 1 00:27:07.974353 kubelet[2713]: E1101 00:27:07.974320 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:27:07.976423 containerd[1592]: time="2025-11-01T00:27:07.976298348Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 1 00:27:09.894819 kubelet[2713]: E1101 00:27:09.894772 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l9nqw" podUID="675112ea-20ac-4b20-b92c-b74dc58b95cd" Nov 1 00:27:11.262388 containerd[1592]: time="2025-11-01T00:27:11.262250699Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:27:11.264304 containerd[1592]: time="2025-11-01T00:27:11.264216840Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 1 00:27:11.266739 containerd[1592]: time="2025-11-01T00:27:11.266700474Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:27:11.269972 containerd[1592]: time="2025-11-01T00:27:11.269903398Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:27:11.270643 containerd[1592]: time="2025-11-01T00:27:11.270608782Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.294263716s" Nov 1 00:27:11.270719 containerd[1592]: time="2025-11-01T00:27:11.270647765Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 1 00:27:11.273742 containerd[1592]: time="2025-11-01T00:27:11.273692342Z" level=info msg="CreateContainer within sandbox \"63d3d1a173c1f1f7819527291d2dc4797f0d744f303a72102bc0f522f9fc4d6d\" for container 
&ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 1 00:27:11.314744 containerd[1592]: time="2025-11-01T00:27:11.314677705Z" level=info msg="CreateContainer within sandbox \"63d3d1a173c1f1f7819527291d2dc4797f0d744f303a72102bc0f522f9fc4d6d\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"452c7ba9caded71bfc7378c1096dc007f19db13477cf69814fe3525945c1eb9b\"" Nov 1 00:27:11.315472 containerd[1592]: time="2025-11-01T00:27:11.315429427Z" level=info msg="StartContainer for \"452c7ba9caded71bfc7378c1096dc007f19db13477cf69814fe3525945c1eb9b\"" Nov 1 00:27:11.400208 containerd[1592]: time="2025-11-01T00:27:11.400147813Z" level=info msg="StartContainer for \"452c7ba9caded71bfc7378c1096dc007f19db13477cf69814fe3525945c1eb9b\" returns successfully" Nov 1 00:27:11.895762 kubelet[2713]: E1101 00:27:11.895692 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l9nqw" podUID="675112ea-20ac-4b20-b92c-b74dc58b95cd" Nov 1 00:27:11.983980 kubelet[2713]: E1101 00:27:11.983925 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:27:12.985646 kubelet[2713]: E1101 00:27:12.985593 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:27:13.726962 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-452c7ba9caded71bfc7378c1096dc007f19db13477cf69814fe3525945c1eb9b-rootfs.mount: Deactivated successfully. 
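In the standard Calico manifests, flexvol-driver and install-cni are init containers of the calico-node pod: each runs to completion, so the "shim disconnected" / "cleaning up dead shim" messages and the rootfs unmounts around them are normal teardown after a clean exit, not crashes. For a rough sense of the pull figures quoted in the log (sizes in bytes, durations in seconds; which reported size corresponds to bytes actually transferred can vary), a quick back-of-the-envelope calculation:

    # Figures copied from the containerd entries above; results are estimates.
    pulls = {
        "pod2daemon-flexvol:v3.30.4": (5_941_314, 1.754815893),
        "cni:v3.30.4": (70_446_859, 3.294263716),
    }
    for image, (size, secs) in pulls.items():
        print(f"{image}: {size / secs / 2**20:.1f} MiB/s")  # ~3.2 and ~20.4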
Nov 1 00:27:13.730477 containerd[1592]: time="2025-11-01T00:27:13.730407129Z" level=info msg="shim disconnected" id=452c7ba9caded71bfc7378c1096dc007f19db13477cf69814fe3525945c1eb9b namespace=k8s.io Nov 1 00:27:13.730477 containerd[1592]: time="2025-11-01T00:27:13.730466080Z" level=warning msg="cleaning up after shim disconnected" id=452c7ba9caded71bfc7378c1096dc007f19db13477cf69814fe3525945c1eb9b namespace=k8s.io Nov 1 00:27:13.730477 containerd[1592]: time="2025-11-01T00:27:13.730475688Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 1 00:27:13.781619 kubelet[2713]: I1101 00:27:13.781568 2713 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 1 00:27:13.825142 kubelet[2713]: I1101 00:27:13.825001 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e3bbd1b7-2cec-41ab-97aa-54499c93466d-config-volume\") pod \"coredns-668d6bf9bc-kjzn2\" (UID: \"e3bbd1b7-2cec-41ab-97aa-54499c93466d\") " pod="kube-system/coredns-668d6bf9bc-kjzn2" Nov 1 00:27:13.828324 kubelet[2713]: I1101 00:27:13.825597 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qq9q5\" (UniqueName: \"kubernetes.io/projected/e3bbd1b7-2cec-41ab-97aa-54499c93466d-kube-api-access-qq9q5\") pod \"coredns-668d6bf9bc-kjzn2\" (UID: \"e3bbd1b7-2cec-41ab-97aa-54499c93466d\") " pod="kube-system/coredns-668d6bf9bc-kjzn2" Nov 1 00:27:13.828324 kubelet[2713]: I1101 00:27:13.825642 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw949\" (UniqueName: \"kubernetes.io/projected/8ee5c64c-a2fc-4dbd-b9a2-77e7c3c0f3b8-kube-api-access-fw949\") pod \"calico-apiserver-65fc45bf6-gxk8k\" (UID: \"8ee5c64c-a2fc-4dbd-b9a2-77e7c3c0f3b8\") " pod="calico-apiserver/calico-apiserver-65fc45bf6-gxk8k" Nov 1 00:27:13.828324 kubelet[2713]: I1101 00:27:13.825668 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8ee5c64c-a2fc-4dbd-b9a2-77e7c3c0f3b8-calico-apiserver-certs\") pod \"calico-apiserver-65fc45bf6-gxk8k\" (UID: \"8ee5c64c-a2fc-4dbd-b9a2-77e7c3c0f3b8\") " pod="calico-apiserver/calico-apiserver-65fc45bf6-gxk8k" Nov 1 00:27:13.900238 containerd[1592]: time="2025-11-01T00:27:13.900188607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l9nqw,Uid:675112ea-20ac-4b20-b92c-b74dc58b95cd,Namespace:calico-system,Attempt:0,}" Nov 1 00:27:13.926133 kubelet[2713]: I1101 00:27:13.926070 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a7329f0c-4569-4192-ab57-1ba0d9bc5c3f-calico-apiserver-certs\") pod \"calico-apiserver-65fc45bf6-dz8ms\" (UID: \"a7329f0c-4569-4192-ab57-1ba0d9bc5c3f\") " pod="calico-apiserver/calico-apiserver-65fc45bf6-dz8ms" Nov 1 00:27:13.926133 kubelet[2713]: I1101 00:27:13.926115 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnnng\" (UniqueName: \"kubernetes.io/projected/4d8b0a34-66bd-4c22-a438-b5e5354489a4-kube-api-access-xnnng\") pod \"calico-apiserver-5fc976b46c-8vsw4\" (UID: \"4d8b0a34-66bd-4c22-a438-b5e5354489a4\") " pod="calico-apiserver/calico-apiserver-5fc976b46c-8vsw4" Nov 1 00:27:13.926133 kubelet[2713]: I1101 00:27:13.926132 2713 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4d8b0a34-66bd-4c22-a438-b5e5354489a4-calico-apiserver-certs\") pod \"calico-apiserver-5fc976b46c-8vsw4\" (UID: \"4d8b0a34-66bd-4c22-a438-b5e5354489a4\") " pod="calico-apiserver/calico-apiserver-5fc976b46c-8vsw4" Nov 1 00:27:13.926381 kubelet[2713]: I1101 00:27:13.926148 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72sqr\" (UniqueName: \"kubernetes.io/projected/57074b9c-532a-474a-a4c5-559d3798e3ac-kube-api-access-72sqr\") pod \"whisker-7f7f7788f-4djwn\" (UID: \"57074b9c-532a-474a-a4c5-559d3798e3ac\") " pod="calico-system/whisker-7f7f7788f-4djwn" Nov 1 00:27:13.926381 kubelet[2713]: I1101 00:27:13.926186 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/57074b9c-532a-474a-a4c5-559d3798e3ac-whisker-backend-key-pair\") pod \"whisker-7f7f7788f-4djwn\" (UID: \"57074b9c-532a-474a-a4c5-559d3798e3ac\") " pod="calico-system/whisker-7f7f7788f-4djwn" Nov 1 00:27:13.926381 kubelet[2713]: I1101 00:27:13.926200 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41180a49-a14f-492f-9746-dfd093b11440-config\") pod \"goldmane-666569f655-ksf9n\" (UID: \"41180a49-a14f-492f-9746-dfd093b11440\") " pod="calico-system/goldmane-666569f655-ksf9n" Nov 1 00:27:13.926381 kubelet[2713]: I1101 00:27:13.926228 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/41180a49-a14f-492f-9746-dfd093b11440-goldmane-key-pair\") pod \"goldmane-666569f655-ksf9n\" (UID: \"41180a49-a14f-492f-9746-dfd093b11440\") " pod="calico-system/goldmane-666569f655-ksf9n" Nov 1 00:27:13.926381 kubelet[2713]: I1101 00:27:13.926250 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bd83f5e8-3d82-42fe-a0b0-5807c8a2598f-config-volume\") pod \"coredns-668d6bf9bc-mfl96\" (UID: \"bd83f5e8-3d82-42fe-a0b0-5807c8a2598f\") " pod="kube-system/coredns-668d6bf9bc-mfl96" Nov 1 00:27:13.926553 kubelet[2713]: I1101 00:27:13.926289 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57074b9c-532a-474a-a4c5-559d3798e3ac-whisker-ca-bundle\") pod \"whisker-7f7f7788f-4djwn\" (UID: \"57074b9c-532a-474a-a4c5-559d3798e3ac\") " pod="calico-system/whisker-7f7f7788f-4djwn" Nov 1 00:27:13.926553 kubelet[2713]: I1101 00:27:13.926305 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41180a49-a14f-492f-9746-dfd093b11440-goldmane-ca-bundle\") pod \"goldmane-666569f655-ksf9n\" (UID: \"41180a49-a14f-492f-9746-dfd093b11440\") " pod="calico-system/goldmane-666569f655-ksf9n" Nov 1 00:27:13.926553 kubelet[2713]: I1101 00:27:13.926320 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcdsf\" (UniqueName: \"kubernetes.io/projected/bd83f5e8-3d82-42fe-a0b0-5807c8a2598f-kube-api-access-qcdsf\") pod \"coredns-668d6bf9bc-mfl96\" (UID: 
\"bd83f5e8-3d82-42fe-a0b0-5807c8a2598f\") " pod="kube-system/coredns-668d6bf9bc-mfl96" Nov 1 00:27:13.926553 kubelet[2713]: I1101 00:27:13.926334 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvlq9\" (UniqueName: \"kubernetes.io/projected/a7329f0c-4569-4192-ab57-1ba0d9bc5c3f-kube-api-access-xvlq9\") pod \"calico-apiserver-65fc45bf6-dz8ms\" (UID: \"a7329f0c-4569-4192-ab57-1ba0d9bc5c3f\") " pod="calico-apiserver/calico-apiserver-65fc45bf6-dz8ms" Nov 1 00:27:13.926553 kubelet[2713]: I1101 00:27:13.926353 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpqjq\" (UniqueName: \"kubernetes.io/projected/41180a49-a14f-492f-9746-dfd093b11440-kube-api-access-jpqjq\") pod \"goldmane-666569f655-ksf9n\" (UID: \"41180a49-a14f-492f-9746-dfd093b11440\") " pod="calico-system/goldmane-666569f655-ksf9n" Nov 1 00:27:13.926720 kubelet[2713]: I1101 00:27:13.926368 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65680b46-920b-40e7-93fd-698ef81e20c8-tigera-ca-bundle\") pod \"calico-kube-controllers-847b98fc4d-cw68d\" (UID: \"65680b46-920b-40e7-93fd-698ef81e20c8\") " pod="calico-system/calico-kube-controllers-847b98fc4d-cw68d" Nov 1 00:27:13.926720 kubelet[2713]: I1101 00:27:13.926386 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ct9s2\" (UniqueName: \"kubernetes.io/projected/65680b46-920b-40e7-93fd-698ef81e20c8-kube-api-access-ct9s2\") pod \"calico-kube-controllers-847b98fc4d-cw68d\" (UID: \"65680b46-920b-40e7-93fd-698ef81e20c8\") " pod="calico-system/calico-kube-controllers-847b98fc4d-cw68d" Nov 1 00:27:13.990700 kubelet[2713]: E1101 00:27:13.989510 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:27:13.994182 containerd[1592]: time="2025-11-01T00:27:13.994132645Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 1 00:27:14.022075 systemd-journald[1151]: Under memory pressure, flushing caches. Nov 1 00:27:14.019603 systemd-resolved[1476]: Under memory pressure, flushing caches. Nov 1 00:27:14.019658 systemd-resolved[1476]: Flushed all caches. 
Nov 1 00:27:14.086327 containerd[1592]: time="2025-11-01T00:27:14.086253790Z" level=error msg="Failed to destroy network for sandbox \"5caae547c56659473e7cbbb3afda497a5fb385690ae911da2b44308769a244e8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:27:14.089831 containerd[1592]: time="2025-11-01T00:27:14.089785440Z" level=error msg="encountered an error cleaning up failed sandbox \"5caae547c56659473e7cbbb3afda497a5fb385690ae911da2b44308769a244e8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:27:14.089890 containerd[1592]: time="2025-11-01T00:27:14.089846745Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l9nqw,Uid:675112ea-20ac-4b20-b92c-b74dc58b95cd,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5caae547c56659473e7cbbb3afda497a5fb385690ae911da2b44308769a244e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:27:14.099985 kubelet[2713]: E1101 00:27:14.099936 2713 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5caae547c56659473e7cbbb3afda497a5fb385690ae911da2b44308769a244e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:27:14.100070 kubelet[2713]: E1101 00:27:14.100015 2713 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5caae547c56659473e7cbbb3afda497a5fb385690ae911da2b44308769a244e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l9nqw" Nov 1 00:27:14.100070 kubelet[2713]: E1101 00:27:14.100063 2713 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5caae547c56659473e7cbbb3afda497a5fb385690ae911da2b44308769a244e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l9nqw" Nov 1 00:27:14.100137 kubelet[2713]: E1101 00:27:14.100117 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-l9nqw_calico-system(675112ea-20ac-4b20-b92c-b74dc58b95cd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-l9nqw_calico-system(675112ea-20ac-4b20-b92c-b74dc58b95cd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5caae547c56659473e7cbbb3afda497a5fb385690ae911da2b44308769a244e8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-l9nqw" 
podUID="675112ea-20ac-4b20-b92c-b74dc58b95cd" Nov 1 00:27:14.116877 kubelet[2713]: E1101 00:27:14.116839 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:27:14.117713 containerd[1592]: time="2025-11-01T00:27:14.117556520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kjzn2,Uid:e3bbd1b7-2cec-41ab-97aa-54499c93466d,Namespace:kube-system,Attempt:0,}" Nov 1 00:27:14.119527 containerd[1592]: time="2025-11-01T00:27:14.119478978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65fc45bf6-gxk8k,Uid:8ee5c64c-a2fc-4dbd-b9a2-77e7c3c0f3b8,Namespace:calico-apiserver,Attempt:0,}" Nov 1 00:27:14.125612 containerd[1592]: time="2025-11-01T00:27:14.125564221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-847b98fc4d-cw68d,Uid:65680b46-920b-40e7-93fd-698ef81e20c8,Namespace:calico-system,Attempt:0,}" Nov 1 00:27:14.130872 containerd[1592]: time="2025-11-01T00:27:14.130779320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f7f7788f-4djwn,Uid:57074b9c-532a-474a-a4c5-559d3798e3ac,Namespace:calico-system,Attempt:0,}" Nov 1 00:27:14.132169 kubelet[2713]: E1101 00:27:14.132134 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:27:14.132932 containerd[1592]: time="2025-11-01T00:27:14.132866338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mfl96,Uid:bd83f5e8-3d82-42fe-a0b0-5807c8a2598f,Namespace:kube-system,Attempt:0,}" Nov 1 00:27:14.135409 containerd[1592]: time="2025-11-01T00:27:14.135377342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-ksf9n,Uid:41180a49-a14f-492f-9746-dfd093b11440,Namespace:calico-system,Attempt:0,}" Nov 1 00:27:14.139457 containerd[1592]: time="2025-11-01T00:27:14.139204055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fc976b46c-8vsw4,Uid:4d8b0a34-66bd-4c22-a438-b5e5354489a4,Namespace:calico-apiserver,Attempt:0,}" Nov 1 00:27:14.139457 containerd[1592]: time="2025-11-01T00:27:14.139241465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65fc45bf6-dz8ms,Uid:a7329f0c-4569-4192-ab57-1ba0d9bc5c3f,Namespace:calico-apiserver,Attempt:0,}" Nov 1 00:27:14.251106 containerd[1592]: time="2025-11-01T00:27:14.250411586Z" level=error msg="Failed to destroy network for sandbox \"6d5b7efa90610db921992a055211b5a08ddb076485f1d878f9dafeb00a951a7f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:27:14.251106 containerd[1592]: time="2025-11-01T00:27:14.250951511Z" level=error msg="encountered an error cleaning up failed sandbox \"6d5b7efa90610db921992a055211b5a08ddb076485f1d878f9dafeb00a951a7f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:27:14.251106 containerd[1592]: time="2025-11-01T00:27:14.251007105Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kjzn2,Uid:e3bbd1b7-2cec-41ab-97aa-54499c93466d,Namespace:kube-system,Attempt:0,} 
failed, error" error="failed to setup network for sandbox \"6d5b7efa90610db921992a055211b5a08ddb076485f1d878f9dafeb00a951a7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:27:14.251996 kubelet[2713]: E1101 00:27:14.251338 2713 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d5b7efa90610db921992a055211b5a08ddb076485f1d878f9dafeb00a951a7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:27:14.251996 kubelet[2713]: E1101 00:27:14.251423 2713 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d5b7efa90610db921992a055211b5a08ddb076485f1d878f9dafeb00a951a7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-kjzn2" Nov 1 00:27:14.251996 kubelet[2713]: E1101 00:27:14.251457 2713 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d5b7efa90610db921992a055211b5a08ddb076485f1d878f9dafeb00a951a7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-kjzn2" Nov 1 00:27:14.252355 kubelet[2713]: E1101 00:27:14.251516 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-kjzn2_kube-system(e3bbd1b7-2cec-41ab-97aa-54499c93466d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-kjzn2_kube-system(e3bbd1b7-2cec-41ab-97aa-54499c93466d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6d5b7efa90610db921992a055211b5a08ddb076485f1d878f9dafeb00a951a7f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-kjzn2" podUID="e3bbd1b7-2cec-41ab-97aa-54499c93466d" Nov 1 00:27:14.322702 containerd[1592]: time="2025-11-01T00:27:14.322513402Z" level=error msg="Failed to destroy network for sandbox \"520d7b921d970f346287c741c3e2a1b1906196dbeaa9792caf5dd8e417b1083a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:27:14.323215 containerd[1592]: time="2025-11-01T00:27:14.323188569Z" level=error msg="encountered an error cleaning up failed sandbox \"520d7b921d970f346287c741c3e2a1b1906196dbeaa9792caf5dd8e417b1083a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:27:14.324116 containerd[1592]: time="2025-11-01T00:27:14.324083349Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-847b98fc4d-cw68d,Uid:65680b46-920b-40e7-93fd-698ef81e20c8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"520d7b921d970f346287c741c3e2a1b1906196dbeaa9792caf5dd8e417b1083a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:27:14.324864 kubelet[2713]: E1101 00:27:14.324661 2713 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"520d7b921d970f346287c741c3e2a1b1906196dbeaa9792caf5dd8e417b1083a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:27:14.324864 kubelet[2713]: E1101 00:27:14.324740 2713 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"520d7b921d970f346287c741c3e2a1b1906196dbeaa9792caf5dd8e417b1083a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-847b98fc4d-cw68d" Nov 1 00:27:14.324864 kubelet[2713]: E1101 00:27:14.324764 2713 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"520d7b921d970f346287c741c3e2a1b1906196dbeaa9792caf5dd8e417b1083a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-847b98fc4d-cw68d" Nov 1 00:27:14.325300 kubelet[2713]: E1101 00:27:14.324823 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-847b98fc4d-cw68d_calico-system(65680b46-920b-40e7-93fd-698ef81e20c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-847b98fc4d-cw68d_calico-system(65680b46-920b-40e7-93fd-698ef81e20c8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"520d7b921d970f346287c741c3e2a1b1906196dbeaa9792caf5dd8e417b1083a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-847b98fc4d-cw68d" podUID="65680b46-920b-40e7-93fd-698ef81e20c8" Nov 1 00:27:14.350569 containerd[1592]: time="2025-11-01T00:27:14.324451520Z" level=error msg="Failed to destroy network for sandbox \"c7ae21523b55482643e2c47fd21b51e54e57ae37b1e282ad64adecd2f4070b7a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:27:14.350569 containerd[1592]: time="2025-11-01T00:27:14.350079537Z" level=error msg="Failed to destroy network for sandbox \"90b05b84959d1bb88da14ece3bf4d05bee584a6eb80a5eeaf04cc111e14d40a7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 
00:27:14.351376 containerd[1592]: time="2025-11-01T00:27:14.351342157Z" level=error msg="encountered an error cleaning up failed sandbox \"90b05b84959d1bb88da14ece3bf4d05bee584a6eb80a5eeaf04cc111e14d40a7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:27:14.351527 containerd[1592]: time="2025-11-01T00:27:14.351495605Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-ksf9n,Uid:41180a49-a14f-492f-9746-dfd093b11440,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"90b05b84959d1bb88da14ece3bf4d05bee584a6eb80a5eeaf04cc111e14d40a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:27:14.353274 kubelet[2713]: E1101 00:27:14.351897 2713 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90b05b84959d1bb88da14ece3bf4d05bee584a6eb80a5eeaf04cc111e14d40a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:27:14.353274 kubelet[2713]: E1101 00:27:14.352001 2713 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90b05b84959d1bb88da14ece3bf4d05bee584a6eb80a5eeaf04cc111e14d40a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-ksf9n" Nov 1 00:27:14.353274 kubelet[2713]: E1101 00:27:14.352054 2713 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90b05b84959d1bb88da14ece3bf4d05bee584a6eb80a5eeaf04cc111e14d40a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-ksf9n" Nov 1 00:27:14.353423 containerd[1592]: time="2025-11-01T00:27:14.353156513Z" level=error msg="encountered an error cleaning up failed sandbox \"c7ae21523b55482643e2c47fd21b51e54e57ae37b1e282ad64adecd2f4070b7a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:27:14.353423 containerd[1592]: time="2025-11-01T00:27:14.353230491Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65fc45bf6-gxk8k,Uid:8ee5c64c-a2fc-4dbd-b9a2-77e7c3c0f3b8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c7ae21523b55482643e2c47fd21b51e54e57ae37b1e282ad64adecd2f4070b7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:27:14.353519 kubelet[2713]: E1101 00:27:14.352109 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"goldmane-666569f655-ksf9n_calico-system(41180a49-a14f-492f-9746-dfd093b11440)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-ksf9n_calico-system(41180a49-a14f-492f-9746-dfd093b11440)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"90b05b84959d1bb88da14ece3bf4d05bee584a6eb80a5eeaf04cc111e14d40a7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-ksf9n" podUID="41180a49-a14f-492f-9746-dfd093b11440" Nov 1 00:27:14.354258 kubelet[2713]: E1101 00:27:14.353852 2713 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7ae21523b55482643e2c47fd21b51e54e57ae37b1e282ad64adecd2f4070b7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:27:14.354258 kubelet[2713]: E1101 00:27:14.353994 2713 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7ae21523b55482643e2c47fd21b51e54e57ae37b1e282ad64adecd2f4070b7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65fc45bf6-gxk8k" Nov 1 00:27:14.354258 kubelet[2713]: E1101 00:27:14.354050 2713 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7ae21523b55482643e2c47fd21b51e54e57ae37b1e282ad64adecd2f4070b7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65fc45bf6-gxk8k" Nov 1 00:27:14.354530 kubelet[2713]: E1101 00:27:14.354147 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-65fc45bf6-gxk8k_calico-apiserver(8ee5c64c-a2fc-4dbd-b9a2-77e7c3c0f3b8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-65fc45bf6-gxk8k_calico-apiserver(8ee5c64c-a2fc-4dbd-b9a2-77e7c3c0f3b8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c7ae21523b55482643e2c47fd21b51e54e57ae37b1e282ad64adecd2f4070b7a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-65fc45bf6-gxk8k" podUID="8ee5c64c-a2fc-4dbd-b9a2-77e7c3c0f3b8" Nov 1 00:27:14.370724 containerd[1592]: time="2025-11-01T00:27:14.370084139Z" level=error msg="Failed to destroy network for sandbox \"e830ade982e87430a4c9d8103c67e25c0388a271f1b55b585584831f33a624c9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:27:14.370724 containerd[1592]: time="2025-11-01T00:27:14.370594387Z" level=error msg="encountered an error cleaning up failed sandbox 
\"e830ade982e87430a4c9d8103c67e25c0388a271f1b55b585584831f33a624c9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:27:14.370724 containerd[1592]: time="2025-11-01T00:27:14.370662344Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f7f7788f-4djwn,Uid:57074b9c-532a-474a-a4c5-559d3798e3ac,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e830ade982e87430a4c9d8103c67e25c0388a271f1b55b585584831f33a624c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:27:14.372424 kubelet[2713]: E1101 00:27:14.372353 2713 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e830ade982e87430a4c9d8103c67e25c0388a271f1b55b585584831f33a624c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:27:14.372509 kubelet[2713]: E1101 00:27:14.372438 2713 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e830ade982e87430a4c9d8103c67e25c0388a271f1b55b585584831f33a624c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7f7f7788f-4djwn" Nov 1 00:27:14.372509 kubelet[2713]: E1101 00:27:14.372463 2713 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e830ade982e87430a4c9d8103c67e25c0388a271f1b55b585584831f33a624c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7f7f7788f-4djwn" Nov 1 00:27:14.372588 kubelet[2713]: E1101 00:27:14.372518 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7f7f7788f-4djwn_calico-system(57074b9c-532a-474a-a4c5-559d3798e3ac)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7f7f7788f-4djwn_calico-system(57074b9c-532a-474a-a4c5-559d3798e3ac)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e830ade982e87430a4c9d8103c67e25c0388a271f1b55b585584831f33a624c9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7f7f7788f-4djwn" podUID="57074b9c-532a-474a-a4c5-559d3798e3ac" Nov 1 00:27:14.377119 containerd[1592]: time="2025-11-01T00:27:14.376992225Z" level=error msg="Failed to destroy network for sandbox \"d05aa82f8e40dceef812ba91b215ecea033e6b477020c6a1f3d697d586c713cb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:27:14.377436 containerd[1592]: time="2025-11-01T00:27:14.377400672Z" level=error 
msg="encountered an error cleaning up failed sandbox \"d05aa82f8e40dceef812ba91b215ecea033e6b477020c6a1f3d697d586c713cb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:27:14.377522 containerd[1592]: time="2025-11-01T00:27:14.377448632Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fc976b46c-8vsw4,Uid:4d8b0a34-66bd-4c22-a438-b5e5354489a4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d05aa82f8e40dceef812ba91b215ecea033e6b477020c6a1f3d697d586c713cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:27:14.377713 kubelet[2713]: E1101 00:27:14.377666 2713 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d05aa82f8e40dceef812ba91b215ecea033e6b477020c6a1f3d697d586c713cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:27:14.377891 kubelet[2713]: E1101 00:27:14.377722 2713 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d05aa82f8e40dceef812ba91b215ecea033e6b477020c6a1f3d697d586c713cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5fc976b46c-8vsw4" Nov 1 00:27:14.377891 kubelet[2713]: E1101 00:27:14.377749 2713 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d05aa82f8e40dceef812ba91b215ecea033e6b477020c6a1f3d697d586c713cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5fc976b46c-8vsw4" Nov 1 00:27:14.377891 kubelet[2713]: E1101 00:27:14.377793 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5fc976b46c-8vsw4_calico-apiserver(4d8b0a34-66bd-4c22-a438-b5e5354489a4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5fc976b46c-8vsw4_calico-apiserver(4d8b0a34-66bd-4c22-a438-b5e5354489a4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d05aa82f8e40dceef812ba91b215ecea033e6b477020c6a1f3d697d586c713cb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5fc976b46c-8vsw4" podUID="4d8b0a34-66bd-4c22-a438-b5e5354489a4" Nov 1 00:27:14.381327 containerd[1592]: time="2025-11-01T00:27:14.381300373Z" level=error msg="Failed to destroy network for sandbox \"f362570be3d3ee6e4abb56156f3350d5fab206f8c712f1e6694da29678f51f1e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" Nov 1 00:27:14.382098 containerd[1592]: time="2025-11-01T00:27:14.381627887Z" level=error msg="encountered an error cleaning up failed sandbox \"f362570be3d3ee6e4abb56156f3350d5fab206f8c712f1e6694da29678f51f1e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:27:14.382098 containerd[1592]: time="2025-11-01T00:27:14.381667732Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65fc45bf6-dz8ms,Uid:a7329f0c-4569-4192-ab57-1ba0d9bc5c3f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f362570be3d3ee6e4abb56156f3350d5fab206f8c712f1e6694da29678f51f1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:27:14.382215 kubelet[2713]: E1101 00:27:14.381852 2713 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f362570be3d3ee6e4abb56156f3350d5fab206f8c712f1e6694da29678f51f1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:27:14.382215 kubelet[2713]: E1101 00:27:14.381925 2713 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f362570be3d3ee6e4abb56156f3350d5fab206f8c712f1e6694da29678f51f1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65fc45bf6-dz8ms" Nov 1 00:27:14.382215 kubelet[2713]: E1101 00:27:14.381947 2713 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f362570be3d3ee6e4abb56156f3350d5fab206f8c712f1e6694da29678f51f1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65fc45bf6-dz8ms" Nov 1 00:27:14.382287 kubelet[2713]: E1101 00:27:14.381991 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-65fc45bf6-dz8ms_calico-apiserver(a7329f0c-4569-4192-ab57-1ba0d9bc5c3f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-65fc45bf6-dz8ms_calico-apiserver(a7329f0c-4569-4192-ab57-1ba0d9bc5c3f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f362570be3d3ee6e4abb56156f3350d5fab206f8c712f1e6694da29678f51f1e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-65fc45bf6-dz8ms" podUID="a7329f0c-4569-4192-ab57-1ba0d9bc5c3f" Nov 1 00:27:14.383164 containerd[1592]: time="2025-11-01T00:27:14.382984254Z" level=error msg="Failed to destroy network for sandbox \"5f08d0a3a1e2afebfee85bba8187b17241970ea7e57f2ab304de733052f32010\"" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:27:14.383541 containerd[1592]: time="2025-11-01T00:27:14.383493860Z" level=error msg="encountered an error cleaning up failed sandbox \"5f08d0a3a1e2afebfee85bba8187b17241970ea7e57f2ab304de733052f32010\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:27:14.383541 containerd[1592]: time="2025-11-01T00:27:14.383541910Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mfl96,Uid:bd83f5e8-3d82-42fe-a0b0-5807c8a2598f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5f08d0a3a1e2afebfee85bba8187b17241970ea7e57f2ab304de733052f32010\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:27:14.383753 kubelet[2713]: E1101 00:27:14.383692 2713 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f08d0a3a1e2afebfee85bba8187b17241970ea7e57f2ab304de733052f32010\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:27:14.383753 kubelet[2713]: E1101 00:27:14.383737 2713 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f08d0a3a1e2afebfee85bba8187b17241970ea7e57f2ab304de733052f32010\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-mfl96" Nov 1 00:27:14.383833 kubelet[2713]: E1101 00:27:14.383759 2713 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f08d0a3a1e2afebfee85bba8187b17241970ea7e57f2ab304de733052f32010\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-mfl96" Nov 1 00:27:14.383833 kubelet[2713]: E1101 00:27:14.383808 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-mfl96_kube-system(bd83f5e8-3d82-42fe-a0b0-5807c8a2598f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-mfl96_kube-system(bd83f5e8-3d82-42fe-a0b0-5807c8a2598f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5f08d0a3a1e2afebfee85bba8187b17241970ea7e57f2ab304de733052f32010\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-mfl96" podUID="bd83f5e8-3d82-42fe-a0b0-5807c8a2598f" Nov 1 00:27:14.740058 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5caae547c56659473e7cbbb3afda497a5fb385690ae911da2b44308769a244e8-shm.mount: Deactivated successfully. 
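Every RunPodSandbox failure in this stretch has the same root cause: the Calico CNI plugin stats /var/lib/calico/nodename to learn the node's name, and that file is only written once the calico/node container is up. Until then, every CNI add and delete for the pending pods (coredns, calico-apiserver, whisker, goldmane, calico-kube-controllers, csi-node-driver) fails, containerd marks each sandbox SANDBOX_UNKNOWN, and the kubelet retries. A minimal pre-flight check along those lines, with a hypothetical calico_node_ready helper; the path is from the log, the logic illustrative:

    import sys

    NODENAME = "/var/lib/calico/nodename"

    def calico_node_ready(path: str = NODENAME) -> bool:
        """True once calico/node has written its node name for the CNI plugin."""
        try:
            with open(path) as f:
                return bool(f.read().strip())
        except FileNotFoundError:
            return False  # -> "stat ...: no such file or directory" in the plugin

    if __name__ == "__main__":
        sys.exit(0 if calico_node_ready() else 1)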
Nov 1 00:27:14.993344 kubelet[2713]: I1101 00:27:14.993189 2713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5caae547c56659473e7cbbb3afda497a5fb385690ae911da2b44308769a244e8" Nov 1 00:27:14.995680 kubelet[2713]: I1101 00:27:14.994564 2713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d05aa82f8e40dceef812ba91b215ecea033e6b477020c6a1f3d697d586c713cb" Nov 1 00:27:14.996980 kubelet[2713]: I1101 00:27:14.996912 2713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7ae21523b55482643e2c47fd21b51e54e57ae37b1e282ad64adecd2f4070b7a" Nov 1 00:27:15.000227 containerd[1592]: time="2025-11-01T00:27:14.998323940Z" level=info msg="StopPodSandbox for \"d05aa82f8e40dceef812ba91b215ecea033e6b477020c6a1f3d697d586c713cb\"" Nov 1 00:27:15.000227 containerd[1592]: time="2025-11-01T00:27:14.998417736Z" level=info msg="StopPodSandbox for \"c7ae21523b55482643e2c47fd21b51e54e57ae37b1e282ad64adecd2f4070b7a\"" Nov 1 00:27:15.000835 kubelet[2713]: I1101 00:27:14.998347 2713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d5b7efa90610db921992a055211b5a08ddb076485f1d878f9dafeb00a951a7f" Nov 1 00:27:15.001097 containerd[1592]: time="2025-11-01T00:27:15.001064174Z" level=info msg="StopPodSandbox for \"6d5b7efa90610db921992a055211b5a08ddb076485f1d878f9dafeb00a951a7f\"" Nov 1 00:27:15.001384 containerd[1592]: time="2025-11-01T00:27:15.001355992Z" level=info msg="StopPodSandbox for \"5caae547c56659473e7cbbb3afda497a5fb385690ae911da2b44308769a244e8\"" Nov 1 00:27:15.002422 containerd[1592]: time="2025-11-01T00:27:15.002329289Z" level=info msg="Ensure that sandbox d05aa82f8e40dceef812ba91b215ecea033e6b477020c6a1f3d697d586c713cb in task-service has been cleanup successfully" Nov 1 00:27:15.002673 containerd[1592]: time="2025-11-01T00:27:15.002336693Z" level=info msg="Ensure that sandbox 5caae547c56659473e7cbbb3afda497a5fb385690ae911da2b44308769a244e8 in task-service has been cleanup successfully" Nov 1 00:27:15.005210 containerd[1592]: time="2025-11-01T00:27:15.002327726Z" level=info msg="Ensure that sandbox 6d5b7efa90610db921992a055211b5a08ddb076485f1d878f9dafeb00a951a7f in task-service has been cleanup successfully" Nov 1 00:27:15.005298 kubelet[2713]: I1101 00:27:15.005060 2713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f362570be3d3ee6e4abb56156f3350d5fab206f8c712f1e6694da29678f51f1e" Nov 1 00:27:15.007508 containerd[1592]: time="2025-11-01T00:27:15.007456021Z" level=info msg="StopPodSandbox for \"f362570be3d3ee6e4abb56156f3350d5fab206f8c712f1e6694da29678f51f1e\"" Nov 1 00:27:15.007725 containerd[1592]: time="2025-11-01T00:27:15.007695300Z" level=info msg="Ensure that sandbox f362570be3d3ee6e4abb56156f3350d5fab206f8c712f1e6694da29678f51f1e in task-service has been cleanup successfully" Nov 1 00:27:15.009717 containerd[1592]: time="2025-11-01T00:27:15.009678212Z" level=info msg="Ensure that sandbox c7ae21523b55482643e2c47fd21b51e54e57ae37b1e282ad64adecd2f4070b7a in task-service has been cleanup successfully" Nov 1 00:27:15.013099 kubelet[2713]: I1101 00:27:15.013001 2713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90b05b84959d1bb88da14ece3bf4d05bee584a6eb80a5eeaf04cc111e14d40a7" Nov 1 00:27:15.014142 containerd[1592]: time="2025-11-01T00:27:15.013986218Z" level=info msg="StopPodSandbox for \"90b05b84959d1bb88da14ece3bf4d05bee584a6eb80a5eeaf04cc111e14d40a7\"" Nov 1 00:27:15.014462 containerd[1592]: 
time="2025-11-01T00:27:15.014442936Z" level=info msg="Ensure that sandbox 90b05b84959d1bb88da14ece3bf4d05bee584a6eb80a5eeaf04cc111e14d40a7 in task-service has been cleanup successfully" Nov 1 00:27:15.017949 kubelet[2713]: I1101 00:27:15.017904 2713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f08d0a3a1e2afebfee85bba8187b17241970ea7e57f2ab304de733052f32010" Nov 1 00:27:15.018921 containerd[1592]: time="2025-11-01T00:27:15.018839969Z" level=info msg="StopPodSandbox for \"5f08d0a3a1e2afebfee85bba8187b17241970ea7e57f2ab304de733052f32010\"" Nov 1 00:27:15.019185 containerd[1592]: time="2025-11-01T00:27:15.019127098Z" level=info msg="Ensure that sandbox 5f08d0a3a1e2afebfee85bba8187b17241970ea7e57f2ab304de733052f32010 in task-service has been cleanup successfully" Nov 1 00:27:15.021911 kubelet[2713]: I1101 00:27:15.021812 2713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e830ade982e87430a4c9d8103c67e25c0388a271f1b55b585584831f33a624c9" Nov 1 00:27:15.024579 containerd[1592]: time="2025-11-01T00:27:15.023782386Z" level=info msg="StopPodSandbox for \"e830ade982e87430a4c9d8103c67e25c0388a271f1b55b585584831f33a624c9\"" Nov 1 00:27:15.024579 containerd[1592]: time="2025-11-01T00:27:15.024008280Z" level=info msg="Ensure that sandbox e830ade982e87430a4c9d8103c67e25c0388a271f1b55b585584831f33a624c9 in task-service has been cleanup successfully" Nov 1 00:27:15.026109 kubelet[2713]: I1101 00:27:15.025823 2713 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="520d7b921d970f346287c741c3e2a1b1906196dbeaa9792caf5dd8e417b1083a" Nov 1 00:27:15.031410 containerd[1592]: time="2025-11-01T00:27:15.031341925Z" level=info msg="StopPodSandbox for \"520d7b921d970f346287c741c3e2a1b1906196dbeaa9792caf5dd8e417b1083a\"" Nov 1 00:27:15.031828 containerd[1592]: time="2025-11-01T00:27:15.031797761Z" level=info msg="Ensure that sandbox 520d7b921d970f346287c741c3e2a1b1906196dbeaa9792caf5dd8e417b1083a in task-service has been cleanup successfully" Nov 1 00:27:15.087610 containerd[1592]: time="2025-11-01T00:27:15.085847562Z" level=error msg="StopPodSandbox for \"5caae547c56659473e7cbbb3afda497a5fb385690ae911da2b44308769a244e8\" failed" error="failed to destroy network for sandbox \"5caae547c56659473e7cbbb3afda497a5fb385690ae911da2b44308769a244e8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:27:15.088729 kubelet[2713]: E1101 00:27:15.087939 2713 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5caae547c56659473e7cbbb3afda497a5fb385690ae911da2b44308769a244e8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5caae547c56659473e7cbbb3afda497a5fb385690ae911da2b44308769a244e8" Nov 1 00:27:15.088729 kubelet[2713]: E1101 00:27:15.088159 2713 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5caae547c56659473e7cbbb3afda497a5fb385690ae911da2b44308769a244e8"} Nov 1 00:27:15.088729 kubelet[2713]: E1101 00:27:15.088269 2713 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"675112ea-20ac-4b20-b92c-b74dc58b95cd\" with KillPodSandboxError: \"rpc error: code = Unknown desc 
= failed to destroy network for sandbox \\\"5caae547c56659473e7cbbb3afda497a5fb385690ae911da2b44308769a244e8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:27:15.088729 kubelet[2713]: E1101 00:27:15.088343 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"675112ea-20ac-4b20-b92c-b74dc58b95cd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5caae547c56659473e7cbbb3afda497a5fb385690ae911da2b44308769a244e8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-l9nqw" podUID="675112ea-20ac-4b20-b92c-b74dc58b95cd" Nov 1 00:27:15.098556 containerd[1592]: time="2025-11-01T00:27:15.098458523Z" level=error msg="StopPodSandbox for \"c7ae21523b55482643e2c47fd21b51e54e57ae37b1e282ad64adecd2f4070b7a\" failed" error="failed to destroy network for sandbox \"c7ae21523b55482643e2c47fd21b51e54e57ae37b1e282ad64adecd2f4070b7a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:27:15.098935 kubelet[2713]: E1101 00:27:15.098859 2713 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c7ae21523b55482643e2c47fd21b51e54e57ae37b1e282ad64adecd2f4070b7a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c7ae21523b55482643e2c47fd21b51e54e57ae37b1e282ad64adecd2f4070b7a" Nov 1 00:27:15.099011 kubelet[2713]: E1101 00:27:15.098953 2713 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c7ae21523b55482643e2c47fd21b51e54e57ae37b1e282ad64adecd2f4070b7a"} Nov 1 00:27:15.099097 kubelet[2713]: E1101 00:27:15.099009 2713 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8ee5c64c-a2fc-4dbd-b9a2-77e7c3c0f3b8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c7ae21523b55482643e2c47fd21b51e54e57ae37b1e282ad64adecd2f4070b7a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:27:15.099097 kubelet[2713]: E1101 00:27:15.099078 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8ee5c64c-a2fc-4dbd-b9a2-77e7c3c0f3b8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c7ae21523b55482643e2c47fd21b51e54e57ae37b1e282ad64adecd2f4070b7a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-65fc45bf6-gxk8k" podUID="8ee5c64c-a2fc-4dbd-b9a2-77e7c3c0f3b8" Nov 1 00:27:15.106781 containerd[1592]: time="2025-11-01T00:27:15.106724969Z" level=error msg="StopPodSandbox for 
\"d05aa82f8e40dceef812ba91b215ecea033e6b477020c6a1f3d697d586c713cb\" failed" error="failed to destroy network for sandbox \"d05aa82f8e40dceef812ba91b215ecea033e6b477020c6a1f3d697d586c713cb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:27:15.107378 kubelet[2713]: E1101 00:27:15.107194 2713 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d05aa82f8e40dceef812ba91b215ecea033e6b477020c6a1f3d697d586c713cb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d05aa82f8e40dceef812ba91b215ecea033e6b477020c6a1f3d697d586c713cb" Nov 1 00:27:15.107378 kubelet[2713]: E1101 00:27:15.107251 2713 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d05aa82f8e40dceef812ba91b215ecea033e6b477020c6a1f3d697d586c713cb"} Nov 1 00:27:15.107378 kubelet[2713]: E1101 00:27:15.107288 2713 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4d8b0a34-66bd-4c22-a438-b5e5354489a4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d05aa82f8e40dceef812ba91b215ecea033e6b477020c6a1f3d697d586c713cb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:27:15.107378 kubelet[2713]: E1101 00:27:15.107321 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4d8b0a34-66bd-4c22-a438-b5e5354489a4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d05aa82f8e40dceef812ba91b215ecea033e6b477020c6a1f3d697d586c713cb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5fc976b46c-8vsw4" podUID="4d8b0a34-66bd-4c22-a438-b5e5354489a4" Nov 1 00:27:15.122490 containerd[1592]: time="2025-11-01T00:27:15.122412825Z" level=error msg="StopPodSandbox for \"5f08d0a3a1e2afebfee85bba8187b17241970ea7e57f2ab304de733052f32010\" failed" error="failed to destroy network for sandbox \"5f08d0a3a1e2afebfee85bba8187b17241970ea7e57f2ab304de733052f32010\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:27:15.122727 kubelet[2713]: E1101 00:27:15.122683 2713 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5f08d0a3a1e2afebfee85bba8187b17241970ea7e57f2ab304de733052f32010\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5f08d0a3a1e2afebfee85bba8187b17241970ea7e57f2ab304de733052f32010" Nov 1 00:27:15.122821 kubelet[2713]: E1101 00:27:15.122744 2713 kuberuntime_manager.go:1546] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"5f08d0a3a1e2afebfee85bba8187b17241970ea7e57f2ab304de733052f32010"} Nov 1 00:27:15.122821 kubelet[2713]: E1101 00:27:15.122782 2713 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bd83f5e8-3d82-42fe-a0b0-5807c8a2598f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5f08d0a3a1e2afebfee85bba8187b17241970ea7e57f2ab304de733052f32010\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:27:15.122821 kubelet[2713]: E1101 00:27:15.122812 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bd83f5e8-3d82-42fe-a0b0-5807c8a2598f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5f08d0a3a1e2afebfee85bba8187b17241970ea7e57f2ab304de733052f32010\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-mfl96" podUID="bd83f5e8-3d82-42fe-a0b0-5807c8a2598f" Nov 1 00:27:15.123653 containerd[1592]: time="2025-11-01T00:27:15.123612808Z" level=error msg="StopPodSandbox for \"f362570be3d3ee6e4abb56156f3350d5fab206f8c712f1e6694da29678f51f1e\" failed" error="failed to destroy network for sandbox \"f362570be3d3ee6e4abb56156f3350d5fab206f8c712f1e6694da29678f51f1e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:27:15.123766 kubelet[2713]: E1101 00:27:15.123741 2713 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f362570be3d3ee6e4abb56156f3350d5fab206f8c712f1e6694da29678f51f1e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f362570be3d3ee6e4abb56156f3350d5fab206f8c712f1e6694da29678f51f1e" Nov 1 00:27:15.123828 kubelet[2713]: E1101 00:27:15.123771 2713 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f362570be3d3ee6e4abb56156f3350d5fab206f8c712f1e6694da29678f51f1e"} Nov 1 00:27:15.123828 kubelet[2713]: E1101 00:27:15.123794 2713 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a7329f0c-4569-4192-ab57-1ba0d9bc5c3f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f362570be3d3ee6e4abb56156f3350d5fab206f8c712f1e6694da29678f51f1e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:27:15.123927 kubelet[2713]: E1101 00:27:15.123811 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a7329f0c-4569-4192-ab57-1ba0d9bc5c3f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f362570be3d3ee6e4abb56156f3350d5fab206f8c712f1e6694da29678f51f1e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-65fc45bf6-dz8ms" podUID="a7329f0c-4569-4192-ab57-1ba0d9bc5c3f" Nov 1 00:27:15.125755 containerd[1592]: time="2025-11-01T00:27:15.125687492Z" level=error msg="StopPodSandbox for \"90b05b84959d1bb88da14ece3bf4d05bee584a6eb80a5eeaf04cc111e14d40a7\" failed" error="failed to destroy network for sandbox \"90b05b84959d1bb88da14ece3bf4d05bee584a6eb80a5eeaf04cc111e14d40a7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:27:15.125952 containerd[1592]: time="2025-11-01T00:27:15.125898047Z" level=error msg="StopPodSandbox for \"6d5b7efa90610db921992a055211b5a08ddb076485f1d878f9dafeb00a951a7f\" failed" error="failed to destroy network for sandbox \"6d5b7efa90610db921992a055211b5a08ddb076485f1d878f9dafeb00a951a7f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:27:15.126247 kubelet[2713]: E1101 00:27:15.126002 2713 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"90b05b84959d1bb88da14ece3bf4d05bee584a6eb80a5eeaf04cc111e14d40a7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="90b05b84959d1bb88da14ece3bf4d05bee584a6eb80a5eeaf04cc111e14d40a7" Nov 1 00:27:15.126247 kubelet[2713]: E1101 00:27:15.126091 2713 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6d5b7efa90610db921992a055211b5a08ddb076485f1d878f9dafeb00a951a7f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6d5b7efa90610db921992a055211b5a08ddb076485f1d878f9dafeb00a951a7f" Nov 1 00:27:15.126247 kubelet[2713]: E1101 00:27:15.126105 2713 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"90b05b84959d1bb88da14ece3bf4d05bee584a6eb80a5eeaf04cc111e14d40a7"} Nov 1 00:27:15.126247 kubelet[2713]: E1101 00:27:15.126126 2713 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6d5b7efa90610db921992a055211b5a08ddb076485f1d878f9dafeb00a951a7f"} Nov 1 00:27:15.126247 kubelet[2713]: E1101 00:27:15.126153 2713 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e3bbd1b7-2cec-41ab-97aa-54499c93466d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6d5b7efa90610db921992a055211b5a08ddb076485f1d878f9dafeb00a951a7f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:27:15.126419 kubelet[2713]: E1101 00:27:15.126158 2713 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"41180a49-a14f-492f-9746-dfd093b11440\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"90b05b84959d1bb88da14ece3bf4d05bee584a6eb80a5eeaf04cc111e14d40a7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:27:15.126419 kubelet[2713]: E1101 00:27:15.126182 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e3bbd1b7-2cec-41ab-97aa-54499c93466d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6d5b7efa90610db921992a055211b5a08ddb076485f1d878f9dafeb00a951a7f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-kjzn2" podUID="e3bbd1b7-2cec-41ab-97aa-54499c93466d" Nov 1 00:27:15.126419 kubelet[2713]: E1101 00:27:15.126193 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"41180a49-a14f-492f-9746-dfd093b11440\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"90b05b84959d1bb88da14ece3bf4d05bee584a6eb80a5eeaf04cc111e14d40a7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-ksf9n" podUID="41180a49-a14f-492f-9746-dfd093b11440" Nov 1 00:27:15.130046 containerd[1592]: time="2025-11-01T00:27:15.129635062Z" level=error msg="StopPodSandbox for \"520d7b921d970f346287c741c3e2a1b1906196dbeaa9792caf5dd8e417b1083a\" failed" error="failed to destroy network for sandbox \"520d7b921d970f346287c741c3e2a1b1906196dbeaa9792caf5dd8e417b1083a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:27:15.130117 kubelet[2713]: E1101 00:27:15.129816 2713 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"520d7b921d970f346287c741c3e2a1b1906196dbeaa9792caf5dd8e417b1083a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="520d7b921d970f346287c741c3e2a1b1906196dbeaa9792caf5dd8e417b1083a" Nov 1 00:27:15.130117 kubelet[2713]: E1101 00:27:15.129857 2713 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"520d7b921d970f346287c741c3e2a1b1906196dbeaa9792caf5dd8e417b1083a"} Nov 1 00:27:15.130117 kubelet[2713]: E1101 00:27:15.129895 2713 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"65680b46-920b-40e7-93fd-698ef81e20c8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"520d7b921d970f346287c741c3e2a1b1906196dbeaa9792caf5dd8e417b1083a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:27:15.130117 kubelet[2713]: E1101 00:27:15.129920 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"65680b46-920b-40e7-93fd-698ef81e20c8\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"520d7b921d970f346287c741c3e2a1b1906196dbeaa9792caf5dd8e417b1083a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-847b98fc4d-cw68d" podUID="65680b46-920b-40e7-93fd-698ef81e20c8" Nov 1 00:27:15.131184 containerd[1592]: time="2025-11-01T00:27:15.131155145Z" level=error msg="StopPodSandbox for \"e830ade982e87430a4c9d8103c67e25c0388a271f1b55b585584831f33a624c9\" failed" error="failed to destroy network for sandbox \"e830ade982e87430a4c9d8103c67e25c0388a271f1b55b585584831f33a624c9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:27:15.131339 kubelet[2713]: E1101 00:27:15.131308 2713 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e830ade982e87430a4c9d8103c67e25c0388a271f1b55b585584831f33a624c9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e830ade982e87430a4c9d8103c67e25c0388a271f1b55b585584831f33a624c9" Nov 1 00:27:15.131386 kubelet[2713]: E1101 00:27:15.131341 2713 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e830ade982e87430a4c9d8103c67e25c0388a271f1b55b585584831f33a624c9"} Nov 1 00:27:15.131386 kubelet[2713]: E1101 00:27:15.131362 2713 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"57074b9c-532a-474a-a4c5-559d3798e3ac\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e830ade982e87430a4c9d8103c67e25c0388a271f1b55b585584831f33a624c9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:27:15.131467 kubelet[2713]: E1101 00:27:15.131387 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"57074b9c-532a-474a-a4c5-559d3798e3ac\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e830ade982e87430a4c9d8103c67e25c0388a271f1b55b585584831f33a624c9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7f7f7788f-4djwn" podUID="57074b9c-532a-474a-a4c5-559d3798e3ac" Nov 1 00:27:16.066183 systemd-resolved[1476]: Under memory pressure, flushing caches. Nov 1 00:27:16.066217 systemd-resolved[1476]: Flushed all caches. Nov 1 00:27:16.069051 systemd-journald[1151]: Under memory pressure, flushing caches. Nov 1 00:27:18.335623 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1664266002.mount: Deactivated successfully. 
Nov 1 00:27:20.907891 containerd[1592]: time="2025-11-01T00:27:20.907819522Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:27:20.911250 containerd[1592]: time="2025-11-01T00:27:20.911185779Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 1 00:27:20.914901 containerd[1592]: time="2025-11-01T00:27:20.914825158Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:27:20.918944 containerd[1592]: time="2025-11-01T00:27:20.918903040Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:27:20.919532 containerd[1592]: time="2025-11-01T00:27:20.919474362Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 6.925296853s" Nov 1 00:27:20.919532 containerd[1592]: time="2025-11-01T00:27:20.919510199Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 1 00:27:20.936520 containerd[1592]: time="2025-11-01T00:27:20.936417225Z" level=info msg="CreateContainer within sandbox \"63d3d1a173c1f1f7819527291d2dc4797f0d744f303a72102bc0f522f9fc4d6d\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 1 00:27:20.963465 containerd[1592]: time="2025-11-01T00:27:20.962437680Z" level=info msg="CreateContainer within sandbox \"63d3d1a173c1f1f7819527291d2dc4797f0d744f303a72102bc0f522f9fc4d6d\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"aa38169c908b706fe40f95dd7383695f382399f9215a2415a3b41f5907f52073\"" Nov 1 00:27:20.963465 containerd[1592]: time="2025-11-01T00:27:20.963014443Z" level=info msg="StartContainer for \"aa38169c908b706fe40f95dd7383695f382399f9215a2415a3b41f5907f52073\"" Nov 1 00:27:21.090901 containerd[1592]: time="2025-11-01T00:27:21.090851475Z" level=info msg="StartContainer for \"aa38169c908b706fe40f95dd7383695f382399f9215a2415a3b41f5907f52073\" returns successfully" Nov 1 00:27:21.195856 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 1 00:27:21.196057 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Nov 1 00:27:21.232525 systemd[1]: Started sshd@7-10.0.0.119:22-10.0.0.1:49508.service - OpenSSH per-connection server daemon (10.0.0.1:49508). Nov 1 00:27:21.313976 sshd[4066]: Accepted publickey for core from 10.0.0.1 port 49508 ssh2: RSA SHA256:jhdmrhZXRbwOcBbBZQf03C3Vb9eSOfvUjDz7MOKA8jE Nov 1 00:27:21.318959 sshd[4066]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:27:21.330811 containerd[1592]: time="2025-11-01T00:27:21.330305266Z" level=info msg="StopPodSandbox for \"e830ade982e87430a4c9d8103c67e25c0388a271f1b55b585584831f33a624c9\"" Nov 1 00:27:21.339133 systemd-logind[1577]: New session 8 of user core. 
Nov 1 00:27:21.342233 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 1 00:27:21.575220 sshd[4066]: pam_unix(sshd:session): session closed for user core Nov 1 00:27:21.581134 systemd-logind[1577]: Session 8 logged out. Waiting for processes to exit. Nov 1 00:27:21.581572 systemd[1]: sshd@7-10.0.0.119:22-10.0.0.1:49508.service: Deactivated successfully. Nov 1 00:27:21.586431 systemd[1]: session-8.scope: Deactivated successfully. Nov 1 00:27:21.590518 systemd-logind[1577]: Removed session 8. Nov 1 00:27:21.597941 containerd[1592]: 2025-11-01 00:27:21.466 [INFO][4088] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e830ade982e87430a4c9d8103c67e25c0388a271f1b55b585584831f33a624c9" Nov 1 00:27:21.597941 containerd[1592]: 2025-11-01 00:27:21.467 [INFO][4088] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e830ade982e87430a4c9d8103c67e25c0388a271f1b55b585584831f33a624c9" iface="eth0" netns="/var/run/netns/cni-236ea55e-a3b5-cbc3-b9b2-53aedf56d4ed" Nov 1 00:27:21.597941 containerd[1592]: 2025-11-01 00:27:21.467 [INFO][4088] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e830ade982e87430a4c9d8103c67e25c0388a271f1b55b585584831f33a624c9" iface="eth0" netns="/var/run/netns/cni-236ea55e-a3b5-cbc3-b9b2-53aedf56d4ed" Nov 1 00:27:21.597941 containerd[1592]: 2025-11-01 00:27:21.468 [INFO][4088] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e830ade982e87430a4c9d8103c67e25c0388a271f1b55b585584831f33a624c9" iface="eth0" netns="/var/run/netns/cni-236ea55e-a3b5-cbc3-b9b2-53aedf56d4ed" Nov 1 00:27:21.597941 containerd[1592]: 2025-11-01 00:27:21.468 [INFO][4088] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e830ade982e87430a4c9d8103c67e25c0388a271f1b55b585584831f33a624c9" Nov 1 00:27:21.597941 containerd[1592]: 2025-11-01 00:27:21.468 [INFO][4088] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e830ade982e87430a4c9d8103c67e25c0388a271f1b55b585584831f33a624c9" Nov 1 00:27:21.597941 containerd[1592]: 2025-11-01 00:27:21.573 [INFO][4107] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e830ade982e87430a4c9d8103c67e25c0388a271f1b55b585584831f33a624c9" HandleID="k8s-pod-network.e830ade982e87430a4c9d8103c67e25c0388a271f1b55b585584831f33a624c9" Workload="localhost-k8s-whisker--7f7f7788f--4djwn-eth0" Nov 1 00:27:21.597941 containerd[1592]: 2025-11-01 00:27:21.574 [INFO][4107] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:27:21.597941 containerd[1592]: 2025-11-01 00:27:21.574 [INFO][4107] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:27:21.597941 containerd[1592]: 2025-11-01 00:27:21.587 [WARNING][4107] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e830ade982e87430a4c9d8103c67e25c0388a271f1b55b585584831f33a624c9" HandleID="k8s-pod-network.e830ade982e87430a4c9d8103c67e25c0388a271f1b55b585584831f33a624c9" Workload="localhost-k8s-whisker--7f7f7788f--4djwn-eth0" Nov 1 00:27:21.597941 containerd[1592]: 2025-11-01 00:27:21.587 [INFO][4107] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e830ade982e87430a4c9d8103c67e25c0388a271f1b55b585584831f33a624c9" HandleID="k8s-pod-network.e830ade982e87430a4c9d8103c67e25c0388a271f1b55b585584831f33a624c9" Workload="localhost-k8s-whisker--7f7f7788f--4djwn-eth0" Nov 1 00:27:21.597941 containerd[1592]: 2025-11-01 00:27:21.589 [INFO][4107] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:27:21.597941 containerd[1592]: 2025-11-01 00:27:21.594 [INFO][4088] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e830ade982e87430a4c9d8103c67e25c0388a271f1b55b585584831f33a624c9" Nov 1 00:27:21.598385 containerd[1592]: time="2025-11-01T00:27:21.598165488Z" level=info msg="TearDown network for sandbox \"e830ade982e87430a4c9d8103c67e25c0388a271f1b55b585584831f33a624c9\" successfully" Nov 1 00:27:21.598385 containerd[1592]: time="2025-11-01T00:27:21.598198280Z" level=info msg="StopPodSandbox for \"e830ade982e87430a4c9d8103c67e25c0388a271f1b55b585584831f33a624c9\" returns successfully" Nov 1 00:27:21.682577 kubelet[2713]: I1101 00:27:21.682499 2713 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57074b9c-532a-474a-a4c5-559d3798e3ac-whisker-ca-bundle\") pod \"57074b9c-532a-474a-a4c5-559d3798e3ac\" (UID: \"57074b9c-532a-474a-a4c5-559d3798e3ac\") " Nov 1 00:27:21.682577 kubelet[2713]: I1101 00:27:21.682575 2713 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72sqr\" (UniqueName: \"kubernetes.io/projected/57074b9c-532a-474a-a4c5-559d3798e3ac-kube-api-access-72sqr\") pod \"57074b9c-532a-474a-a4c5-559d3798e3ac\" (UID: \"57074b9c-532a-474a-a4c5-559d3798e3ac\") " Nov 1 00:27:21.683239 kubelet[2713]: I1101 00:27:21.682612 2713 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/57074b9c-532a-474a-a4c5-559d3798e3ac-whisker-backend-key-pair\") pod \"57074b9c-532a-474a-a4c5-559d3798e3ac\" (UID: \"57074b9c-532a-474a-a4c5-559d3798e3ac\") " Nov 1 00:27:21.683239 kubelet[2713]: I1101 00:27:21.683154 2713 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57074b9c-532a-474a-a4c5-559d3798e3ac-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "57074b9c-532a-474a-a4c5-559d3798e3ac" (UID: "57074b9c-532a-474a-a4c5-559d3798e3ac"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 1 00:27:21.686299 kubelet[2713]: I1101 00:27:21.686264 2713 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57074b9c-532a-474a-a4c5-559d3798e3ac-kube-api-access-72sqr" (OuterVolumeSpecName: "kube-api-access-72sqr") pod "57074b9c-532a-474a-a4c5-559d3798e3ac" (UID: "57074b9c-532a-474a-a4c5-559d3798e3ac"). InnerVolumeSpecName "kube-api-access-72sqr". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 00:27:21.686616 kubelet[2713]: I1101 00:27:21.686572 2713 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57074b9c-532a-474a-a4c5-559d3798e3ac-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "57074b9c-532a-474a-a4c5-559d3798e3ac" (UID: "57074b9c-532a-474a-a4c5-559d3798e3ac"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 1 00:27:21.783613 kubelet[2713]: I1101 00:27:21.783543 2713 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57074b9c-532a-474a-a4c5-559d3798e3ac-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Nov 1 00:27:21.783613 kubelet[2713]: I1101 00:27:21.783593 2713 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-72sqr\" (UniqueName: \"kubernetes.io/projected/57074b9c-532a-474a-a4c5-559d3798e3ac-kube-api-access-72sqr\") on node \"localhost\" DevicePath \"\"" Nov 1 00:27:21.783613 kubelet[2713]: I1101 00:27:21.783608 2713 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/57074b9c-532a-474a-a4c5-559d3798e3ac-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Nov 1 00:27:21.931502 systemd[1]: run-netns-cni\x2d236ea55e\x2da3b5\x2dcbc3\x2db9b2\x2d53aedf56d4ed.mount: Deactivated successfully. Nov 1 00:27:21.931720 systemd[1]: var-lib-kubelet-pods-57074b9c\x2d532a\x2d474a\x2da4c5\x2d559d3798e3ac-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d72sqr.mount: Deactivated successfully. Nov 1 00:27:21.931883 systemd[1]: var-lib-kubelet-pods-57074b9c\x2d532a\x2d474a\x2da4c5\x2d559d3798e3ac-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Nov 1 00:27:22.053340 kubelet[2713]: E1101 00:27:22.053292 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:27:22.180950 kubelet[2713]: I1101 00:27:22.180878 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-rxll9" podStartSLOduration=2.456903563 podStartE2EDuration="21.180858424s" podCreationTimestamp="2025-11-01 00:27:01 +0000 UTC" firstStartedPulling="2025-11-01 00:27:02.200270236 +0000 UTC m=+21.389990745" lastFinishedPulling="2025-11-01 00:27:20.924225097 +0000 UTC m=+40.113945606" observedRunningTime="2025-11-01 00:27:22.180354378 +0000 UTC m=+41.370074897" watchObservedRunningTime="2025-11-01 00:27:22.180858424 +0000 UTC m=+41.370578933" Nov 1 00:27:22.288005 kubelet[2713]: I1101 00:27:22.287862 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38e58461-b45b-46ed-b68d-38eb9fdd6911-whisker-ca-bundle\") pod \"whisker-67d8fdc769-sqbcf\" (UID: \"38e58461-b45b-46ed-b68d-38eb9fdd6911\") " pod="calico-system/whisker-67d8fdc769-sqbcf" Nov 1 00:27:22.288005 kubelet[2713]: I1101 00:27:22.287900 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/38e58461-b45b-46ed-b68d-38eb9fdd6911-whisker-backend-key-pair\") pod \"whisker-67d8fdc769-sqbcf\" (UID: \"38e58461-b45b-46ed-b68d-38eb9fdd6911\") " pod="calico-system/whisker-67d8fdc769-sqbcf" Nov 1 00:27:22.288005 kubelet[2713]: I1101 00:27:22.287926 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rghb5\" (UniqueName: \"kubernetes.io/projected/38e58461-b45b-46ed-b68d-38eb9fdd6911-kube-api-access-rghb5\") pod \"whisker-67d8fdc769-sqbcf\" (UID: \"38e58461-b45b-46ed-b68d-38eb9fdd6911\") " pod="calico-system/whisker-67d8fdc769-sqbcf" Nov 1 00:27:22.554833 containerd[1592]: time="2025-11-01T00:27:22.554674436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-67d8fdc769-sqbcf,Uid:38e58461-b45b-46ed-b68d-38eb9fdd6911,Namespace:calico-system,Attempt:0,}" Nov 1 00:27:22.790140 systemd-networkd[1249]: cali7d2360a107d: Link UP Nov 1 00:27:22.790444 systemd-networkd[1249]: cali7d2360a107d: Gained carrier Nov 1 00:27:22.814096 containerd[1592]: 2025-11-01 00:27:22.635 [INFO][4150] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 00:27:22.814096 containerd[1592]: 2025-11-01 00:27:22.650 [INFO][4150] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--67d8fdc769--sqbcf-eth0 whisker-67d8fdc769- calico-system 38e58461-b45b-46ed-b68d-38eb9fdd6911 1013 0 2025-11-01 00:27:22 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:67d8fdc769 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-67d8fdc769-sqbcf eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali7d2360a107d [] [] }} ContainerID="3e775c82eeec748f5fe07ff7bd84c6a76a54edc03877ba0839ea1fd0c79af619" Namespace="calico-system" Pod="whisker-67d8fdc769-sqbcf" WorkloadEndpoint="localhost-k8s-whisker--67d8fdc769--sqbcf-" Nov 1 00:27:22.814096 containerd[1592]: 2025-11-01 00:27:22.650 
[INFO][4150] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3e775c82eeec748f5fe07ff7bd84c6a76a54edc03877ba0839ea1fd0c79af619" Namespace="calico-system" Pod="whisker-67d8fdc769-sqbcf" WorkloadEndpoint="localhost-k8s-whisker--67d8fdc769--sqbcf-eth0" Nov 1 00:27:22.814096 containerd[1592]: 2025-11-01 00:27:22.717 [INFO][4251] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3e775c82eeec748f5fe07ff7bd84c6a76a54edc03877ba0839ea1fd0c79af619" HandleID="k8s-pod-network.3e775c82eeec748f5fe07ff7bd84c6a76a54edc03877ba0839ea1fd0c79af619" Workload="localhost-k8s-whisker--67d8fdc769--sqbcf-eth0" Nov 1 00:27:22.814096 containerd[1592]: 2025-11-01 00:27:22.718 [INFO][4251] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3e775c82eeec748f5fe07ff7bd84c6a76a54edc03877ba0839ea1fd0c79af619" HandleID="k8s-pod-network.3e775c82eeec748f5fe07ff7bd84c6a76a54edc03877ba0839ea1fd0c79af619" Workload="localhost-k8s-whisker--67d8fdc769--sqbcf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001196e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-67d8fdc769-sqbcf", "timestamp":"2025-11-01 00:27:22.717340271 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:27:22.814096 containerd[1592]: 2025-11-01 00:27:22.718 [INFO][4251] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:27:22.814096 containerd[1592]: 2025-11-01 00:27:22.718 [INFO][4251] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:27:22.814096 containerd[1592]: 2025-11-01 00:27:22.718 [INFO][4251] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 00:27:22.814096 containerd[1592]: 2025-11-01 00:27:22.731 [INFO][4251] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3e775c82eeec748f5fe07ff7bd84c6a76a54edc03877ba0839ea1fd0c79af619" host="localhost" Nov 1 00:27:22.814096 containerd[1592]: 2025-11-01 00:27:22.741 [INFO][4251] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 00:27:22.814096 containerd[1592]: 2025-11-01 00:27:22.747 [INFO][4251] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 00:27:22.814096 containerd[1592]: 2025-11-01 00:27:22.751 [INFO][4251] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 00:27:22.814096 containerd[1592]: 2025-11-01 00:27:22.757 [INFO][4251] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 00:27:22.814096 containerd[1592]: 2025-11-01 00:27:22.757 [INFO][4251] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3e775c82eeec748f5fe07ff7bd84c6a76a54edc03877ba0839ea1fd0c79af619" host="localhost" Nov 1 00:27:22.814096 containerd[1592]: 2025-11-01 00:27:22.759 [INFO][4251] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3e775c82eeec748f5fe07ff7bd84c6a76a54edc03877ba0839ea1fd0c79af619 Nov 1 00:27:22.814096 containerd[1592]: 2025-11-01 00:27:22.764 [INFO][4251] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3e775c82eeec748f5fe07ff7bd84c6a76a54edc03877ba0839ea1fd0c79af619" host="localhost" Nov 1 00:27:22.814096 containerd[1592]: 2025-11-01 
00:27:22.773 [INFO][4251] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.3e775c82eeec748f5fe07ff7bd84c6a76a54edc03877ba0839ea1fd0c79af619" host="localhost" Nov 1 00:27:22.814096 containerd[1592]: 2025-11-01 00:27:22.773 [INFO][4251] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.3e775c82eeec748f5fe07ff7bd84c6a76a54edc03877ba0839ea1fd0c79af619" host="localhost" Nov 1 00:27:22.814096 containerd[1592]: 2025-11-01 00:27:22.773 [INFO][4251] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:27:22.814096 containerd[1592]: 2025-11-01 00:27:22.773 [INFO][4251] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="3e775c82eeec748f5fe07ff7bd84c6a76a54edc03877ba0839ea1fd0c79af619" HandleID="k8s-pod-network.3e775c82eeec748f5fe07ff7bd84c6a76a54edc03877ba0839ea1fd0c79af619" Workload="localhost-k8s-whisker--67d8fdc769--sqbcf-eth0" Nov 1 00:27:22.815087 containerd[1592]: 2025-11-01 00:27:22.777 [INFO][4150] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3e775c82eeec748f5fe07ff7bd84c6a76a54edc03877ba0839ea1fd0c79af619" Namespace="calico-system" Pod="whisker-67d8fdc769-sqbcf" WorkloadEndpoint="localhost-k8s-whisker--67d8fdc769--sqbcf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--67d8fdc769--sqbcf-eth0", GenerateName:"whisker-67d8fdc769-", Namespace:"calico-system", SelfLink:"", UID:"38e58461-b45b-46ed-b68d-38eb9fdd6911", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 27, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"67d8fdc769", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-67d8fdc769-sqbcf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali7d2360a107d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:27:22.815087 containerd[1592]: 2025-11-01 00:27:22.777 [INFO][4150] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="3e775c82eeec748f5fe07ff7bd84c6a76a54edc03877ba0839ea1fd0c79af619" Namespace="calico-system" Pod="whisker-67d8fdc769-sqbcf" WorkloadEndpoint="localhost-k8s-whisker--67d8fdc769--sqbcf-eth0" Nov 1 00:27:22.815087 containerd[1592]: 2025-11-01 00:27:22.777 [INFO][4150] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7d2360a107d ContainerID="3e775c82eeec748f5fe07ff7bd84c6a76a54edc03877ba0839ea1fd0c79af619" Namespace="calico-system" Pod="whisker-67d8fdc769-sqbcf" WorkloadEndpoint="localhost-k8s-whisker--67d8fdc769--sqbcf-eth0" Nov 1 00:27:22.815087 containerd[1592]: 2025-11-01 00:27:22.790 [INFO][4150] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="3e775c82eeec748f5fe07ff7bd84c6a76a54edc03877ba0839ea1fd0c79af619" Namespace="calico-system" Pod="whisker-67d8fdc769-sqbcf" WorkloadEndpoint="localhost-k8s-whisker--67d8fdc769--sqbcf-eth0" Nov 1 00:27:22.815087 containerd[1592]: 2025-11-01 00:27:22.791 [INFO][4150] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3e775c82eeec748f5fe07ff7bd84c6a76a54edc03877ba0839ea1fd0c79af619" Namespace="calico-system" Pod="whisker-67d8fdc769-sqbcf" WorkloadEndpoint="localhost-k8s-whisker--67d8fdc769--sqbcf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--67d8fdc769--sqbcf-eth0", GenerateName:"whisker-67d8fdc769-", Namespace:"calico-system", SelfLink:"", UID:"38e58461-b45b-46ed-b68d-38eb9fdd6911", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 27, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"67d8fdc769", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3e775c82eeec748f5fe07ff7bd84c6a76a54edc03877ba0839ea1fd0c79af619", Pod:"whisker-67d8fdc769-sqbcf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali7d2360a107d", MAC:"fa:fa:66:07:15:54", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:27:22.815087 containerd[1592]: 2025-11-01 00:27:22.805 [INFO][4150] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3e775c82eeec748f5fe07ff7bd84c6a76a54edc03877ba0839ea1fd0c79af619" Namespace="calico-system" Pod="whisker-67d8fdc769-sqbcf" WorkloadEndpoint="localhost-k8s-whisker--67d8fdc769--sqbcf-eth0" Nov 1 00:27:22.904949 containerd[1592]: time="2025-11-01T00:27:22.899724744Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:27:22.904949 containerd[1592]: time="2025-11-01T00:27:22.899816336Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:27:22.904949 containerd[1592]: time="2025-11-01T00:27:22.899832596Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:27:22.904949 containerd[1592]: time="2025-11-01T00:27:22.900179398Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:27:22.908797 kubelet[2713]: I1101 00:27:22.908755 2713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57074b9c-532a-474a-a4c5-559d3798e3ac" path="/var/lib/kubelet/pods/57074b9c-532a-474a-a4c5-559d3798e3ac/volumes" Nov 1 00:27:22.934329 systemd-resolved[1476]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 00:27:22.968670 containerd[1592]: time="2025-11-01T00:27:22.968612226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-67d8fdc769-sqbcf,Uid:38e58461-b45b-46ed-b68d-38eb9fdd6911,Namespace:calico-system,Attempt:0,} returns sandbox id \"3e775c82eeec748f5fe07ff7bd84c6a76a54edc03877ba0839ea1fd0c79af619\"" Nov 1 00:27:22.970483 containerd[1592]: time="2025-11-01T00:27:22.970447108Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:27:23.133925 kubelet[2713]: I1101 00:27:23.133734 2713 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 00:27:23.134888 kubelet[2713]: E1101 00:27:23.134452 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:27:23.275290 containerd[1592]: time="2025-11-01T00:27:23.275242156Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:27:23.342070 containerd[1592]: time="2025-11-01T00:27:23.341978878Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 00:27:23.357352 containerd[1592]: time="2025-11-01T00:27:23.342010817Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:27:23.357616 kubelet[2713]: E1101 00:27:23.357562 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:27:23.357773 kubelet[2713]: E1101 00:27:23.357641 2713 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:27:23.358939 kubelet[2713]: E1101 00:27:23.358892 2713 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:2e2a5e81cfcc4cb2aa655a270487e254,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rghb5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-67d8fdc769-sqbcf_calico-system(38e58461-b45b-46ed-b68d-38eb9fdd6911): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:27:23.361161 containerd[1592]: time="2025-11-01T00:27:23.361132143Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:27:23.680116 containerd[1592]: time="2025-11-01T00:27:23.679979433Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:27:23.682731 containerd[1592]: time="2025-11-01T00:27:23.682632620Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:27:23.682731 containerd[1592]: time="2025-11-01T00:27:23.682708853Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 00:27:23.683078 kubelet[2713]: E1101 00:27:23.682978 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:27:23.683139 kubelet[2713]: E1101 00:27:23.683087 2713 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: 
not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:27:23.683392 kubelet[2713]: E1101 00:27:23.683321 2713 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rghb5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-67d8fdc769-sqbcf_calico-system(38e58461-b45b-46ed-b68d-38eb9fdd6911): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:27:23.684602 kubelet[2713]: E1101 00:27:23.684545 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-67d8fdc769-sqbcf" podUID="38e58461-b45b-46ed-b68d-38eb9fdd6911" Nov 1 00:27:23.938560 systemd-networkd[1249]: cali7d2360a107d: Gained IPv6LL Nov 1 00:27:24.059958 kubelet[2713]: E1101 00:27:24.059906 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Nov 1 00:27:24.062816 kubelet[2713]: E1101 00:27:24.062762 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-67d8fdc769-sqbcf" podUID="38e58461-b45b-46ed-b68d-38eb9fdd6911" Nov 1 00:27:24.122149 kernel: bpftool[4389]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 1 00:27:24.398130 systemd-networkd[1249]: vxlan.calico: Link UP Nov 1 00:27:24.398145 systemd-networkd[1249]: vxlan.calico: Gained carrier Nov 1 00:27:25.896357 containerd[1592]: time="2025-11-01T00:27:25.895962454Z" level=info msg="StopPodSandbox for \"5caae547c56659473e7cbbb3afda497a5fb385690ae911da2b44308769a244e8\"" Nov 1 00:27:25.896357 containerd[1592]: time="2025-11-01T00:27:25.895962604Z" level=info msg="StopPodSandbox for \"d05aa82f8e40dceef812ba91b215ecea033e6b477020c6a1f3d697d586c713cb\"" Nov 1 00:27:25.986191 systemd-networkd[1249]: vxlan.calico: Gained IPv6LL Nov 1 00:27:26.168471 kubelet[2713]: I1101 00:27:26.168331 2713 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 00:27:26.169078 kubelet[2713]: E1101 00:27:26.168874 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:27:26.364665 containerd[1592]: 2025-11-01 00:27:26.297 [INFO][4492] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5caae547c56659473e7cbbb3afda497a5fb385690ae911da2b44308769a244e8" Nov 1 00:27:26.364665 containerd[1592]: 2025-11-01 00:27:26.297 [INFO][4492] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5caae547c56659473e7cbbb3afda497a5fb385690ae911da2b44308769a244e8" iface="eth0" netns="/var/run/netns/cni-d8b2ae24-6a6b-01c2-4841-575f6d0df8d5" Nov 1 00:27:26.364665 containerd[1592]: 2025-11-01 00:27:26.298 [INFO][4492] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5caae547c56659473e7cbbb3afda497a5fb385690ae911da2b44308769a244e8" iface="eth0" netns="/var/run/netns/cni-d8b2ae24-6a6b-01c2-4841-575f6d0df8d5" Nov 1 00:27:26.364665 containerd[1592]: 2025-11-01 00:27:26.298 [INFO][4492] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="5caae547c56659473e7cbbb3afda497a5fb385690ae911da2b44308769a244e8" iface="eth0" netns="/var/run/netns/cni-d8b2ae24-6a6b-01c2-4841-575f6d0df8d5" Nov 1 00:27:26.364665 containerd[1592]: 2025-11-01 00:27:26.298 [INFO][4492] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5caae547c56659473e7cbbb3afda497a5fb385690ae911da2b44308769a244e8" Nov 1 00:27:26.364665 containerd[1592]: 2025-11-01 00:27:26.298 [INFO][4492] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5caae547c56659473e7cbbb3afda497a5fb385690ae911da2b44308769a244e8" Nov 1 00:27:26.364665 containerd[1592]: 2025-11-01 00:27:26.339 [INFO][4528] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5caae547c56659473e7cbbb3afda497a5fb385690ae911da2b44308769a244e8" HandleID="k8s-pod-network.5caae547c56659473e7cbbb3afda497a5fb385690ae911da2b44308769a244e8" Workload="localhost-k8s-csi--node--driver--l9nqw-eth0" Nov 1 00:27:26.364665 containerd[1592]: 2025-11-01 00:27:26.340 [INFO][4528] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:27:26.364665 containerd[1592]: 2025-11-01 00:27:26.340 [INFO][4528] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:27:26.364665 containerd[1592]: 2025-11-01 00:27:26.352 [WARNING][4528] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="5caae547c56659473e7cbbb3afda497a5fb385690ae911da2b44308769a244e8" HandleID="k8s-pod-network.5caae547c56659473e7cbbb3afda497a5fb385690ae911da2b44308769a244e8" Workload="localhost-k8s-csi--node--driver--l9nqw-eth0" Nov 1 00:27:26.364665 containerd[1592]: 2025-11-01 00:27:26.352 [INFO][4528] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5caae547c56659473e7cbbb3afda497a5fb385690ae911da2b44308769a244e8" HandleID="k8s-pod-network.5caae547c56659473e7cbbb3afda497a5fb385690ae911da2b44308769a244e8" Workload="localhost-k8s-csi--node--driver--l9nqw-eth0" Nov 1 00:27:26.364665 containerd[1592]: 2025-11-01 00:27:26.355 [INFO][4528] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:27:26.364665 containerd[1592]: 2025-11-01 00:27:26.357 [INFO][4492] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5caae547c56659473e7cbbb3afda497a5fb385690ae911da2b44308769a244e8" Nov 1 00:27:26.366499 containerd[1592]: time="2025-11-01T00:27:26.366429410Z" level=info msg="TearDown network for sandbox \"5caae547c56659473e7cbbb3afda497a5fb385690ae911da2b44308769a244e8\" successfully" Nov 1 00:27:26.366499 containerd[1592]: time="2025-11-01T00:27:26.366485385Z" level=info msg="StopPodSandbox for \"5caae547c56659473e7cbbb3afda497a5fb385690ae911da2b44308769a244e8\" returns successfully" Nov 1 00:27:26.368353 containerd[1592]: time="2025-11-01T00:27:26.368310348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l9nqw,Uid:675112ea-20ac-4b20-b92c-b74dc58b95cd,Namespace:calico-system,Attempt:1,}" Nov 1 00:27:26.369437 systemd[1]: run-netns-cni\x2dd8b2ae24\x2d6a6b\x2d01c2\x2d4841\x2d575f6d0df8d5.mount: Deactivated successfully. Nov 1 00:27:26.373372 containerd[1592]: 2025-11-01 00:27:26.302 [INFO][4491] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d05aa82f8e40dceef812ba91b215ecea033e6b477020c6a1f3d697d586c713cb" Nov 1 00:27:26.373372 containerd[1592]: 2025-11-01 00:27:26.302 [INFO][4491] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="d05aa82f8e40dceef812ba91b215ecea033e6b477020c6a1f3d697d586c713cb" iface="eth0" netns="/var/run/netns/cni-a6af3762-5225-76de-56c7-0b4ea0b8cd95" Nov 1 00:27:26.373372 containerd[1592]: 2025-11-01 00:27:26.302 [INFO][4491] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d05aa82f8e40dceef812ba91b215ecea033e6b477020c6a1f3d697d586c713cb" iface="eth0" netns="/var/run/netns/cni-a6af3762-5225-76de-56c7-0b4ea0b8cd95" Nov 1 00:27:26.373372 containerd[1592]: 2025-11-01 00:27:26.303 [INFO][4491] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d05aa82f8e40dceef812ba91b215ecea033e6b477020c6a1f3d697d586c713cb" iface="eth0" netns="/var/run/netns/cni-a6af3762-5225-76de-56c7-0b4ea0b8cd95" Nov 1 00:27:26.373372 containerd[1592]: 2025-11-01 00:27:26.303 [INFO][4491] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d05aa82f8e40dceef812ba91b215ecea033e6b477020c6a1f3d697d586c713cb" Nov 1 00:27:26.373372 containerd[1592]: 2025-11-01 00:27:26.303 [INFO][4491] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d05aa82f8e40dceef812ba91b215ecea033e6b477020c6a1f3d697d586c713cb" Nov 1 00:27:26.373372 containerd[1592]: 2025-11-01 00:27:26.343 [INFO][4530] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d05aa82f8e40dceef812ba91b215ecea033e6b477020c6a1f3d697d586c713cb" HandleID="k8s-pod-network.d05aa82f8e40dceef812ba91b215ecea033e6b477020c6a1f3d697d586c713cb" Workload="localhost-k8s-calico--apiserver--5fc976b46c--8vsw4-eth0" Nov 1 00:27:26.373372 containerd[1592]: 2025-11-01 00:27:26.343 [INFO][4530] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:27:26.373372 containerd[1592]: 2025-11-01 00:27:26.355 [INFO][4530] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:27:26.373372 containerd[1592]: 2025-11-01 00:27:26.361 [WARNING][4530] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="d05aa82f8e40dceef812ba91b215ecea033e6b477020c6a1f3d697d586c713cb" HandleID="k8s-pod-network.d05aa82f8e40dceef812ba91b215ecea033e6b477020c6a1f3d697d586c713cb" Workload="localhost-k8s-calico--apiserver--5fc976b46c--8vsw4-eth0" Nov 1 00:27:26.373372 containerd[1592]: 2025-11-01 00:27:26.361 [INFO][4530] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d05aa82f8e40dceef812ba91b215ecea033e6b477020c6a1f3d697d586c713cb" HandleID="k8s-pod-network.d05aa82f8e40dceef812ba91b215ecea033e6b477020c6a1f3d697d586c713cb" Workload="localhost-k8s-calico--apiserver--5fc976b46c--8vsw4-eth0" Nov 1 00:27:26.373372 containerd[1592]: 2025-11-01 00:27:26.363 [INFO][4530] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:27:26.373372 containerd[1592]: 2025-11-01 00:27:26.368 [INFO][4491] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d05aa82f8e40dceef812ba91b215ecea033e6b477020c6a1f3d697d586c713cb" Nov 1 00:27:26.376218 containerd[1592]: time="2025-11-01T00:27:26.376179400Z" level=info msg="TearDown network for sandbox \"d05aa82f8e40dceef812ba91b215ecea033e6b477020c6a1f3d697d586c713cb\" successfully" Nov 1 00:27:26.376218 containerd[1592]: time="2025-11-01T00:27:26.376212042Z" level=info msg="StopPodSandbox for \"d05aa82f8e40dceef812ba91b215ecea033e6b477020c6a1f3d697d586c713cb\" returns successfully" Nov 1 00:27:26.376638 systemd[1]: run-netns-cni\x2da6af3762\x2d5225\x2d76de\x2d56c7\x2d0b4ea0b8cd95.mount: Deactivated successfully. 
Nov 1 00:27:26.376930 containerd[1592]: time="2025-11-01T00:27:26.376895945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fc976b46c-8vsw4,Uid:4d8b0a34-66bd-4c22-a438-b5e5354489a4,Namespace:calico-apiserver,Attempt:1,}" Nov 1 00:27:26.584643 systemd[1]: Started sshd@8-10.0.0.119:22-10.0.0.1:46260.service - OpenSSH per-connection server daemon (10.0.0.1:46260). Nov 1 00:27:26.590213 systemd-networkd[1249]: cali029ae4fc21b: Link UP Nov 1 00:27:26.591166 systemd-networkd[1249]: cali029ae4fc21b: Gained carrier Nov 1 00:27:26.609926 containerd[1592]: 2025-11-01 00:27:26.458 [INFO][4579] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5fc976b46c--8vsw4-eth0 calico-apiserver-5fc976b46c- calico-apiserver 4d8b0a34-66bd-4c22-a438-b5e5354489a4 1066 0 2025-11-01 00:26:58 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5fc976b46c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5fc976b46c-8vsw4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali029ae4fc21b [] [] }} ContainerID="0743443f0875044b68dbb69c9f46d74c0af46d7d2030f2f49014ba84979f9b54" Namespace="calico-apiserver" Pod="calico-apiserver-5fc976b46c-8vsw4" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fc976b46c--8vsw4-" Nov 1 00:27:26.609926 containerd[1592]: 2025-11-01 00:27:26.458 [INFO][4579] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0743443f0875044b68dbb69c9f46d74c0af46d7d2030f2f49014ba84979f9b54" Namespace="calico-apiserver" Pod="calico-apiserver-5fc976b46c-8vsw4" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fc976b46c--8vsw4-eth0" Nov 1 00:27:26.609926 containerd[1592]: 2025-11-01 00:27:26.497 [INFO][4600] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0743443f0875044b68dbb69c9f46d74c0af46d7d2030f2f49014ba84979f9b54" HandleID="k8s-pod-network.0743443f0875044b68dbb69c9f46d74c0af46d7d2030f2f49014ba84979f9b54" Workload="localhost-k8s-calico--apiserver--5fc976b46c--8vsw4-eth0" Nov 1 00:27:26.609926 containerd[1592]: 2025-11-01 00:27:26.497 [INFO][4600] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0743443f0875044b68dbb69c9f46d74c0af46d7d2030f2f49014ba84979f9b54" HandleID="k8s-pod-network.0743443f0875044b68dbb69c9f46d74c0af46d7d2030f2f49014ba84979f9b54" Workload="localhost-k8s-calico--apiserver--5fc976b46c--8vsw4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00035f5c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5fc976b46c-8vsw4", "timestamp":"2025-11-01 00:27:26.497594864 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:27:26.609926 containerd[1592]: 2025-11-01 00:27:26.497 [INFO][4600] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:27:26.609926 containerd[1592]: 2025-11-01 00:27:26.497 [INFO][4600] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:27:26.609926 containerd[1592]: 2025-11-01 00:27:26.497 [INFO][4600] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 00:27:26.609926 containerd[1592]: 2025-11-01 00:27:26.552 [INFO][4600] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0743443f0875044b68dbb69c9f46d74c0af46d7d2030f2f49014ba84979f9b54" host="localhost" Nov 1 00:27:26.609926 containerd[1592]: 2025-11-01 00:27:26.558 [INFO][4600] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 00:27:26.609926 containerd[1592]: 2025-11-01 00:27:26.563 [INFO][4600] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 00:27:26.609926 containerd[1592]: 2025-11-01 00:27:26.565 [INFO][4600] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 00:27:26.609926 containerd[1592]: 2025-11-01 00:27:26.567 [INFO][4600] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 00:27:26.609926 containerd[1592]: 2025-11-01 00:27:26.567 [INFO][4600] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0743443f0875044b68dbb69c9f46d74c0af46d7d2030f2f49014ba84979f9b54" host="localhost" Nov 1 00:27:26.609926 containerd[1592]: 2025-11-01 00:27:26.569 [INFO][4600] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0743443f0875044b68dbb69c9f46d74c0af46d7d2030f2f49014ba84979f9b54 Nov 1 00:27:26.609926 containerd[1592]: 2025-11-01 00:27:26.573 [INFO][4600] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0743443f0875044b68dbb69c9f46d74c0af46d7d2030f2f49014ba84979f9b54" host="localhost" Nov 1 00:27:26.609926 containerd[1592]: 2025-11-01 00:27:26.579 [INFO][4600] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.0743443f0875044b68dbb69c9f46d74c0af46d7d2030f2f49014ba84979f9b54" host="localhost" Nov 1 00:27:26.609926 containerd[1592]: 2025-11-01 00:27:26.579 [INFO][4600] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.0743443f0875044b68dbb69c9f46d74c0af46d7d2030f2f49014ba84979f9b54" host="localhost" Nov 1 00:27:26.609926 containerd[1592]: 2025-11-01 00:27:26.579 [INFO][4600] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 00:27:26.609926 containerd[1592]: 2025-11-01 00:27:26.579 [INFO][4600] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="0743443f0875044b68dbb69c9f46d74c0af46d7d2030f2f49014ba84979f9b54" HandleID="k8s-pod-network.0743443f0875044b68dbb69c9f46d74c0af46d7d2030f2f49014ba84979f9b54" Workload="localhost-k8s-calico--apiserver--5fc976b46c--8vsw4-eth0" Nov 1 00:27:26.610837 containerd[1592]: 2025-11-01 00:27:26.586 [INFO][4579] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0743443f0875044b68dbb69c9f46d74c0af46d7d2030f2f49014ba84979f9b54" Namespace="calico-apiserver" Pod="calico-apiserver-5fc976b46c-8vsw4" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fc976b46c--8vsw4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5fc976b46c--8vsw4-eth0", GenerateName:"calico-apiserver-5fc976b46c-", Namespace:"calico-apiserver", SelfLink:"", UID:"4d8b0a34-66bd-4c22-a438-b5e5354489a4", ResourceVersion:"1066", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 26, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fc976b46c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5fc976b46c-8vsw4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali029ae4fc21b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:27:26.610837 containerd[1592]: 2025-11-01 00:27:26.587 [INFO][4579] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="0743443f0875044b68dbb69c9f46d74c0af46d7d2030f2f49014ba84979f9b54" Namespace="calico-apiserver" Pod="calico-apiserver-5fc976b46c-8vsw4" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fc976b46c--8vsw4-eth0" Nov 1 00:27:26.610837 containerd[1592]: 2025-11-01 00:27:26.587 [INFO][4579] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali029ae4fc21b ContainerID="0743443f0875044b68dbb69c9f46d74c0af46d7d2030f2f49014ba84979f9b54" Namespace="calico-apiserver" Pod="calico-apiserver-5fc976b46c-8vsw4" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fc976b46c--8vsw4-eth0" Nov 1 00:27:26.610837 containerd[1592]: 2025-11-01 00:27:26.590 [INFO][4579] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0743443f0875044b68dbb69c9f46d74c0af46d7d2030f2f49014ba84979f9b54" Namespace="calico-apiserver" Pod="calico-apiserver-5fc976b46c-8vsw4" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fc976b46c--8vsw4-eth0" Nov 1 00:27:26.610837 containerd[1592]: 2025-11-01 00:27:26.591 [INFO][4579] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="0743443f0875044b68dbb69c9f46d74c0af46d7d2030f2f49014ba84979f9b54" Namespace="calico-apiserver" Pod="calico-apiserver-5fc976b46c-8vsw4" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fc976b46c--8vsw4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5fc976b46c--8vsw4-eth0", GenerateName:"calico-apiserver-5fc976b46c-", Namespace:"calico-apiserver", SelfLink:"", UID:"4d8b0a34-66bd-4c22-a438-b5e5354489a4", ResourceVersion:"1066", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 26, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fc976b46c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0743443f0875044b68dbb69c9f46d74c0af46d7d2030f2f49014ba84979f9b54", Pod:"calico-apiserver-5fc976b46c-8vsw4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali029ae4fc21b", MAC:"7a:41:3c:a8:e2:3d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:27:26.610837 containerd[1592]: 2025-11-01 00:27:26.603 [INFO][4579] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0743443f0875044b68dbb69c9f46d74c0af46d7d2030f2f49014ba84979f9b54" Namespace="calico-apiserver" Pod="calico-apiserver-5fc976b46c-8vsw4" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fc976b46c--8vsw4-eth0" Nov 1 00:27:26.640577 containerd[1592]: time="2025-11-01T00:27:26.640480515Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:27:26.640737 containerd[1592]: time="2025-11-01T00:27:26.640558482Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:27:26.640737 containerd[1592]: time="2025-11-01T00:27:26.640578149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:27:26.640788 sshd[4611]: Accepted publickey for core from 10.0.0.1 port 46260 ssh2: RSA SHA256:jhdmrhZXRbwOcBbBZQf03C3Vb9eSOfvUjDz7MOKA8jE Nov 1 00:27:26.641218 containerd[1592]: time="2025-11-01T00:27:26.640758356Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:27:26.643184 sshd[4611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:27:26.648343 systemd-logind[1577]: New session 9 of user core. Nov 1 00:27:26.654440 systemd[1]: Started session-9.scope - Session 9 of User core. 
Nov 1 00:27:26.673379 systemd-resolved[1476]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 00:27:26.691720 systemd-networkd[1249]: cali84ca4c83ae7: Link UP Nov 1 00:27:26.691930 systemd-networkd[1249]: cali84ca4c83ae7: Gained carrier Nov 1 00:27:26.706945 containerd[1592]: 2025-11-01 00:27:26.444 [INFO][4566] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--l9nqw-eth0 csi-node-driver- calico-system 675112ea-20ac-4b20-b92c-b74dc58b95cd 1065 0 2025-11-01 00:27:01 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-l9nqw eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali84ca4c83ae7 [] [] }} ContainerID="a713033b97f168ddd57a416ccefb61a0073c7bc3d9c58cfe549a0c131506be48" Namespace="calico-system" Pod="csi-node-driver-l9nqw" WorkloadEndpoint="localhost-k8s-csi--node--driver--l9nqw-" Nov 1 00:27:26.706945 containerd[1592]: 2025-11-01 00:27:26.447 [INFO][4566] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a713033b97f168ddd57a416ccefb61a0073c7bc3d9c58cfe549a0c131506be48" Namespace="calico-system" Pod="csi-node-driver-l9nqw" WorkloadEndpoint="localhost-k8s-csi--node--driver--l9nqw-eth0" Nov 1 00:27:26.706945 containerd[1592]: 2025-11-01 00:27:26.525 [INFO][4594] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a713033b97f168ddd57a416ccefb61a0073c7bc3d9c58cfe549a0c131506be48" HandleID="k8s-pod-network.a713033b97f168ddd57a416ccefb61a0073c7bc3d9c58cfe549a0c131506be48" Workload="localhost-k8s-csi--node--driver--l9nqw-eth0" Nov 1 00:27:26.706945 containerd[1592]: 2025-11-01 00:27:26.525 [INFO][4594] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a713033b97f168ddd57a416ccefb61a0073c7bc3d9c58cfe549a0c131506be48" HandleID="k8s-pod-network.a713033b97f168ddd57a416ccefb61a0073c7bc3d9c58cfe549a0c131506be48" Workload="localhost-k8s-csi--node--driver--l9nqw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004bdbb0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-l9nqw", "timestamp":"2025-11-01 00:27:26.52524201 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:27:26.706945 containerd[1592]: 2025-11-01 00:27:26.525 [INFO][4594] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:27:26.706945 containerd[1592]: 2025-11-01 00:27:26.579 [INFO][4594] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:27:26.706945 containerd[1592]: 2025-11-01 00:27:26.579 [INFO][4594] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 00:27:26.706945 containerd[1592]: 2025-11-01 00:27:26.653 [INFO][4594] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a713033b97f168ddd57a416ccefb61a0073c7bc3d9c58cfe549a0c131506be48" host="localhost" Nov 1 00:27:26.706945 containerd[1592]: 2025-11-01 00:27:26.662 [INFO][4594] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 00:27:26.706945 containerd[1592]: 2025-11-01 00:27:26.668 [INFO][4594] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 00:27:26.706945 containerd[1592]: 2025-11-01 00:27:26.670 [INFO][4594] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 00:27:26.706945 containerd[1592]: 2025-11-01 00:27:26.672 [INFO][4594] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 00:27:26.706945 containerd[1592]: 2025-11-01 00:27:26.672 [INFO][4594] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a713033b97f168ddd57a416ccefb61a0073c7bc3d9c58cfe549a0c131506be48" host="localhost" Nov 1 00:27:26.706945 containerd[1592]: 2025-11-01 00:27:26.673 [INFO][4594] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a713033b97f168ddd57a416ccefb61a0073c7bc3d9c58cfe549a0c131506be48 Nov 1 00:27:26.706945 containerd[1592]: 2025-11-01 00:27:26.677 [INFO][4594] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a713033b97f168ddd57a416ccefb61a0073c7bc3d9c58cfe549a0c131506be48" host="localhost" Nov 1 00:27:26.706945 containerd[1592]: 2025-11-01 00:27:26.683 [INFO][4594] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.a713033b97f168ddd57a416ccefb61a0073c7bc3d9c58cfe549a0c131506be48" host="localhost" Nov 1 00:27:26.706945 containerd[1592]: 2025-11-01 00:27:26.683 [INFO][4594] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.a713033b97f168ddd57a416ccefb61a0073c7bc3d9c58cfe549a0c131506be48" host="localhost" Nov 1 00:27:26.706945 containerd[1592]: 2025-11-01 00:27:26.683 [INFO][4594] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 00:27:26.706945 containerd[1592]: 2025-11-01 00:27:26.683 [INFO][4594] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="a713033b97f168ddd57a416ccefb61a0073c7bc3d9c58cfe549a0c131506be48" HandleID="k8s-pod-network.a713033b97f168ddd57a416ccefb61a0073c7bc3d9c58cfe549a0c131506be48" Workload="localhost-k8s-csi--node--driver--l9nqw-eth0" Nov 1 00:27:26.707601 containerd[1592]: 2025-11-01 00:27:26.687 [INFO][4566] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a713033b97f168ddd57a416ccefb61a0073c7bc3d9c58cfe549a0c131506be48" Namespace="calico-system" Pod="csi-node-driver-l9nqw" WorkloadEndpoint="localhost-k8s-csi--node--driver--l9nqw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--l9nqw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"675112ea-20ac-4b20-b92c-b74dc58b95cd", ResourceVersion:"1065", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 27, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-l9nqw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali84ca4c83ae7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:27:26.707601 containerd[1592]: 2025-11-01 00:27:26.687 [INFO][4566] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="a713033b97f168ddd57a416ccefb61a0073c7bc3d9c58cfe549a0c131506be48" Namespace="calico-system" Pod="csi-node-driver-l9nqw" WorkloadEndpoint="localhost-k8s-csi--node--driver--l9nqw-eth0" Nov 1 00:27:26.707601 containerd[1592]: 2025-11-01 00:27:26.688 [INFO][4566] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali84ca4c83ae7 ContainerID="a713033b97f168ddd57a416ccefb61a0073c7bc3d9c58cfe549a0c131506be48" Namespace="calico-system" Pod="csi-node-driver-l9nqw" WorkloadEndpoint="localhost-k8s-csi--node--driver--l9nqw-eth0" Nov 1 00:27:26.707601 containerd[1592]: 2025-11-01 00:27:26.690 [INFO][4566] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a713033b97f168ddd57a416ccefb61a0073c7bc3d9c58cfe549a0c131506be48" Namespace="calico-system" Pod="csi-node-driver-l9nqw" WorkloadEndpoint="localhost-k8s-csi--node--driver--l9nqw-eth0" Nov 1 00:27:26.707601 containerd[1592]: 2025-11-01 00:27:26.690 [INFO][4566] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a713033b97f168ddd57a416ccefb61a0073c7bc3d9c58cfe549a0c131506be48" Namespace="calico-system" Pod="csi-node-driver-l9nqw" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--l9nqw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--l9nqw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"675112ea-20ac-4b20-b92c-b74dc58b95cd", ResourceVersion:"1065", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 27, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a713033b97f168ddd57a416ccefb61a0073c7bc3d9c58cfe549a0c131506be48", Pod:"csi-node-driver-l9nqw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali84ca4c83ae7", MAC:"96:7c:09:c1:df:7e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:27:26.707601 containerd[1592]: 2025-11-01 00:27:26.702 [INFO][4566] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a713033b97f168ddd57a416ccefb61a0073c7bc3d9c58cfe549a0c131506be48" Namespace="calico-system" Pod="csi-node-driver-l9nqw" WorkloadEndpoint="localhost-k8s-csi--node--driver--l9nqw-eth0" Nov 1 00:27:26.726009 containerd[1592]: time="2025-11-01T00:27:26.725624755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fc976b46c-8vsw4,Uid:4d8b0a34-66bd-4c22-a438-b5e5354489a4,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"0743443f0875044b68dbb69c9f46d74c0af46d7d2030f2f49014ba84979f9b54\"" Nov 1 00:27:26.729437 containerd[1592]: time="2025-11-01T00:27:26.729251226Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:27:26.739821 containerd[1592]: time="2025-11-01T00:27:26.739056544Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:27:26.739821 containerd[1592]: time="2025-11-01T00:27:26.739787671Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:27:26.739821 containerd[1592]: time="2025-11-01T00:27:26.739803420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:27:26.740304 containerd[1592]: time="2025-11-01T00:27:26.740196722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:27:26.777572 systemd-resolved[1476]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 00:27:26.796747 containerd[1592]: time="2025-11-01T00:27:26.796676808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l9nqw,Uid:675112ea-20ac-4b20-b92c-b74dc58b95cd,Namespace:calico-system,Attempt:1,} returns sandbox id \"a713033b97f168ddd57a416ccefb61a0073c7bc3d9c58cfe549a0c131506be48\"" Nov 1 00:27:26.819146 sshd[4611]: pam_unix(sshd:session): session closed for user core Nov 1 00:27:26.823946 systemd-logind[1577]: Session 9 logged out. Waiting for processes to exit. Nov 1 00:27:26.824784 systemd[1]: sshd@8-10.0.0.119:22-10.0.0.1:46260.service: Deactivated successfully. Nov 1 00:27:26.828052 systemd[1]: session-9.scope: Deactivated successfully. Nov 1 00:27:26.828874 systemd-logind[1577]: Removed session 9. Nov 1 00:27:26.898210 containerd[1592]: time="2025-11-01T00:27:26.898158964Z" level=info msg="StopPodSandbox for \"90b05b84959d1bb88da14ece3bf4d05bee584a6eb80a5eeaf04cc111e14d40a7\"" Nov 1 00:27:26.988195 containerd[1592]: 2025-11-01 00:27:26.943 [INFO][4738] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="90b05b84959d1bb88da14ece3bf4d05bee584a6eb80a5eeaf04cc111e14d40a7" Nov 1 00:27:26.988195 containerd[1592]: 2025-11-01 00:27:26.944 [INFO][4738] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="90b05b84959d1bb88da14ece3bf4d05bee584a6eb80a5eeaf04cc111e14d40a7" iface="eth0" netns="/var/run/netns/cni-570514ae-7e7b-11e7-9c5d-892dacbd68b6" Nov 1 00:27:26.988195 containerd[1592]: 2025-11-01 00:27:26.945 [INFO][4738] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="90b05b84959d1bb88da14ece3bf4d05bee584a6eb80a5eeaf04cc111e14d40a7" iface="eth0" netns="/var/run/netns/cni-570514ae-7e7b-11e7-9c5d-892dacbd68b6" Nov 1 00:27:26.988195 containerd[1592]: 2025-11-01 00:27:26.945 [INFO][4738] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="90b05b84959d1bb88da14ece3bf4d05bee584a6eb80a5eeaf04cc111e14d40a7" iface="eth0" netns="/var/run/netns/cni-570514ae-7e7b-11e7-9c5d-892dacbd68b6" Nov 1 00:27:26.988195 containerd[1592]: 2025-11-01 00:27:26.945 [INFO][4738] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="90b05b84959d1bb88da14ece3bf4d05bee584a6eb80a5eeaf04cc111e14d40a7" Nov 1 00:27:26.988195 containerd[1592]: 2025-11-01 00:27:26.945 [INFO][4738] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="90b05b84959d1bb88da14ece3bf4d05bee584a6eb80a5eeaf04cc111e14d40a7" Nov 1 00:27:26.988195 containerd[1592]: 2025-11-01 00:27:26.971 [INFO][4746] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="90b05b84959d1bb88da14ece3bf4d05bee584a6eb80a5eeaf04cc111e14d40a7" HandleID="k8s-pod-network.90b05b84959d1bb88da14ece3bf4d05bee584a6eb80a5eeaf04cc111e14d40a7" Workload="localhost-k8s-goldmane--666569f655--ksf9n-eth0" Nov 1 00:27:26.988195 containerd[1592]: 2025-11-01 00:27:26.971 [INFO][4746] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:27:26.988195 containerd[1592]: 2025-11-01 00:27:26.971 [INFO][4746] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:27:26.988195 containerd[1592]: 2025-11-01 00:27:26.979 [WARNING][4746] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="90b05b84959d1bb88da14ece3bf4d05bee584a6eb80a5eeaf04cc111e14d40a7" HandleID="k8s-pod-network.90b05b84959d1bb88da14ece3bf4d05bee584a6eb80a5eeaf04cc111e14d40a7" Workload="localhost-k8s-goldmane--666569f655--ksf9n-eth0" Nov 1 00:27:26.988195 containerd[1592]: 2025-11-01 00:27:26.979 [INFO][4746] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="90b05b84959d1bb88da14ece3bf4d05bee584a6eb80a5eeaf04cc111e14d40a7" HandleID="k8s-pod-network.90b05b84959d1bb88da14ece3bf4d05bee584a6eb80a5eeaf04cc111e14d40a7" Workload="localhost-k8s-goldmane--666569f655--ksf9n-eth0" Nov 1 00:27:26.988195 containerd[1592]: 2025-11-01 00:27:26.981 [INFO][4746] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:27:26.988195 containerd[1592]: 2025-11-01 00:27:26.984 [INFO][4738] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="90b05b84959d1bb88da14ece3bf4d05bee584a6eb80a5eeaf04cc111e14d40a7" Nov 1 00:27:26.994522 containerd[1592]: time="2025-11-01T00:27:26.994472365Z" level=info msg="TearDown network for sandbox \"90b05b84959d1bb88da14ece3bf4d05bee584a6eb80a5eeaf04cc111e14d40a7\" successfully" Nov 1 00:27:26.994522 containerd[1592]: time="2025-11-01T00:27:26.994514626Z" level=info msg="StopPodSandbox for \"90b05b84959d1bb88da14ece3bf4d05bee584a6eb80a5eeaf04cc111e14d40a7\" returns successfully" Nov 1 00:27:26.995368 containerd[1592]: time="2025-11-01T00:27:26.995346743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-ksf9n,Uid:41180a49-a14f-492f-9746-dfd093b11440,Namespace:calico-system,Attempt:1,}" Nov 1 00:27:27.054759 containerd[1592]: time="2025-11-01T00:27:27.054691866Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:27:27.056327 containerd[1592]: time="2025-11-01T00:27:27.056248660Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:27:27.056490 containerd[1592]: time="2025-11-01T00:27:27.056317142Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:27:27.056625 kubelet[2713]: E1101 00:27:27.056550 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:27:27.056741 kubelet[2713]: E1101 00:27:27.056648 2713 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:27:27.057286 kubelet[2713]: E1101 00:27:27.057216 2713 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xnnng,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5fc976b46c-8vsw4_calico-apiserver(4d8b0a34-66bd-4c22-a438-b5e5354489a4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:27:27.058288 containerd[1592]: time="2025-11-01T00:27:27.057808028Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:27:27.059068 kubelet[2713]: E1101 00:27:27.058975 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5fc976b46c-8vsw4" podUID="4d8b0a34-66bd-4c22-a438-b5e5354489a4" Nov 1 00:27:27.072066 kubelet[2713]: E1101 00:27:27.070129 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:27:27.072066 kubelet[2713]: E1101 00:27:27.070232 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5fc976b46c-8vsw4" podUID="4d8b0a34-66bd-4c22-a438-b5e5354489a4" Nov 1 00:27:27.237771 systemd[1]: run-netns-cni\x2d570514ae\x2d7e7b\x2d11e7\x2d9c5d\x2d892dacbd68b6.mount: Deactivated successfully. Nov 1 00:27:27.393425 containerd[1592]: time="2025-11-01T00:27:27.393352347Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:27:27.461999 containerd[1592]: time="2025-11-01T00:27:27.461821163Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 00:27:27.461999 containerd[1592]: time="2025-11-01T00:27:27.461954141Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:27:27.463566 kubelet[2713]: E1101 00:27:27.463067 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:27:27.463566 kubelet[2713]: E1101 00:27:27.463133 2713 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:27:27.466519 kubelet[2713]: E1101 00:27:27.466432 2713 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pgnb4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-l9nqw_calico-system(675112ea-20ac-4b20-b92c-b74dc58b95cd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:27:27.468861 containerd[1592]: time="2025-11-01T00:27:27.468802853Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:27:27.620059 systemd-networkd[1249]: calie446139c188: Link UP Nov 1 00:27:27.620430 systemd-networkd[1249]: calie446139c188: Gained carrier Nov 1 00:27:27.637493 containerd[1592]: 2025-11-01 00:27:27.055 [INFO][4754] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--ksf9n-eth0 goldmane-666569f655- calico-system 41180a49-a14f-492f-9746-dfd093b11440 1086 0 2025-11-01 00:26:59 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-ksf9n eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calie446139c188 [] [] }} ContainerID="8cdfd6202cd48b0bf107d294eff13bdf5265714e285d2077ae693838da1d6681" Namespace="calico-system" Pod="goldmane-666569f655-ksf9n" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--ksf9n-" Nov 1 00:27:27.637493 containerd[1592]: 2025-11-01 00:27:27.056 [INFO][4754] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8cdfd6202cd48b0bf107d294eff13bdf5265714e285d2077ae693838da1d6681" Namespace="calico-system" Pod="goldmane-666569f655-ksf9n" 
WorkloadEndpoint="localhost-k8s-goldmane--666569f655--ksf9n-eth0" Nov 1 00:27:27.637493 containerd[1592]: 2025-11-01 00:27:27.088 [INFO][4769] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8cdfd6202cd48b0bf107d294eff13bdf5265714e285d2077ae693838da1d6681" HandleID="k8s-pod-network.8cdfd6202cd48b0bf107d294eff13bdf5265714e285d2077ae693838da1d6681" Workload="localhost-k8s-goldmane--666569f655--ksf9n-eth0" Nov 1 00:27:27.637493 containerd[1592]: 2025-11-01 00:27:27.088 [INFO][4769] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8cdfd6202cd48b0bf107d294eff13bdf5265714e285d2077ae693838da1d6681" HandleID="k8s-pod-network.8cdfd6202cd48b0bf107d294eff13bdf5265714e285d2077ae693838da1d6681" Workload="localhost-k8s-goldmane--666569f655--ksf9n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c6300), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-ksf9n", "timestamp":"2025-11-01 00:27:27.088488514 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:27:27.637493 containerd[1592]: 2025-11-01 00:27:27.088 [INFO][4769] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:27:27.637493 containerd[1592]: 2025-11-01 00:27:27.088 [INFO][4769] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:27:27.637493 containerd[1592]: 2025-11-01 00:27:27.088 [INFO][4769] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 00:27:27.637493 containerd[1592]: 2025-11-01 00:27:27.464 [INFO][4769] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8cdfd6202cd48b0bf107d294eff13bdf5265714e285d2077ae693838da1d6681" host="localhost" Nov 1 00:27:27.637493 containerd[1592]: 2025-11-01 00:27:27.515 [INFO][4769] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 00:27:27.637493 containerd[1592]: 2025-11-01 00:27:27.582 [INFO][4769] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 00:27:27.637493 containerd[1592]: 2025-11-01 00:27:27.584 [INFO][4769] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 00:27:27.637493 containerd[1592]: 2025-11-01 00:27:27.592 [INFO][4769] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 00:27:27.637493 containerd[1592]: 2025-11-01 00:27:27.592 [INFO][4769] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8cdfd6202cd48b0bf107d294eff13bdf5265714e285d2077ae693838da1d6681" host="localhost" Nov 1 00:27:27.637493 containerd[1592]: 2025-11-01 00:27:27.595 [INFO][4769] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8cdfd6202cd48b0bf107d294eff13bdf5265714e285d2077ae693838da1d6681 Nov 1 00:27:27.637493 containerd[1592]: 2025-11-01 00:27:27.603 [INFO][4769] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8cdfd6202cd48b0bf107d294eff13bdf5265714e285d2077ae693838da1d6681" host="localhost" Nov 1 00:27:27.637493 containerd[1592]: 2025-11-01 00:27:27.613 [INFO][4769] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.8cdfd6202cd48b0bf107d294eff13bdf5265714e285d2077ae693838da1d6681" 
host="localhost" Nov 1 00:27:27.637493 containerd[1592]: 2025-11-01 00:27:27.613 [INFO][4769] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.8cdfd6202cd48b0bf107d294eff13bdf5265714e285d2077ae693838da1d6681" host="localhost" Nov 1 00:27:27.637493 containerd[1592]: 2025-11-01 00:27:27.613 [INFO][4769] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:27:27.637493 containerd[1592]: 2025-11-01 00:27:27.613 [INFO][4769] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="8cdfd6202cd48b0bf107d294eff13bdf5265714e285d2077ae693838da1d6681" HandleID="k8s-pod-network.8cdfd6202cd48b0bf107d294eff13bdf5265714e285d2077ae693838da1d6681" Workload="localhost-k8s-goldmane--666569f655--ksf9n-eth0" Nov 1 00:27:27.638512 containerd[1592]: 2025-11-01 00:27:27.617 [INFO][4754] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8cdfd6202cd48b0bf107d294eff13bdf5265714e285d2077ae693838da1d6681" Namespace="calico-system" Pod="goldmane-666569f655-ksf9n" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--ksf9n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--ksf9n-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"41180a49-a14f-492f-9746-dfd093b11440", ResourceVersion:"1086", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 26, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-ksf9n", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie446139c188", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:27:27.638512 containerd[1592]: 2025-11-01 00:27:27.617 [INFO][4754] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="8cdfd6202cd48b0bf107d294eff13bdf5265714e285d2077ae693838da1d6681" Namespace="calico-system" Pod="goldmane-666569f655-ksf9n" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--ksf9n-eth0" Nov 1 00:27:27.638512 containerd[1592]: 2025-11-01 00:27:27.617 [INFO][4754] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie446139c188 ContainerID="8cdfd6202cd48b0bf107d294eff13bdf5265714e285d2077ae693838da1d6681" Namespace="calico-system" Pod="goldmane-666569f655-ksf9n" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--ksf9n-eth0" Nov 1 00:27:27.638512 containerd[1592]: 2025-11-01 00:27:27.621 [INFO][4754] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8cdfd6202cd48b0bf107d294eff13bdf5265714e285d2077ae693838da1d6681" Namespace="calico-system" Pod="goldmane-666569f655-ksf9n" 
WorkloadEndpoint="localhost-k8s-goldmane--666569f655--ksf9n-eth0" Nov 1 00:27:27.638512 containerd[1592]: 2025-11-01 00:27:27.622 [INFO][4754] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8cdfd6202cd48b0bf107d294eff13bdf5265714e285d2077ae693838da1d6681" Namespace="calico-system" Pod="goldmane-666569f655-ksf9n" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--ksf9n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--ksf9n-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"41180a49-a14f-492f-9746-dfd093b11440", ResourceVersion:"1086", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 26, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8cdfd6202cd48b0bf107d294eff13bdf5265714e285d2077ae693838da1d6681", Pod:"goldmane-666569f655-ksf9n", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie446139c188", MAC:"16:2f:e6:d6:2f:60", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:27:27.638512 containerd[1592]: 2025-11-01 00:27:27.633 [INFO][4754] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8cdfd6202cd48b0bf107d294eff13bdf5265714e285d2077ae693838da1d6681" Namespace="calico-system" Pod="goldmane-666569f655-ksf9n" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--ksf9n-eth0" Nov 1 00:27:27.659328 containerd[1592]: time="2025-11-01T00:27:27.659201485Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:27:27.659328 containerd[1592]: time="2025-11-01T00:27:27.659282081Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:27:27.659328 containerd[1592]: time="2025-11-01T00:27:27.659298674Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:27:27.659516 containerd[1592]: time="2025-11-01T00:27:27.659414669Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:27:27.694184 systemd-resolved[1476]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 00:27:27.714441 systemd-networkd[1249]: cali84ca4c83ae7: Gained IPv6LL Nov 1 00:27:27.725981 containerd[1592]: time="2025-11-01T00:27:27.725902207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-ksf9n,Uid:41180a49-a14f-492f-9746-dfd093b11440,Namespace:calico-system,Attempt:1,} returns sandbox id \"8cdfd6202cd48b0bf107d294eff13bdf5265714e285d2077ae693838da1d6681\"" Nov 1 00:27:27.842289 systemd-networkd[1249]: cali029ae4fc21b: Gained IPv6LL Nov 1 00:27:27.851462 containerd[1592]: time="2025-11-01T00:27:27.851405184Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:27:27.852606 containerd[1592]: time="2025-11-01T00:27:27.852550167Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:27:27.852771 containerd[1592]: time="2025-11-01T00:27:27.852588733Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 00:27:27.852926 kubelet[2713]: E1101 00:27:27.852864 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:27:27.852975 kubelet[2713]: E1101 00:27:27.852942 2713 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:27:27.853339 kubelet[2713]: E1101 00:27:27.853257 2713 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pgnb4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-l9nqw_calico-system(675112ea-20ac-4b20-b92c-b74dc58b95cd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:27:27.853510 containerd[1592]: time="2025-11-01T00:27:27.853441208Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:27:27.854569 kubelet[2713]: E1101 00:27:27.854513 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-l9nqw" podUID="675112ea-20ac-4b20-b92c-b74dc58b95cd" Nov 1 00:27:27.896149 containerd[1592]: time="2025-11-01T00:27:27.896106620Z" level=info msg="StopPodSandbox for \"f362570be3d3ee6e4abb56156f3350d5fab206f8c712f1e6694da29678f51f1e\"" Nov 1 00:27:27.896621 containerd[1592]: time="2025-11-01T00:27:27.896560622Z" level=info msg="StopPodSandbox for \"520d7b921d970f346287c741c3e2a1b1906196dbeaa9792caf5dd8e417b1083a\"" Nov 1 00:27:27.993843 containerd[1592]: 2025-11-01 
00:27:27.948 [INFO][4850] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f362570be3d3ee6e4abb56156f3350d5fab206f8c712f1e6694da29678f51f1e" Nov 1 00:27:27.993843 containerd[1592]: 2025-11-01 00:27:27.949 [INFO][4850] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f362570be3d3ee6e4abb56156f3350d5fab206f8c712f1e6694da29678f51f1e" iface="eth0" netns="/var/run/netns/cni-a3f7a3ca-e923-3d3b-b4e4-e3e415982de5" Nov 1 00:27:27.993843 containerd[1592]: 2025-11-01 00:27:27.949 [INFO][4850] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f362570be3d3ee6e4abb56156f3350d5fab206f8c712f1e6694da29678f51f1e" iface="eth0" netns="/var/run/netns/cni-a3f7a3ca-e923-3d3b-b4e4-e3e415982de5" Nov 1 00:27:27.993843 containerd[1592]: 2025-11-01 00:27:27.949 [INFO][4850] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f362570be3d3ee6e4abb56156f3350d5fab206f8c712f1e6694da29678f51f1e" iface="eth0" netns="/var/run/netns/cni-a3f7a3ca-e923-3d3b-b4e4-e3e415982de5" Nov 1 00:27:27.993843 containerd[1592]: 2025-11-01 00:27:27.949 [INFO][4850] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f362570be3d3ee6e4abb56156f3350d5fab206f8c712f1e6694da29678f51f1e" Nov 1 00:27:27.993843 containerd[1592]: 2025-11-01 00:27:27.949 [INFO][4850] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f362570be3d3ee6e4abb56156f3350d5fab206f8c712f1e6694da29678f51f1e" Nov 1 00:27:27.993843 containerd[1592]: 2025-11-01 00:27:27.976 [INFO][4868] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f362570be3d3ee6e4abb56156f3350d5fab206f8c712f1e6694da29678f51f1e" HandleID="k8s-pod-network.f362570be3d3ee6e4abb56156f3350d5fab206f8c712f1e6694da29678f51f1e" Workload="localhost-k8s-calico--apiserver--65fc45bf6--dz8ms-eth0" Nov 1 00:27:27.993843 containerd[1592]: 2025-11-01 00:27:27.976 [INFO][4868] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:27:27.993843 containerd[1592]: 2025-11-01 00:27:27.976 [INFO][4868] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:27:27.993843 containerd[1592]: 2025-11-01 00:27:27.984 [WARNING][4868] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f362570be3d3ee6e4abb56156f3350d5fab206f8c712f1e6694da29678f51f1e" HandleID="k8s-pod-network.f362570be3d3ee6e4abb56156f3350d5fab206f8c712f1e6694da29678f51f1e" Workload="localhost-k8s-calico--apiserver--65fc45bf6--dz8ms-eth0" Nov 1 00:27:27.993843 containerd[1592]: 2025-11-01 00:27:27.984 [INFO][4868] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f362570be3d3ee6e4abb56156f3350d5fab206f8c712f1e6694da29678f51f1e" HandleID="k8s-pod-network.f362570be3d3ee6e4abb56156f3350d5fab206f8c712f1e6694da29678f51f1e" Workload="localhost-k8s-calico--apiserver--65fc45bf6--dz8ms-eth0" Nov 1 00:27:27.993843 containerd[1592]: 2025-11-01 00:27:27.987 [INFO][4868] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:27:27.993843 containerd[1592]: 2025-11-01 00:27:27.990 [INFO][4850] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="f362570be3d3ee6e4abb56156f3350d5fab206f8c712f1e6694da29678f51f1e" Nov 1 00:27:27.995273 containerd[1592]: time="2025-11-01T00:27:27.995094099Z" level=info msg="TearDown network for sandbox \"f362570be3d3ee6e4abb56156f3350d5fab206f8c712f1e6694da29678f51f1e\" successfully" Nov 1 00:27:27.995273 containerd[1592]: time="2025-11-01T00:27:27.995132814Z" level=info msg="StopPodSandbox for \"f362570be3d3ee6e4abb56156f3350d5fab206f8c712f1e6694da29678f51f1e\" returns successfully" Nov 1 00:27:27.996736 containerd[1592]: time="2025-11-01T00:27:27.996704967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65fc45bf6-dz8ms,Uid:a7329f0c-4569-4192-ab57-1ba0d9bc5c3f,Namespace:calico-apiserver,Attempt:1,}" Nov 1 00:27:27.997991 systemd[1]: run-netns-cni\x2da3f7a3ca\x2de923\x2d3d3b\x2db4e4\x2de3e415982de5.mount: Deactivated successfully. Nov 1 00:27:28.016333 containerd[1592]: 2025-11-01 00:27:27.947 [INFO][4851] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="520d7b921d970f346287c741c3e2a1b1906196dbeaa9792caf5dd8e417b1083a" Nov 1 00:27:28.016333 containerd[1592]: 2025-11-01 00:27:27.947 [INFO][4851] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="520d7b921d970f346287c741c3e2a1b1906196dbeaa9792caf5dd8e417b1083a" iface="eth0" netns="/var/run/netns/cni-75a8c472-806e-8b96-0d3c-ffcc9bf52f62" Nov 1 00:27:28.016333 containerd[1592]: 2025-11-01 00:27:27.947 [INFO][4851] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="520d7b921d970f346287c741c3e2a1b1906196dbeaa9792caf5dd8e417b1083a" iface="eth0" netns="/var/run/netns/cni-75a8c472-806e-8b96-0d3c-ffcc9bf52f62" Nov 1 00:27:28.016333 containerd[1592]: 2025-11-01 00:27:27.947 [INFO][4851] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="520d7b921d970f346287c741c3e2a1b1906196dbeaa9792caf5dd8e417b1083a" iface="eth0" netns="/var/run/netns/cni-75a8c472-806e-8b96-0d3c-ffcc9bf52f62" Nov 1 00:27:28.016333 containerd[1592]: 2025-11-01 00:27:27.947 [INFO][4851] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="520d7b921d970f346287c741c3e2a1b1906196dbeaa9792caf5dd8e417b1083a" Nov 1 00:27:28.016333 containerd[1592]: 2025-11-01 00:27:27.947 [INFO][4851] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="520d7b921d970f346287c741c3e2a1b1906196dbeaa9792caf5dd8e417b1083a" Nov 1 00:27:28.016333 containerd[1592]: 2025-11-01 00:27:27.980 [INFO][4866] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="520d7b921d970f346287c741c3e2a1b1906196dbeaa9792caf5dd8e417b1083a" HandleID="k8s-pod-network.520d7b921d970f346287c741c3e2a1b1906196dbeaa9792caf5dd8e417b1083a" Workload="localhost-k8s-calico--kube--controllers--847b98fc4d--cw68d-eth0" Nov 1 00:27:28.016333 containerd[1592]: 2025-11-01 00:27:27.981 [INFO][4866] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:27:28.016333 containerd[1592]: 2025-11-01 00:27:27.987 [INFO][4866] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:27:28.016333 containerd[1592]: 2025-11-01 00:27:28.005 [WARNING][4866] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="520d7b921d970f346287c741c3e2a1b1906196dbeaa9792caf5dd8e417b1083a" HandleID="k8s-pod-network.520d7b921d970f346287c741c3e2a1b1906196dbeaa9792caf5dd8e417b1083a" Workload="localhost-k8s-calico--kube--controllers--847b98fc4d--cw68d-eth0" Nov 1 00:27:28.016333 containerd[1592]: 2025-11-01 00:27:28.005 [INFO][4866] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="520d7b921d970f346287c741c3e2a1b1906196dbeaa9792caf5dd8e417b1083a" HandleID="k8s-pod-network.520d7b921d970f346287c741c3e2a1b1906196dbeaa9792caf5dd8e417b1083a" Workload="localhost-k8s-calico--kube--controllers--847b98fc4d--cw68d-eth0" Nov 1 00:27:28.016333 containerd[1592]: 2025-11-01 00:27:28.008 [INFO][4866] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:27:28.016333 containerd[1592]: 2025-11-01 00:27:28.012 [INFO][4851] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="520d7b921d970f346287c741c3e2a1b1906196dbeaa9792caf5dd8e417b1083a" Nov 1 00:27:28.016333 containerd[1592]: time="2025-11-01T00:27:28.016301478Z" level=info msg="TearDown network for sandbox \"520d7b921d970f346287c741c3e2a1b1906196dbeaa9792caf5dd8e417b1083a\" successfully" Nov 1 00:27:28.016333 containerd[1592]: time="2025-11-01T00:27:28.016327850Z" level=info msg="StopPodSandbox for \"520d7b921d970f346287c741c3e2a1b1906196dbeaa9792caf5dd8e417b1083a\" returns successfully" Nov 1 00:27:28.017765 containerd[1592]: time="2025-11-01T00:27:28.017720772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-847b98fc4d-cw68d,Uid:65680b46-920b-40e7-93fd-698ef81e20c8,Namespace:calico-system,Attempt:1,}" Nov 1 00:27:28.021128 systemd[1]: run-netns-cni\x2d75a8c472\x2d806e\x2d8b96\x2d0d3c\x2dffcc9bf52f62.mount: Deactivated successfully. Nov 1 00:27:28.075887 kubelet[2713]: E1101 00:27:28.075833 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5fc976b46c-8vsw4" podUID="4d8b0a34-66bd-4c22-a438-b5e5354489a4" Nov 1 00:27:28.077149 kubelet[2713]: E1101 00:27:28.076545 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-l9nqw" podUID="675112ea-20ac-4b20-b92c-b74dc58b95cd" Nov 1 00:27:28.160175 containerd[1592]: 
time="2025-11-01T00:27:28.159999363Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:27:28.162716 containerd[1592]: time="2025-11-01T00:27:28.162660186Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:27:28.162826 containerd[1592]: time="2025-11-01T00:27:28.162752335Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 00:27:28.163466 kubelet[2713]: E1101 00:27:28.163098 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:27:28.163466 kubelet[2713]: E1101 00:27:28.163192 2713 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:27:28.163638 kubelet[2713]: E1101 00:27:28.163501 2713 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jpqjq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-ksf9n_calico-system(41180a49-a14f-492f-9746-dfd093b11440): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:27:28.165098 kubelet[2713]: E1101 00:27:28.164720 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ksf9n" podUID="41180a49-a14f-492f-9746-dfd093b11440" Nov 1 00:27:28.179518 systemd-networkd[1249]: caliefbf0ec9906: Link UP Nov 1 00:27:28.181985 systemd-networkd[1249]: caliefbf0ec9906: Gained carrier Nov 1 00:27:28.198976 containerd[1592]: 2025-11-01 00:27:28.066 [INFO][4883] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--65fc45bf6--dz8ms-eth0 calico-apiserver-65fc45bf6- calico-apiserver a7329f0c-4569-4192-ab57-1ba0d9bc5c3f 1111 0 2025-11-01 00:26:57 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:65fc45bf6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-65fc45bf6-dz8ms eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliefbf0ec9906 [] [] }} ContainerID="c77955ff0276882fd36da787ef420a795f41620e0033cac5a3cf57718c8a7f1a" Namespace="calico-apiserver" Pod="calico-apiserver-65fc45bf6-dz8ms" WorkloadEndpoint="localhost-k8s-calico--apiserver--65fc45bf6--dz8ms-" Nov 1 00:27:28.198976 containerd[1592]: 2025-11-01 00:27:28.066 [INFO][4883] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c77955ff0276882fd36da787ef420a795f41620e0033cac5a3cf57718c8a7f1a" Namespace="calico-apiserver" Pod="calico-apiserver-65fc45bf6-dz8ms" WorkloadEndpoint="localhost-k8s-calico--apiserver--65fc45bf6--dz8ms-eth0" Nov 1 00:27:28.198976 containerd[1592]: 2025-11-01 00:27:28.126 [INFO][4911] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c77955ff0276882fd36da787ef420a795f41620e0033cac5a3cf57718c8a7f1a" HandleID="k8s-pod-network.c77955ff0276882fd36da787ef420a795f41620e0033cac5a3cf57718c8a7f1a" 
Workload="localhost-k8s-calico--apiserver--65fc45bf6--dz8ms-eth0" Nov 1 00:27:28.198976 containerd[1592]: 2025-11-01 00:27:28.127 [INFO][4911] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c77955ff0276882fd36da787ef420a795f41620e0033cac5a3cf57718c8a7f1a" HandleID="k8s-pod-network.c77955ff0276882fd36da787ef420a795f41620e0033cac5a3cf57718c8a7f1a" Workload="localhost-k8s-calico--apiserver--65fc45bf6--dz8ms-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000367590), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-65fc45bf6-dz8ms", "timestamp":"2025-11-01 00:27:28.126907186 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:27:28.198976 containerd[1592]: 2025-11-01 00:27:28.127 [INFO][4911] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:27:28.198976 containerd[1592]: 2025-11-01 00:27:28.127 [INFO][4911] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:27:28.198976 containerd[1592]: 2025-11-01 00:27:28.127 [INFO][4911] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 00:27:28.198976 containerd[1592]: 2025-11-01 00:27:28.135 [INFO][4911] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c77955ff0276882fd36da787ef420a795f41620e0033cac5a3cf57718c8a7f1a" host="localhost" Nov 1 00:27:28.198976 containerd[1592]: 2025-11-01 00:27:28.142 [INFO][4911] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 00:27:28.198976 containerd[1592]: 2025-11-01 00:27:28.147 [INFO][4911] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 00:27:28.198976 containerd[1592]: 2025-11-01 00:27:28.150 [INFO][4911] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 00:27:28.198976 containerd[1592]: 2025-11-01 00:27:28.153 [INFO][4911] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 00:27:28.198976 containerd[1592]: 2025-11-01 00:27:28.153 [INFO][4911] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c77955ff0276882fd36da787ef420a795f41620e0033cac5a3cf57718c8a7f1a" host="localhost" Nov 1 00:27:28.198976 containerd[1592]: 2025-11-01 00:27:28.156 [INFO][4911] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c77955ff0276882fd36da787ef420a795f41620e0033cac5a3cf57718c8a7f1a Nov 1 00:27:28.198976 containerd[1592]: 2025-11-01 00:27:28.161 [INFO][4911] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c77955ff0276882fd36da787ef420a795f41620e0033cac5a3cf57718c8a7f1a" host="localhost" Nov 1 00:27:28.198976 containerd[1592]: 2025-11-01 00:27:28.169 [INFO][4911] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.c77955ff0276882fd36da787ef420a795f41620e0033cac5a3cf57718c8a7f1a" host="localhost" Nov 1 00:27:28.198976 containerd[1592]: 2025-11-01 00:27:28.169 [INFO][4911] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.c77955ff0276882fd36da787ef420a795f41620e0033cac5a3cf57718c8a7f1a" host="localhost" Nov 1 00:27:28.198976 containerd[1592]: 2025-11-01 00:27:28.169 [INFO][4911] 
ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:27:28.198976 containerd[1592]: 2025-11-01 00:27:28.169 [INFO][4911] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="c77955ff0276882fd36da787ef420a795f41620e0033cac5a3cf57718c8a7f1a" HandleID="k8s-pod-network.c77955ff0276882fd36da787ef420a795f41620e0033cac5a3cf57718c8a7f1a" Workload="localhost-k8s-calico--apiserver--65fc45bf6--dz8ms-eth0" Nov 1 00:27:28.201159 containerd[1592]: 2025-11-01 00:27:28.173 [INFO][4883] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c77955ff0276882fd36da787ef420a795f41620e0033cac5a3cf57718c8a7f1a" Namespace="calico-apiserver" Pod="calico-apiserver-65fc45bf6-dz8ms" WorkloadEndpoint="localhost-k8s-calico--apiserver--65fc45bf6--dz8ms-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--65fc45bf6--dz8ms-eth0", GenerateName:"calico-apiserver-65fc45bf6-", Namespace:"calico-apiserver", SelfLink:"", UID:"a7329f0c-4569-4192-ab57-1ba0d9bc5c3f", ResourceVersion:"1111", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 26, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65fc45bf6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-65fc45bf6-dz8ms", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliefbf0ec9906", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:27:28.201159 containerd[1592]: 2025-11-01 00:27:28.174 [INFO][4883] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="c77955ff0276882fd36da787ef420a795f41620e0033cac5a3cf57718c8a7f1a" Namespace="calico-apiserver" Pod="calico-apiserver-65fc45bf6-dz8ms" WorkloadEndpoint="localhost-k8s-calico--apiserver--65fc45bf6--dz8ms-eth0" Nov 1 00:27:28.201159 containerd[1592]: 2025-11-01 00:27:28.174 [INFO][4883] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliefbf0ec9906 ContainerID="c77955ff0276882fd36da787ef420a795f41620e0033cac5a3cf57718c8a7f1a" Namespace="calico-apiserver" Pod="calico-apiserver-65fc45bf6-dz8ms" WorkloadEndpoint="localhost-k8s-calico--apiserver--65fc45bf6--dz8ms-eth0" Nov 1 00:27:28.201159 containerd[1592]: 2025-11-01 00:27:28.181 [INFO][4883] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c77955ff0276882fd36da787ef420a795f41620e0033cac5a3cf57718c8a7f1a" Namespace="calico-apiserver" Pod="calico-apiserver-65fc45bf6-dz8ms" WorkloadEndpoint="localhost-k8s-calico--apiserver--65fc45bf6--dz8ms-eth0" Nov 1 00:27:28.201159 containerd[1592]: 2025-11-01 00:27:28.182 [INFO][4883] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="c77955ff0276882fd36da787ef420a795f41620e0033cac5a3cf57718c8a7f1a" Namespace="calico-apiserver" Pod="calico-apiserver-65fc45bf6-dz8ms" WorkloadEndpoint="localhost-k8s-calico--apiserver--65fc45bf6--dz8ms-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--65fc45bf6--dz8ms-eth0", GenerateName:"calico-apiserver-65fc45bf6-", Namespace:"calico-apiserver", SelfLink:"", UID:"a7329f0c-4569-4192-ab57-1ba0d9bc5c3f", ResourceVersion:"1111", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 26, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65fc45bf6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c77955ff0276882fd36da787ef420a795f41620e0033cac5a3cf57718c8a7f1a", Pod:"calico-apiserver-65fc45bf6-dz8ms", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliefbf0ec9906", MAC:"52:0b:b0:56:5d:9c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:27:28.201159 containerd[1592]: 2025-11-01 00:27:28.195 [INFO][4883] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c77955ff0276882fd36da787ef420a795f41620e0033cac5a3cf57718c8a7f1a" Namespace="calico-apiserver" Pod="calico-apiserver-65fc45bf6-dz8ms" WorkloadEndpoint="localhost-k8s-calico--apiserver--65fc45bf6--dz8ms-eth0" Nov 1 00:27:28.226536 containerd[1592]: time="2025-11-01T00:27:28.224847087Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:27:28.226536 containerd[1592]: time="2025-11-01T00:27:28.224932983Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:27:28.226536 containerd[1592]: time="2025-11-01T00:27:28.224993701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:27:28.226536 containerd[1592]: time="2025-11-01T00:27:28.226140696Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:27:28.272475 systemd-resolved[1476]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 00:27:28.298960 systemd-networkd[1249]: cali8c7f529bf96: Link UP Nov 1 00:27:28.299924 systemd-networkd[1249]: cali8c7f529bf96: Gained carrier Nov 1 00:27:28.317983 containerd[1592]: time="2025-11-01T00:27:28.317874025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65fc45bf6-dz8ms,Uid:a7329f0c-4569-4192-ab57-1ba0d9bc5c3f,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"c77955ff0276882fd36da787ef420a795f41620e0033cac5a3cf57718c8a7f1a\"" Nov 1 00:27:28.319776 containerd[1592]: time="2025-11-01T00:27:28.319606136Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:27:28.448083 containerd[1592]: 2025-11-01 00:27:28.077 [INFO][4896] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--847b98fc4d--cw68d-eth0 calico-kube-controllers-847b98fc4d- calico-system 65680b46-920b-40e7-93fd-698ef81e20c8 1110 0 2025-11-01 00:27:02 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:847b98fc4d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-847b98fc4d-cw68d eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali8c7f529bf96 [] [] }} ContainerID="e9e3b1743965acc54234b97e7b25346f2372bd602828ca3b2bf95488b0407e50" Namespace="calico-system" Pod="calico-kube-controllers-847b98fc4d-cw68d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--847b98fc4d--cw68d-" Nov 1 00:27:28.448083 containerd[1592]: 2025-11-01 00:27:28.077 [INFO][4896] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e9e3b1743965acc54234b97e7b25346f2372bd602828ca3b2bf95488b0407e50" Namespace="calico-system" Pod="calico-kube-controllers-847b98fc4d-cw68d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--847b98fc4d--cw68d-eth0" Nov 1 00:27:28.448083 containerd[1592]: 2025-11-01 00:27:28.127 [INFO][4919] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e9e3b1743965acc54234b97e7b25346f2372bd602828ca3b2bf95488b0407e50" HandleID="k8s-pod-network.e9e3b1743965acc54234b97e7b25346f2372bd602828ca3b2bf95488b0407e50" Workload="localhost-k8s-calico--kube--controllers--847b98fc4d--cw68d-eth0" Nov 1 00:27:28.448083 containerd[1592]: 2025-11-01 00:27:28.127 [INFO][4919] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e9e3b1743965acc54234b97e7b25346f2372bd602828ca3b2bf95488b0407e50" HandleID="k8s-pod-network.e9e3b1743965acc54234b97e7b25346f2372bd602828ca3b2bf95488b0407e50" Workload="localhost-k8s-calico--kube--controllers--847b98fc4d--cw68d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a4490), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-847b98fc4d-cw68d", "timestamp":"2025-11-01 00:27:28.127540805 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:27:28.448083 containerd[1592]: 2025-11-01 00:27:28.127 
[INFO][4919] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:27:28.448083 containerd[1592]: 2025-11-01 00:27:28.170 [INFO][4919] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:27:28.448083 containerd[1592]: 2025-11-01 00:27:28.170 [INFO][4919] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 00:27:28.448083 containerd[1592]: 2025-11-01 00:27:28.237 [INFO][4919] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e9e3b1743965acc54234b97e7b25346f2372bd602828ca3b2bf95488b0407e50" host="localhost" Nov 1 00:27:28.448083 containerd[1592]: 2025-11-01 00:27:28.248 [INFO][4919] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 00:27:28.448083 containerd[1592]: 2025-11-01 00:27:28.261 [INFO][4919] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 00:27:28.448083 containerd[1592]: 2025-11-01 00:27:28.263 [INFO][4919] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 00:27:28.448083 containerd[1592]: 2025-11-01 00:27:28.266 [INFO][4919] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 00:27:28.448083 containerd[1592]: 2025-11-01 00:27:28.266 [INFO][4919] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e9e3b1743965acc54234b97e7b25346f2372bd602828ca3b2bf95488b0407e50" host="localhost" Nov 1 00:27:28.448083 containerd[1592]: 2025-11-01 00:27:28.268 [INFO][4919] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e9e3b1743965acc54234b97e7b25346f2372bd602828ca3b2bf95488b0407e50 Nov 1 00:27:28.448083 containerd[1592]: 2025-11-01 00:27:28.273 [INFO][4919] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e9e3b1743965acc54234b97e7b25346f2372bd602828ca3b2bf95488b0407e50" host="localhost" Nov 1 00:27:28.448083 containerd[1592]: 2025-11-01 00:27:28.287 [INFO][4919] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.e9e3b1743965acc54234b97e7b25346f2372bd602828ca3b2bf95488b0407e50" host="localhost" Nov 1 00:27:28.448083 containerd[1592]: 2025-11-01 00:27:28.287 [INFO][4919] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.e9e3b1743965acc54234b97e7b25346f2372bd602828ca3b2bf95488b0407e50" host="localhost" Nov 1 00:27:28.448083 containerd[1592]: 2025-11-01 00:27:28.288 [INFO][4919] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 00:27:28.448083 containerd[1592]: 2025-11-01 00:27:28.288 [INFO][4919] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="e9e3b1743965acc54234b97e7b25346f2372bd602828ca3b2bf95488b0407e50" HandleID="k8s-pod-network.e9e3b1743965acc54234b97e7b25346f2372bd602828ca3b2bf95488b0407e50" Workload="localhost-k8s-calico--kube--controllers--847b98fc4d--cw68d-eth0" Nov 1 00:27:28.450774 containerd[1592]: 2025-11-01 00:27:28.293 [INFO][4896] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e9e3b1743965acc54234b97e7b25346f2372bd602828ca3b2bf95488b0407e50" Namespace="calico-system" Pod="calico-kube-controllers-847b98fc4d-cw68d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--847b98fc4d--cw68d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--847b98fc4d--cw68d-eth0", GenerateName:"calico-kube-controllers-847b98fc4d-", Namespace:"calico-system", SelfLink:"", UID:"65680b46-920b-40e7-93fd-698ef81e20c8", ResourceVersion:"1110", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 27, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"847b98fc4d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-847b98fc4d-cw68d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8c7f529bf96", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:27:28.450774 containerd[1592]: 2025-11-01 00:27:28.294 [INFO][4896] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="e9e3b1743965acc54234b97e7b25346f2372bd602828ca3b2bf95488b0407e50" Namespace="calico-system" Pod="calico-kube-controllers-847b98fc4d-cw68d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--847b98fc4d--cw68d-eth0" Nov 1 00:27:28.450774 containerd[1592]: 2025-11-01 00:27:28.294 [INFO][4896] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8c7f529bf96 ContainerID="e9e3b1743965acc54234b97e7b25346f2372bd602828ca3b2bf95488b0407e50" Namespace="calico-system" Pod="calico-kube-controllers-847b98fc4d-cw68d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--847b98fc4d--cw68d-eth0" Nov 1 00:27:28.450774 containerd[1592]: 2025-11-01 00:27:28.300 [INFO][4896] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e9e3b1743965acc54234b97e7b25346f2372bd602828ca3b2bf95488b0407e50" Namespace="calico-system" Pod="calico-kube-controllers-847b98fc4d-cw68d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--847b98fc4d--cw68d-eth0" Nov 1 00:27:28.450774 containerd[1592]: 2025-11-01 00:27:28.300 [INFO][4896] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="e9e3b1743965acc54234b97e7b25346f2372bd602828ca3b2bf95488b0407e50" Namespace="calico-system" Pod="calico-kube-controllers-847b98fc4d-cw68d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--847b98fc4d--cw68d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--847b98fc4d--cw68d-eth0", GenerateName:"calico-kube-controllers-847b98fc4d-", Namespace:"calico-system", SelfLink:"", UID:"65680b46-920b-40e7-93fd-698ef81e20c8", ResourceVersion:"1110", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 27, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"847b98fc4d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e9e3b1743965acc54234b97e7b25346f2372bd602828ca3b2bf95488b0407e50", Pod:"calico-kube-controllers-847b98fc4d-cw68d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8c7f529bf96", MAC:"d2:e9:80:e4:eb:b6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:27:28.450774 containerd[1592]: 2025-11-01 00:27:28.443 [INFO][4896] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e9e3b1743965acc54234b97e7b25346f2372bd602828ca3b2bf95488b0407e50" Namespace="calico-system" Pod="calico-kube-controllers-847b98fc4d-cw68d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--847b98fc4d--cw68d-eth0" Nov 1 00:27:28.475561 containerd[1592]: time="2025-11-01T00:27:28.475285558Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:27:28.475561 containerd[1592]: time="2025-11-01T00:27:28.475346226Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:27:28.475561 containerd[1592]: time="2025-11-01T00:27:28.475356947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:27:28.475561 containerd[1592]: time="2025-11-01T00:27:28.475466530Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:27:28.508137 systemd-resolved[1476]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 00:27:28.545432 containerd[1592]: time="2025-11-01T00:27:28.545376445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-847b98fc4d-cw68d,Uid:65680b46-920b-40e7-93fd-698ef81e20c8,Namespace:calico-system,Attempt:1,} returns sandbox id \"e9e3b1743965acc54234b97e7b25346f2372bd602828ca3b2bf95488b0407e50\"" Nov 1 00:27:28.709735 containerd[1592]: time="2025-11-01T00:27:28.709564276Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:27:28.710965 containerd[1592]: time="2025-11-01T00:27:28.710905619Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:27:28.711152 containerd[1592]: time="2025-11-01T00:27:28.711041513Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:27:28.711311 kubelet[2713]: E1101 00:27:28.711236 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:27:28.711311 kubelet[2713]: E1101 00:27:28.711306 2713 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:27:28.711808 kubelet[2713]: E1101 00:27:28.711563 2713 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xvlq9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-65fc45bf6-dz8ms_calico-apiserver(a7329f0c-4569-4192-ab57-1ba0d9bc5c3f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:27:28.712341 containerd[1592]: time="2025-11-01T00:27:28.712083474Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:27:28.712737 kubelet[2713]: E1101 00:27:28.712671 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65fc45bf6-dz8ms" podUID="a7329f0c-4569-4192-ab57-1ba0d9bc5c3f" Nov 1 00:27:28.930259 systemd-networkd[1249]: calie446139c188: Gained IPv6LL Nov 1 00:27:29.014873 containerd[1592]: time="2025-11-01T00:27:29.014669122Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:27:29.025217 containerd[1592]: time="2025-11-01T00:27:29.025134289Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:27:29.025217 containerd[1592]: time="2025-11-01T00:27:29.025190137Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 00:27:29.025477 kubelet[2713]: E1101 00:27:29.025419 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:27:29.025534 kubelet[2713]: E1101 00:27:29.025484 2713 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:27:29.025678 kubelet[2713]: E1101 00:27:29.025614 2713 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ct9s2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-847b98fc4d-cw68d_calico-system(65680b46-920b-40e7-93fd-698ef81e20c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:27:29.027568 kubelet[2713]: E1101 00:27:29.027537 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-847b98fc4d-cw68d" podUID="65680b46-920b-40e7-93fd-698ef81e20c8" Nov 1 00:27:29.079046 kubelet[2713]: E1101 00:27:29.078784 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65fc45bf6-dz8ms" podUID="a7329f0c-4569-4192-ab57-1ba0d9bc5c3f" Nov 1 00:27:29.080875 kubelet[2713]: E1101 00:27:29.080044 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ksf9n" podUID="41180a49-a14f-492f-9746-dfd093b11440" Nov 1 00:27:29.080875 kubelet[2713]: E1101 00:27:29.080749 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-847b98fc4d-cw68d" podUID="65680b46-920b-40e7-93fd-698ef81e20c8" Nov 1 00:27:29.378237 systemd-networkd[1249]: cali8c7f529bf96: Gained IPv6LL Nov 1 00:27:29.506391 systemd-networkd[1249]: caliefbf0ec9906: Gained IPv6LL Nov 1 00:27:29.896351 containerd[1592]: time="2025-11-01T00:27:29.895979915Z" level=info msg="StopPodSandbox for \"6d5b7efa90610db921992a055211b5a08ddb076485f1d878f9dafeb00a951a7f\"" Nov 1 00:27:29.896351 containerd[1592]: time="2025-11-01T00:27:29.895979945Z" level=info msg="StopPodSandbox for \"c7ae21523b55482643e2c47fd21b51e54e57ae37b1e282ad64adecd2f4070b7a\"" Nov 1 00:27:29.896351 containerd[1592]: time="2025-11-01T00:27:29.895996797Z" level=info msg="StopPodSandbox for \"5f08d0a3a1e2afebfee85bba8187b17241970ea7e57f2ab304de733052f32010\"" Nov 1 00:27:30.082387 kubelet[2713]: E1101 00:27:30.082325 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-847b98fc4d-cw68d" podUID="65680b46-920b-40e7-93fd-698ef81e20c8" Nov 1 
00:27:30.082387 kubelet[2713]: E1101 00:27:30.082345 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65fc45bf6-dz8ms" podUID="a7329f0c-4569-4192-ab57-1ba0d9bc5c3f" Nov 1 00:27:30.333430 containerd[1592]: 2025-11-01 00:27:30.248 [INFO][5069] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5f08d0a3a1e2afebfee85bba8187b17241970ea7e57f2ab304de733052f32010" Nov 1 00:27:30.333430 containerd[1592]: 2025-11-01 00:27:30.248 [INFO][5069] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5f08d0a3a1e2afebfee85bba8187b17241970ea7e57f2ab304de733052f32010" iface="eth0" netns="/var/run/netns/cni-f1854dbd-af5a-ffd5-1a5f-dc53586d2cc2" Nov 1 00:27:30.333430 containerd[1592]: 2025-11-01 00:27:30.249 [INFO][5069] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5f08d0a3a1e2afebfee85bba8187b17241970ea7e57f2ab304de733052f32010" iface="eth0" netns="/var/run/netns/cni-f1854dbd-af5a-ffd5-1a5f-dc53586d2cc2" Nov 1 00:27:30.333430 containerd[1592]: 2025-11-01 00:27:30.249 [INFO][5069] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5f08d0a3a1e2afebfee85bba8187b17241970ea7e57f2ab304de733052f32010" iface="eth0" netns="/var/run/netns/cni-f1854dbd-af5a-ffd5-1a5f-dc53586d2cc2" Nov 1 00:27:30.333430 containerd[1592]: 2025-11-01 00:27:30.249 [INFO][5069] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5f08d0a3a1e2afebfee85bba8187b17241970ea7e57f2ab304de733052f32010" Nov 1 00:27:30.333430 containerd[1592]: 2025-11-01 00:27:30.249 [INFO][5069] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5f08d0a3a1e2afebfee85bba8187b17241970ea7e57f2ab304de733052f32010" Nov 1 00:27:30.333430 containerd[1592]: 2025-11-01 00:27:30.283 [INFO][5098] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5f08d0a3a1e2afebfee85bba8187b17241970ea7e57f2ab304de733052f32010" HandleID="k8s-pod-network.5f08d0a3a1e2afebfee85bba8187b17241970ea7e57f2ab304de733052f32010" Workload="localhost-k8s-coredns--668d6bf9bc--mfl96-eth0" Nov 1 00:27:30.333430 containerd[1592]: 2025-11-01 00:27:30.284 [INFO][5098] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:27:30.333430 containerd[1592]: 2025-11-01 00:27:30.284 [INFO][5098] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:27:30.333430 containerd[1592]: 2025-11-01 00:27:30.323 [WARNING][5098] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5f08d0a3a1e2afebfee85bba8187b17241970ea7e57f2ab304de733052f32010" HandleID="k8s-pod-network.5f08d0a3a1e2afebfee85bba8187b17241970ea7e57f2ab304de733052f32010" Workload="localhost-k8s-coredns--668d6bf9bc--mfl96-eth0" Nov 1 00:27:30.333430 containerd[1592]: 2025-11-01 00:27:30.323 [INFO][5098] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5f08d0a3a1e2afebfee85bba8187b17241970ea7e57f2ab304de733052f32010" HandleID="k8s-pod-network.5f08d0a3a1e2afebfee85bba8187b17241970ea7e57f2ab304de733052f32010" Workload="localhost-k8s-coredns--668d6bf9bc--mfl96-eth0" Nov 1 00:27:30.333430 containerd[1592]: 2025-11-01 00:27:30.326 [INFO][5098] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:27:30.333430 containerd[1592]: 2025-11-01 00:27:30.330 [INFO][5069] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5f08d0a3a1e2afebfee85bba8187b17241970ea7e57f2ab304de733052f32010" Nov 1 00:27:30.334339 containerd[1592]: time="2025-11-01T00:27:30.333630095Z" level=info msg="TearDown network for sandbox \"5f08d0a3a1e2afebfee85bba8187b17241970ea7e57f2ab304de733052f32010\" successfully" Nov 1 00:27:30.334339 containerd[1592]: time="2025-11-01T00:27:30.333682907Z" level=info msg="StopPodSandbox for \"5f08d0a3a1e2afebfee85bba8187b17241970ea7e57f2ab304de733052f32010\" returns successfully" Nov 1 00:27:30.334412 kubelet[2713]: E1101 00:27:30.334084 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:27:30.336598 containerd[1592]: time="2025-11-01T00:27:30.334596107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mfl96,Uid:bd83f5e8-3d82-42fe-a0b0-5807c8a2598f,Namespace:kube-system,Attempt:1,}" Nov 1 00:27:30.338992 systemd[1]: run-netns-cni\x2df1854dbd\x2daf5a\x2dffd5\x2d1a5f\x2ddc53586d2cc2.mount: Deactivated successfully. Nov 1 00:27:30.355997 containerd[1592]: 2025-11-01 00:27:30.246 [INFO][5074] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c7ae21523b55482643e2c47fd21b51e54e57ae37b1e282ad64adecd2f4070b7a" Nov 1 00:27:30.355997 containerd[1592]: 2025-11-01 00:27:30.246 [INFO][5074] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c7ae21523b55482643e2c47fd21b51e54e57ae37b1e282ad64adecd2f4070b7a" iface="eth0" netns="/var/run/netns/cni-d6a399ef-b79e-48a6-d2bc-a9c803174502" Nov 1 00:27:30.355997 containerd[1592]: 2025-11-01 00:27:30.248 [INFO][5074] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c7ae21523b55482643e2c47fd21b51e54e57ae37b1e282ad64adecd2f4070b7a" iface="eth0" netns="/var/run/netns/cni-d6a399ef-b79e-48a6-d2bc-a9c803174502" Nov 1 00:27:30.355997 containerd[1592]: 2025-11-01 00:27:30.249 [INFO][5074] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c7ae21523b55482643e2c47fd21b51e54e57ae37b1e282ad64adecd2f4070b7a" iface="eth0" netns="/var/run/netns/cni-d6a399ef-b79e-48a6-d2bc-a9c803174502" Nov 1 00:27:30.355997 containerd[1592]: 2025-11-01 00:27:30.249 [INFO][5074] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c7ae21523b55482643e2c47fd21b51e54e57ae37b1e282ad64adecd2f4070b7a" Nov 1 00:27:30.355997 containerd[1592]: 2025-11-01 00:27:30.249 [INFO][5074] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c7ae21523b55482643e2c47fd21b51e54e57ae37b1e282ad64adecd2f4070b7a" Nov 1 00:27:30.355997 containerd[1592]: 2025-11-01 00:27:30.299 [INFO][5102] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c7ae21523b55482643e2c47fd21b51e54e57ae37b1e282ad64adecd2f4070b7a" HandleID="k8s-pod-network.c7ae21523b55482643e2c47fd21b51e54e57ae37b1e282ad64adecd2f4070b7a" Workload="localhost-k8s-calico--apiserver--65fc45bf6--gxk8k-eth0" Nov 1 00:27:30.355997 containerd[1592]: 2025-11-01 00:27:30.299 [INFO][5102] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:27:30.355997 containerd[1592]: 2025-11-01 00:27:30.326 [INFO][5102] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:27:30.355997 containerd[1592]: 2025-11-01 00:27:30.340 [WARNING][5102] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="c7ae21523b55482643e2c47fd21b51e54e57ae37b1e282ad64adecd2f4070b7a" HandleID="k8s-pod-network.c7ae21523b55482643e2c47fd21b51e54e57ae37b1e282ad64adecd2f4070b7a" Workload="localhost-k8s-calico--apiserver--65fc45bf6--gxk8k-eth0" Nov 1 00:27:30.355997 containerd[1592]: 2025-11-01 00:27:30.341 [INFO][5102] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c7ae21523b55482643e2c47fd21b51e54e57ae37b1e282ad64adecd2f4070b7a" HandleID="k8s-pod-network.c7ae21523b55482643e2c47fd21b51e54e57ae37b1e282ad64adecd2f4070b7a" Workload="localhost-k8s-calico--apiserver--65fc45bf6--gxk8k-eth0" Nov 1 00:27:30.355997 containerd[1592]: 2025-11-01 00:27:30.347 [INFO][5102] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:27:30.355997 containerd[1592]: 2025-11-01 00:27:30.352 [INFO][5074] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c7ae21523b55482643e2c47fd21b51e54e57ae37b1e282ad64adecd2f4070b7a" Nov 1 00:27:30.364288 systemd[1]: run-netns-cni\x2dd6a399ef\x2db79e\x2d48a6\x2dd2bc\x2da9c803174502.mount: Deactivated successfully. Nov 1 00:27:30.365291 containerd[1592]: time="2025-11-01T00:27:30.365240045Z" level=info msg="TearDown network for sandbox \"c7ae21523b55482643e2c47fd21b51e54e57ae37b1e282ad64adecd2f4070b7a\" successfully" Nov 1 00:27:30.365291 containerd[1592]: time="2025-11-01T00:27:30.365276506Z" level=info msg="StopPodSandbox for \"c7ae21523b55482643e2c47fd21b51e54e57ae37b1e282ad64adecd2f4070b7a\" returns successfully" Nov 1 00:27:30.366258 containerd[1592]: time="2025-11-01T00:27:30.366236325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65fc45bf6-gxk8k,Uid:8ee5c64c-a2fc-4dbd-b9a2-77e7c3c0f3b8,Namespace:calico-apiserver,Attempt:1,}" Nov 1 00:27:30.370468 containerd[1592]: 2025-11-01 00:27:30.245 [INFO][5079] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6d5b7efa90610db921992a055211b5a08ddb076485f1d878f9dafeb00a951a7f" Nov 1 00:27:30.370468 containerd[1592]: 2025-11-01 00:27:30.246 [INFO][5079] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="6d5b7efa90610db921992a055211b5a08ddb076485f1d878f9dafeb00a951a7f" iface="eth0" netns="/var/run/netns/cni-9358392e-eeac-eec4-f998-9f865d791ad6" Nov 1 00:27:30.370468 containerd[1592]: 2025-11-01 00:27:30.247 [INFO][5079] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6d5b7efa90610db921992a055211b5a08ddb076485f1d878f9dafeb00a951a7f" iface="eth0" netns="/var/run/netns/cni-9358392e-eeac-eec4-f998-9f865d791ad6" Nov 1 00:27:30.370468 containerd[1592]: 2025-11-01 00:27:30.249 [INFO][5079] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6d5b7efa90610db921992a055211b5a08ddb076485f1d878f9dafeb00a951a7f" iface="eth0" netns="/var/run/netns/cni-9358392e-eeac-eec4-f998-9f865d791ad6" Nov 1 00:27:30.370468 containerd[1592]: 2025-11-01 00:27:30.249 [INFO][5079] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6d5b7efa90610db921992a055211b5a08ddb076485f1d878f9dafeb00a951a7f" Nov 1 00:27:30.370468 containerd[1592]: 2025-11-01 00:27:30.249 [INFO][5079] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6d5b7efa90610db921992a055211b5a08ddb076485f1d878f9dafeb00a951a7f" Nov 1 00:27:30.370468 containerd[1592]: 2025-11-01 00:27:30.301 [INFO][5101] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6d5b7efa90610db921992a055211b5a08ddb076485f1d878f9dafeb00a951a7f" HandleID="k8s-pod-network.6d5b7efa90610db921992a055211b5a08ddb076485f1d878f9dafeb00a951a7f" Workload="localhost-k8s-coredns--668d6bf9bc--kjzn2-eth0" Nov 1 00:27:30.370468 containerd[1592]: 2025-11-01 00:27:30.301 [INFO][5101] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:27:30.370468 containerd[1592]: 2025-11-01 00:27:30.348 [INFO][5101] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:27:30.370468 containerd[1592]: 2025-11-01 00:27:30.357 [WARNING][5101] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="6d5b7efa90610db921992a055211b5a08ddb076485f1d878f9dafeb00a951a7f" HandleID="k8s-pod-network.6d5b7efa90610db921992a055211b5a08ddb076485f1d878f9dafeb00a951a7f" Workload="localhost-k8s-coredns--668d6bf9bc--kjzn2-eth0" Nov 1 00:27:30.370468 containerd[1592]: 2025-11-01 00:27:30.357 [INFO][5101] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6d5b7efa90610db921992a055211b5a08ddb076485f1d878f9dafeb00a951a7f" HandleID="k8s-pod-network.6d5b7efa90610db921992a055211b5a08ddb076485f1d878f9dafeb00a951a7f" Workload="localhost-k8s-coredns--668d6bf9bc--kjzn2-eth0" Nov 1 00:27:30.370468 containerd[1592]: 2025-11-01 00:27:30.359 [INFO][5101] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:27:30.370468 containerd[1592]: 2025-11-01 00:27:30.367 [INFO][5079] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="6d5b7efa90610db921992a055211b5a08ddb076485f1d878f9dafeb00a951a7f" Nov 1 00:27:30.371871 containerd[1592]: time="2025-11-01T00:27:30.371118531Z" level=info msg="TearDown network for sandbox \"6d5b7efa90610db921992a055211b5a08ddb076485f1d878f9dafeb00a951a7f\" successfully" Nov 1 00:27:30.371871 containerd[1592]: time="2025-11-01T00:27:30.371141695Z" level=info msg="StopPodSandbox for \"6d5b7efa90610db921992a055211b5a08ddb076485f1d878f9dafeb00a951a7f\" returns successfully" Nov 1 00:27:30.371955 kubelet[2713]: E1101 00:27:30.371439 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:27:30.372177 containerd[1592]: time="2025-11-01T00:27:30.372155419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kjzn2,Uid:e3bbd1b7-2cec-41ab-97aa-54499c93466d,Namespace:kube-system,Attempt:1,}" Nov 1 00:27:30.376155 systemd[1]: run-netns-cni\x2d9358392e\x2deeac\x2deec4\x2df998\x2d9f865d791ad6.mount: Deactivated successfully. Nov 1 00:27:30.752554 systemd-networkd[1249]: calif314535f4fb: Link UP Nov 1 00:27:30.755353 systemd-networkd[1249]: calif314535f4fb: Gained carrier Nov 1 00:27:30.805638 containerd[1592]: 2025-11-01 00:27:30.449 [INFO][5122] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--mfl96-eth0 coredns-668d6bf9bc- kube-system bd83f5e8-3d82-42fe-a0b0-5807c8a2598f 1170 0 2025-11-01 00:26:45 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-mfl96 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif314535f4fb [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b1714a4bda9ba95d275fe258bb7443585e015a5b6819248f931124dcd9472cdc" Namespace="kube-system" Pod="coredns-668d6bf9bc-mfl96" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mfl96-" Nov 1 00:27:30.805638 containerd[1592]: 2025-11-01 00:27:30.449 [INFO][5122] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b1714a4bda9ba95d275fe258bb7443585e015a5b6819248f931124dcd9472cdc" Namespace="kube-system" Pod="coredns-668d6bf9bc-mfl96" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mfl96-eth0" Nov 1 00:27:30.805638 containerd[1592]: 2025-11-01 00:27:30.489 [INFO][5162] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b1714a4bda9ba95d275fe258bb7443585e015a5b6819248f931124dcd9472cdc" HandleID="k8s-pod-network.b1714a4bda9ba95d275fe258bb7443585e015a5b6819248f931124dcd9472cdc" Workload="localhost-k8s-coredns--668d6bf9bc--mfl96-eth0" Nov 1 00:27:30.805638 containerd[1592]: 2025-11-01 00:27:30.490 [INFO][5162] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b1714a4bda9ba95d275fe258bb7443585e015a5b6819248f931124dcd9472cdc" HandleID="k8s-pod-network.b1714a4bda9ba95d275fe258bb7443585e015a5b6819248f931124dcd9472cdc" Workload="localhost-k8s-coredns--668d6bf9bc--mfl96-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024ff60), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-mfl96", "timestamp":"2025-11-01 00:27:30.489456038 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:27:30.805638 containerd[1592]: 2025-11-01 00:27:30.490 [INFO][5162] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:27:30.805638 containerd[1592]: 2025-11-01 00:27:30.490 [INFO][5162] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:27:30.805638 containerd[1592]: 2025-11-01 00:27:30.492 [INFO][5162] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 00:27:30.805638 containerd[1592]: 2025-11-01 00:27:30.500 [INFO][5162] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b1714a4bda9ba95d275fe258bb7443585e015a5b6819248f931124dcd9472cdc" host="localhost" Nov 1 00:27:30.805638 containerd[1592]: 2025-11-01 00:27:30.507 [INFO][5162] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 00:27:30.805638 containerd[1592]: 2025-11-01 00:27:30.513 [INFO][5162] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 00:27:30.805638 containerd[1592]: 2025-11-01 00:27:30.516 [INFO][5162] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 00:27:30.805638 containerd[1592]: 2025-11-01 00:27:30.520 [INFO][5162] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 00:27:30.805638 containerd[1592]: 2025-11-01 00:27:30.520 [INFO][5162] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b1714a4bda9ba95d275fe258bb7443585e015a5b6819248f931124dcd9472cdc" host="localhost" Nov 1 00:27:30.805638 containerd[1592]: 2025-11-01 00:27:30.522 [INFO][5162] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b1714a4bda9ba95d275fe258bb7443585e015a5b6819248f931124dcd9472cdc Nov 1 00:27:30.805638 containerd[1592]: 2025-11-01 00:27:30.629 [INFO][5162] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b1714a4bda9ba95d275fe258bb7443585e015a5b6819248f931124dcd9472cdc" host="localhost" Nov 1 00:27:30.805638 containerd[1592]: 2025-11-01 00:27:30.743 [INFO][5162] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.b1714a4bda9ba95d275fe258bb7443585e015a5b6819248f931124dcd9472cdc" host="localhost" Nov 1 00:27:30.805638 containerd[1592]: 2025-11-01 00:27:30.743 [INFO][5162] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.b1714a4bda9ba95d275fe258bb7443585e015a5b6819248f931124dcd9472cdc" host="localhost" Nov 1 00:27:30.805638 containerd[1592]: 2025-11-01 00:27:30.743 [INFO][5162] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 00:27:30.805638 containerd[1592]: 2025-11-01 00:27:30.743 [INFO][5162] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="b1714a4bda9ba95d275fe258bb7443585e015a5b6819248f931124dcd9472cdc" HandleID="k8s-pod-network.b1714a4bda9ba95d275fe258bb7443585e015a5b6819248f931124dcd9472cdc" Workload="localhost-k8s-coredns--668d6bf9bc--mfl96-eth0" Nov 1 00:27:30.806430 containerd[1592]: 2025-11-01 00:27:30.748 [INFO][5122] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b1714a4bda9ba95d275fe258bb7443585e015a5b6819248f931124dcd9472cdc" Namespace="kube-system" Pod="coredns-668d6bf9bc-mfl96" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mfl96-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--mfl96-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"bd83f5e8-3d82-42fe-a0b0-5807c8a2598f", ResourceVersion:"1170", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 26, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-mfl96", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif314535f4fb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:27:30.806430 containerd[1592]: 2025-11-01 00:27:30.748 [INFO][5122] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="b1714a4bda9ba95d275fe258bb7443585e015a5b6819248f931124dcd9472cdc" Namespace="kube-system" Pod="coredns-668d6bf9bc-mfl96" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mfl96-eth0" Nov 1 00:27:30.806430 containerd[1592]: 2025-11-01 00:27:30.748 [INFO][5122] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif314535f4fb ContainerID="b1714a4bda9ba95d275fe258bb7443585e015a5b6819248f931124dcd9472cdc" Namespace="kube-system" Pod="coredns-668d6bf9bc-mfl96" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mfl96-eth0" Nov 1 00:27:30.806430 containerd[1592]: 2025-11-01 00:27:30.752 [INFO][5122] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b1714a4bda9ba95d275fe258bb7443585e015a5b6819248f931124dcd9472cdc" Namespace="kube-system" Pod="coredns-668d6bf9bc-mfl96" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mfl96-eth0" Nov 1 00:27:30.806430 
containerd[1592]: 2025-11-01 00:27:30.753 [INFO][5122] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b1714a4bda9ba95d275fe258bb7443585e015a5b6819248f931124dcd9472cdc" Namespace="kube-system" Pod="coredns-668d6bf9bc-mfl96" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mfl96-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--mfl96-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"bd83f5e8-3d82-42fe-a0b0-5807c8a2598f", ResourceVersion:"1170", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 26, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b1714a4bda9ba95d275fe258bb7443585e015a5b6819248f931124dcd9472cdc", Pod:"coredns-668d6bf9bc-mfl96", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif314535f4fb", MAC:"e6:ea:9d:b2:36:fc", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:27:30.806430 containerd[1592]: 2025-11-01 00:27:30.781 [INFO][5122] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b1714a4bda9ba95d275fe258bb7443585e015a5b6819248f931124dcd9472cdc" Namespace="kube-system" Pod="coredns-668d6bf9bc-mfl96" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mfl96-eth0" Nov 1 00:27:30.866202 containerd[1592]: time="2025-11-01T00:27:30.866089956Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:27:30.866202 containerd[1592]: time="2025-11-01T00:27:30.866155092Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:27:30.866202 containerd[1592]: time="2025-11-01T00:27:30.866168488Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:27:30.866390 containerd[1592]: time="2025-11-01T00:27:30.866280193Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:27:30.913340 systemd-resolved[1476]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 00:27:30.953566 containerd[1592]: time="2025-11-01T00:27:30.953526176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mfl96,Uid:bd83f5e8-3d82-42fe-a0b0-5807c8a2598f,Namespace:kube-system,Attempt:1,} returns sandbox id \"b1714a4bda9ba95d275fe258bb7443585e015a5b6819248f931124dcd9472cdc\"" Nov 1 00:27:30.954293 kubelet[2713]: E1101 00:27:30.954263 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:27:30.956149 containerd[1592]: time="2025-11-01T00:27:30.955912969Z" level=info msg="CreateContainer within sandbox \"b1714a4bda9ba95d275fe258bb7443585e015a5b6819248f931124dcd9472cdc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 00:27:31.752845 systemd-networkd[1249]: cali89c2e5335c6: Link UP Nov 1 00:27:31.753240 systemd-networkd[1249]: cali89c2e5335c6: Gained carrier Nov 1 00:27:31.834417 systemd[1]: Started sshd@9-10.0.0.119:22-10.0.0.1:46268.service - OpenSSH per-connection server daemon (10.0.0.1:46268). Nov 1 00:27:31.872184 sshd[5245]: Accepted publickey for core from 10.0.0.1 port 46268 ssh2: RSA SHA256:jhdmrhZXRbwOcBbBZQf03C3Vb9eSOfvUjDz7MOKA8jE Nov 1 00:27:31.874329 systemd-networkd[1249]: calif314535f4fb: Gained IPv6LL Nov 1 00:27:31.874692 sshd[5245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:27:31.879826 systemd-logind[1577]: New session 10 of user core. Nov 1 00:27:31.889671 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 1 00:27:32.038537 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount262676033.mount: Deactivated successfully. Nov 1 00:27:32.043235 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4225803787.mount: Deactivated successfully. 
Nov 1 00:27:32.150266 containerd[1592]: 2025-11-01 00:27:30.476 [INFO][5138] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--65fc45bf6--gxk8k-eth0 calico-apiserver-65fc45bf6- calico-apiserver 8ee5c64c-a2fc-4dbd-b9a2-77e7c3c0f3b8 1169 0 2025-11-01 00:26:57 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:65fc45bf6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-65fc45bf6-gxk8k eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali89c2e5335c6 [] [] }} ContainerID="1bef3add2227a95f0c2a89ebd9b6be61cef857af448059408e3fe82f4db74d3c" Namespace="calico-apiserver" Pod="calico-apiserver-65fc45bf6-gxk8k" WorkloadEndpoint="localhost-k8s-calico--apiserver--65fc45bf6--gxk8k-" Nov 1 00:27:32.150266 containerd[1592]: 2025-11-01 00:27:30.476 [INFO][5138] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1bef3add2227a95f0c2a89ebd9b6be61cef857af448059408e3fe82f4db74d3c" Namespace="calico-apiserver" Pod="calico-apiserver-65fc45bf6-gxk8k" WorkloadEndpoint="localhost-k8s-calico--apiserver--65fc45bf6--gxk8k-eth0" Nov 1 00:27:32.150266 containerd[1592]: 2025-11-01 00:27:30.517 [INFO][5173] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1bef3add2227a95f0c2a89ebd9b6be61cef857af448059408e3fe82f4db74d3c" HandleID="k8s-pod-network.1bef3add2227a95f0c2a89ebd9b6be61cef857af448059408e3fe82f4db74d3c" Workload="localhost-k8s-calico--apiserver--65fc45bf6--gxk8k-eth0" Nov 1 00:27:32.150266 containerd[1592]: 2025-11-01 00:27:30.517 [INFO][5173] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1bef3add2227a95f0c2a89ebd9b6be61cef857af448059408e3fe82f4db74d3c" HandleID="k8s-pod-network.1bef3add2227a95f0c2a89ebd9b6be61cef857af448059408e3fe82f4db74d3c" Workload="localhost-k8s-calico--apiserver--65fc45bf6--gxk8k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000185e20), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-65fc45bf6-gxk8k", "timestamp":"2025-11-01 00:27:30.517073696 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:27:32.150266 containerd[1592]: 2025-11-01 00:27:30.517 [INFO][5173] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:27:32.150266 containerd[1592]: 2025-11-01 00:27:30.743 [INFO][5173] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:27:32.150266 containerd[1592]: 2025-11-01 00:27:30.743 [INFO][5173] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 00:27:32.150266 containerd[1592]: 2025-11-01 00:27:30.755 [INFO][5173] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1bef3add2227a95f0c2a89ebd9b6be61cef857af448059408e3fe82f4db74d3c" host="localhost" Nov 1 00:27:32.150266 containerd[1592]: 2025-11-01 00:27:30.772 [INFO][5173] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 00:27:32.150266 containerd[1592]: 2025-11-01 00:27:30.802 [INFO][5173] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 00:27:32.150266 containerd[1592]: 2025-11-01 00:27:30.823 [INFO][5173] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 00:27:32.150266 containerd[1592]: 2025-11-01 00:27:30.835 [INFO][5173] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 00:27:32.150266 containerd[1592]: 2025-11-01 00:27:30.835 [INFO][5173] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1bef3add2227a95f0c2a89ebd9b6be61cef857af448059408e3fe82f4db74d3c" host="localhost" Nov 1 00:27:32.150266 containerd[1592]: 2025-11-01 00:27:30.886 [INFO][5173] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1bef3add2227a95f0c2a89ebd9b6be61cef857af448059408e3fe82f4db74d3c Nov 1 00:27:32.150266 containerd[1592]: 2025-11-01 00:27:31.204 [INFO][5173] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1bef3add2227a95f0c2a89ebd9b6be61cef857af448059408e3fe82f4db74d3c" host="localhost" Nov 1 00:27:32.150266 containerd[1592]: 2025-11-01 00:27:31.746 [INFO][5173] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.1bef3add2227a95f0c2a89ebd9b6be61cef857af448059408e3fe82f4db74d3c" host="localhost" Nov 1 00:27:32.150266 containerd[1592]: 2025-11-01 00:27:31.746 [INFO][5173] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.1bef3add2227a95f0c2a89ebd9b6be61cef857af448059408e3fe82f4db74d3c" host="localhost" Nov 1 00:27:32.150266 containerd[1592]: 2025-11-01 00:27:31.746 [INFO][5173] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 00:27:32.150266 containerd[1592]: 2025-11-01 00:27:31.746 [INFO][5173] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="1bef3add2227a95f0c2a89ebd9b6be61cef857af448059408e3fe82f4db74d3c" HandleID="k8s-pod-network.1bef3add2227a95f0c2a89ebd9b6be61cef857af448059408e3fe82f4db74d3c" Workload="localhost-k8s-calico--apiserver--65fc45bf6--gxk8k-eth0" Nov 1 00:27:32.153787 containerd[1592]: 2025-11-01 00:27:31.749 [INFO][5138] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1bef3add2227a95f0c2a89ebd9b6be61cef857af448059408e3fe82f4db74d3c" Namespace="calico-apiserver" Pod="calico-apiserver-65fc45bf6-gxk8k" WorkloadEndpoint="localhost-k8s-calico--apiserver--65fc45bf6--gxk8k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--65fc45bf6--gxk8k-eth0", GenerateName:"calico-apiserver-65fc45bf6-", Namespace:"calico-apiserver", SelfLink:"", UID:"8ee5c64c-a2fc-4dbd-b9a2-77e7c3c0f3b8", ResourceVersion:"1169", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 26, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65fc45bf6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-65fc45bf6-gxk8k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali89c2e5335c6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:27:32.153787 containerd[1592]: 2025-11-01 00:27:31.749 [INFO][5138] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="1bef3add2227a95f0c2a89ebd9b6be61cef857af448059408e3fe82f4db74d3c" Namespace="calico-apiserver" Pod="calico-apiserver-65fc45bf6-gxk8k" WorkloadEndpoint="localhost-k8s-calico--apiserver--65fc45bf6--gxk8k-eth0" Nov 1 00:27:32.153787 containerd[1592]: 2025-11-01 00:27:31.749 [INFO][5138] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali89c2e5335c6 ContainerID="1bef3add2227a95f0c2a89ebd9b6be61cef857af448059408e3fe82f4db74d3c" Namespace="calico-apiserver" Pod="calico-apiserver-65fc45bf6-gxk8k" WorkloadEndpoint="localhost-k8s-calico--apiserver--65fc45bf6--gxk8k-eth0" Nov 1 00:27:32.153787 containerd[1592]: 2025-11-01 00:27:31.753 [INFO][5138] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1bef3add2227a95f0c2a89ebd9b6be61cef857af448059408e3fe82f4db74d3c" Namespace="calico-apiserver" Pod="calico-apiserver-65fc45bf6-gxk8k" WorkloadEndpoint="localhost-k8s-calico--apiserver--65fc45bf6--gxk8k-eth0" Nov 1 00:27:32.153787 containerd[1592]: 2025-11-01 00:27:31.754 [INFO][5138] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="1bef3add2227a95f0c2a89ebd9b6be61cef857af448059408e3fe82f4db74d3c" Namespace="calico-apiserver" Pod="calico-apiserver-65fc45bf6-gxk8k" WorkloadEndpoint="localhost-k8s-calico--apiserver--65fc45bf6--gxk8k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--65fc45bf6--gxk8k-eth0", GenerateName:"calico-apiserver-65fc45bf6-", Namespace:"calico-apiserver", SelfLink:"", UID:"8ee5c64c-a2fc-4dbd-b9a2-77e7c3c0f3b8", ResourceVersion:"1169", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 26, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65fc45bf6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1bef3add2227a95f0c2a89ebd9b6be61cef857af448059408e3fe82f4db74d3c", Pod:"calico-apiserver-65fc45bf6-gxk8k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali89c2e5335c6", MAC:"26:35:44:fd:46:4c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:27:32.153787 containerd[1592]: 2025-11-01 00:27:32.146 [INFO][5138] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1bef3add2227a95f0c2a89ebd9b6be61cef857af448059408e3fe82f4db74d3c" Namespace="calico-apiserver" Pod="calico-apiserver-65fc45bf6-gxk8k" WorkloadEndpoint="localhost-k8s-calico--apiserver--65fc45bf6--gxk8k-eth0" Nov 1 00:27:32.220439 sshd[5245]: pam_unix(sshd:session): session closed for user core Nov 1 00:27:32.223101 systemd-networkd[1249]: cali6b06f2037f4: Link UP Nov 1 00:27:32.228130 systemd-networkd[1249]: cali6b06f2037f4: Gained carrier Nov 1 00:27:32.229297 systemd[1]: sshd@9-10.0.0.119:22-10.0.0.1:46268.service: Deactivated successfully. Nov 1 00:27:32.234258 containerd[1592]: time="2025-11-01T00:27:32.234125244Z" level=info msg="CreateContainer within sandbox \"b1714a4bda9ba95d275fe258bb7443585e015a5b6819248f931124dcd9472cdc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1df3ab03838a74d188f9d712f790637f957835b1f41ab0d9fe83670eb6a36816\"" Nov 1 00:27:32.235006 systemd[1]: session-10.scope: Deactivated successfully. Nov 1 00:27:32.236460 containerd[1592]: time="2025-11-01T00:27:32.235698946Z" level=info msg="StartContainer for \"1df3ab03838a74d188f9d712f790637f957835b1f41ab0d9fe83670eb6a36816\"" Nov 1 00:27:32.241287 systemd-logind[1577]: Session 10 logged out. Waiting for processes to exit. Nov 1 00:27:32.243508 systemd-logind[1577]: Removed session 10. Nov 1 00:27:32.256447 containerd[1592]: time="2025-11-01T00:27:32.256120810Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:27:32.256447 containerd[1592]: time="2025-11-01T00:27:32.256183260Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:27:32.256447 containerd[1592]: time="2025-11-01T00:27:32.256195634Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:27:32.256447 containerd[1592]: time="2025-11-01T00:27:32.256303372Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:27:32.285111 containerd[1592]: 2025-11-01 00:27:30.478 [INFO][5148] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--kjzn2-eth0 coredns-668d6bf9bc- kube-system e3bbd1b7-2cec-41ab-97aa-54499c93466d 1168 0 2025-11-01 00:26:45 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-kjzn2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6b06f2037f4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c8b2f34ea7e91dda7073a8207171948fd852b4934dcb7ea293f66ad5327ad085" Namespace="kube-system" Pod="coredns-668d6bf9bc-kjzn2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--kjzn2-" Nov 1 00:27:32.285111 containerd[1592]: 2025-11-01 00:27:30.479 [INFO][5148] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c8b2f34ea7e91dda7073a8207171948fd852b4934dcb7ea293f66ad5327ad085" Namespace="kube-system" Pod="coredns-668d6bf9bc-kjzn2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--kjzn2-eth0" Nov 1 00:27:32.285111 containerd[1592]: 2025-11-01 00:27:30.524 [INFO][5176] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c8b2f34ea7e91dda7073a8207171948fd852b4934dcb7ea293f66ad5327ad085" HandleID="k8s-pod-network.c8b2f34ea7e91dda7073a8207171948fd852b4934dcb7ea293f66ad5327ad085" Workload="localhost-k8s-coredns--668d6bf9bc--kjzn2-eth0" Nov 1 00:27:32.285111 containerd[1592]: 2025-11-01 00:27:30.524 [INFO][5176] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c8b2f34ea7e91dda7073a8207171948fd852b4934dcb7ea293f66ad5327ad085" HandleID="k8s-pod-network.c8b2f34ea7e91dda7073a8207171948fd852b4934dcb7ea293f66ad5327ad085" Workload="localhost-k8s-coredns--668d6bf9bc--kjzn2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f6d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-kjzn2", "timestamp":"2025-11-01 00:27:30.524401257 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:27:32.285111 containerd[1592]: 2025-11-01 00:27:30.524 [INFO][5176] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:27:32.285111 containerd[1592]: 2025-11-01 00:27:31.746 [INFO][5176] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:27:32.285111 containerd[1592]: 2025-11-01 00:27:31.746 [INFO][5176] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 00:27:32.285111 containerd[1592]: 2025-11-01 00:27:32.150 [INFO][5176] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c8b2f34ea7e91dda7073a8207171948fd852b4934dcb7ea293f66ad5327ad085" host="localhost" Nov 1 00:27:32.285111 containerd[1592]: 2025-11-01 00:27:32.158 [INFO][5176] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 00:27:32.285111 containerd[1592]: 2025-11-01 00:27:32.162 [INFO][5176] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 00:27:32.285111 containerd[1592]: 2025-11-01 00:27:32.164 [INFO][5176] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 00:27:32.285111 containerd[1592]: 2025-11-01 00:27:32.167 [INFO][5176] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 00:27:32.285111 containerd[1592]: 2025-11-01 00:27:32.167 [INFO][5176] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c8b2f34ea7e91dda7073a8207171948fd852b4934dcb7ea293f66ad5327ad085" host="localhost" Nov 1 00:27:32.285111 containerd[1592]: 2025-11-01 00:27:32.169 [INFO][5176] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c8b2f34ea7e91dda7073a8207171948fd852b4934dcb7ea293f66ad5327ad085 Nov 1 00:27:32.285111 containerd[1592]: 2025-11-01 00:27:32.196 [INFO][5176] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c8b2f34ea7e91dda7073a8207171948fd852b4934dcb7ea293f66ad5327ad085" host="localhost" Nov 1 00:27:32.285111 containerd[1592]: 2025-11-01 00:27:32.213 [INFO][5176] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 handle="k8s-pod-network.c8b2f34ea7e91dda7073a8207171948fd852b4934dcb7ea293f66ad5327ad085" host="localhost" Nov 1 00:27:32.285111 containerd[1592]: 2025-11-01 00:27:32.213 [INFO][5176] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.c8b2f34ea7e91dda7073a8207171948fd852b4934dcb7ea293f66ad5327ad085" host="localhost" Nov 1 00:27:32.285111 containerd[1592]: 2025-11-01 00:27:32.214 [INFO][5176] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 00:27:32.285111 containerd[1592]: 2025-11-01 00:27:32.214 [INFO][5176] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="c8b2f34ea7e91dda7073a8207171948fd852b4934dcb7ea293f66ad5327ad085" HandleID="k8s-pod-network.c8b2f34ea7e91dda7073a8207171948fd852b4934dcb7ea293f66ad5327ad085" Workload="localhost-k8s-coredns--668d6bf9bc--kjzn2-eth0" Nov 1 00:27:32.285791 containerd[1592]: 2025-11-01 00:27:32.219 [INFO][5148] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c8b2f34ea7e91dda7073a8207171948fd852b4934dcb7ea293f66ad5327ad085" Namespace="kube-system" Pod="coredns-668d6bf9bc-kjzn2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--kjzn2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--kjzn2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e3bbd1b7-2cec-41ab-97aa-54499c93466d", ResourceVersion:"1168", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 26, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-kjzn2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6b06f2037f4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:27:32.285791 containerd[1592]: 2025-11-01 00:27:32.219 [INFO][5148] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="c8b2f34ea7e91dda7073a8207171948fd852b4934dcb7ea293f66ad5327ad085" Namespace="kube-system" Pod="coredns-668d6bf9bc-kjzn2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--kjzn2-eth0" Nov 1 00:27:32.285791 containerd[1592]: 2025-11-01 00:27:32.220 [INFO][5148] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6b06f2037f4 ContainerID="c8b2f34ea7e91dda7073a8207171948fd852b4934dcb7ea293f66ad5327ad085" Namespace="kube-system" Pod="coredns-668d6bf9bc-kjzn2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--kjzn2-eth0" Nov 1 00:27:32.285791 containerd[1592]: 2025-11-01 00:27:32.223 [INFO][5148] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c8b2f34ea7e91dda7073a8207171948fd852b4934dcb7ea293f66ad5327ad085" Namespace="kube-system" Pod="coredns-668d6bf9bc-kjzn2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--kjzn2-eth0" Nov 1 00:27:32.285791 
containerd[1592]: 2025-11-01 00:27:32.224 [INFO][5148] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c8b2f34ea7e91dda7073a8207171948fd852b4934dcb7ea293f66ad5327ad085" Namespace="kube-system" Pod="coredns-668d6bf9bc-kjzn2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--kjzn2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--kjzn2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e3bbd1b7-2cec-41ab-97aa-54499c93466d", ResourceVersion:"1168", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 26, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c8b2f34ea7e91dda7073a8207171948fd852b4934dcb7ea293f66ad5327ad085", Pod:"coredns-668d6bf9bc-kjzn2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6b06f2037f4", MAC:"1a:fa:cc:9b:10:36", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:27:32.285791 containerd[1592]: 2025-11-01 00:27:32.252 [INFO][5148] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c8b2f34ea7e91dda7073a8207171948fd852b4934dcb7ea293f66ad5327ad085" Namespace="kube-system" Pod="coredns-668d6bf9bc-kjzn2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--kjzn2-eth0" Nov 1 00:27:32.294751 systemd-resolved[1476]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 00:27:32.323675 containerd[1592]: time="2025-11-01T00:27:32.323523896Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:27:32.323675 containerd[1592]: time="2025-11-01T00:27:32.323627396Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:27:32.324585 containerd[1592]: time="2025-11-01T00:27:32.323642656Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:27:32.324585 containerd[1592]: time="2025-11-01T00:27:32.323936784Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:27:32.360615 containerd[1592]: time="2025-11-01T00:27:32.360564340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65fc45bf6-gxk8k,Uid:8ee5c64c-a2fc-4dbd-b9a2-77e7c3c0f3b8,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"1bef3add2227a95f0c2a89ebd9b6be61cef857af448059408e3fe82f4db74d3c\"" Nov 1 00:27:32.363761 containerd[1592]: time="2025-11-01T00:27:32.363497681Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:27:32.379996 systemd-resolved[1476]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 00:27:32.417507 containerd[1592]: time="2025-11-01T00:27:32.417208525Z" level=info msg="StartContainer for \"1df3ab03838a74d188f9d712f790637f957835b1f41ab0d9fe83670eb6a36816\" returns successfully" Nov 1 00:27:32.419085 containerd[1592]: time="2025-11-01T00:27:32.419005078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kjzn2,Uid:e3bbd1b7-2cec-41ab-97aa-54499c93466d,Namespace:kube-system,Attempt:1,} returns sandbox id \"c8b2f34ea7e91dda7073a8207171948fd852b4934dcb7ea293f66ad5327ad085\"" Nov 1 00:27:32.420064 kubelet[2713]: E1101 00:27:32.419805 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:27:32.422235 containerd[1592]: time="2025-11-01T00:27:32.422204505Z" level=info msg="CreateContainer within sandbox \"c8b2f34ea7e91dda7073a8207171948fd852b4934dcb7ea293f66ad5327ad085\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 00:27:32.453382 containerd[1592]: time="2025-11-01T00:27:32.453297088Z" level=info msg="CreateContainer within sandbox \"c8b2f34ea7e91dda7073a8207171948fd852b4934dcb7ea293f66ad5327ad085\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"71c0f5ee91ad02dec840969f7608785f8e68e93b7fccea6353aa4b427b22996e\"" Nov 1 00:27:32.454112 containerd[1592]: time="2025-11-01T00:27:32.454078078Z" level=info msg="StartContainer for \"71c0f5ee91ad02dec840969f7608785f8e68e93b7fccea6353aa4b427b22996e\"" Nov 1 00:27:32.696475 containerd[1592]: time="2025-11-01T00:27:32.696412775Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:27:32.843606 containerd[1592]: time="2025-11-01T00:27:32.843541682Z" level=info msg="StartContainer for \"71c0f5ee91ad02dec840969f7608785f8e68e93b7fccea6353aa4b427b22996e\" returns successfully" Nov 1 00:27:32.932974 containerd[1592]: time="2025-11-01T00:27:32.932886732Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:27:32.932974 containerd[1592]: time="2025-11-01T00:27:32.932909024Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:27:32.933310 kubelet[2713]: E1101 00:27:32.933247 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:27:32.933385 kubelet[2713]: E1101 00:27:32.933320 2713 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:27:32.933637 kubelet[2713]: E1101 00:27:32.933486 2713 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fw949,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-65fc45bf6-gxk8k_calico-apiserver(8ee5c64c-a2fc-4dbd-b9a2-77e7c3c0f3b8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:27:32.934763 kubelet[2713]: E1101 00:27:32.934694 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65fc45bf6-gxk8k" podUID="8ee5c64c-a2fc-4dbd-b9a2-77e7c3c0f3b8" Nov 1 00:27:33.090707 
systemd-networkd[1249]: cali89c2e5335c6: Gained IPv6LL Nov 1 00:27:33.093722 kubelet[2713]: E1101 00:27:33.093672 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65fc45bf6-gxk8k" podUID="8ee5c64c-a2fc-4dbd-b9a2-77e7c3c0f3b8" Nov 1 00:27:33.094860 kubelet[2713]: E1101 00:27:33.094842 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:27:33.096653 kubelet[2713]: E1101 00:27:33.096637 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:27:33.620530 kubelet[2713]: I1101 00:27:33.620418 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-mfl96" podStartSLOduration=48.620395748 podStartE2EDuration="48.620395748s" podCreationTimestamp="2025-11-01 00:26:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:27:33.618617864 +0000 UTC m=+52.808338383" watchObservedRunningTime="2025-11-01 00:27:33.620395748 +0000 UTC m=+52.810116258" Nov 1 00:27:33.642469 kubelet[2713]: I1101 00:27:33.642373 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-kjzn2" podStartSLOduration=48.642348738 podStartE2EDuration="48.642348738s" podCreationTimestamp="2025-11-01 00:26:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:27:33.641104243 +0000 UTC m=+52.830824753" watchObservedRunningTime="2025-11-01 00:27:33.642348738 +0000 UTC m=+52.832069257" Nov 1 00:27:33.666251 systemd-networkd[1249]: cali6b06f2037f4: Gained IPv6LL Nov 1 00:27:34.100547 kubelet[2713]: E1101 00:27:34.100496 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:27:34.101070 kubelet[2713]: E1101 00:27:34.101000 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65fc45bf6-gxk8k" podUID="8ee5c64c-a2fc-4dbd-b9a2-77e7c3c0f3b8" Nov 1 00:27:34.101070 kubelet[2713]: E1101 00:27:34.101046 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:27:35.101900 kubelet[2713]: E1101 00:27:35.101863 2713 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:27:35.102432 kubelet[2713]: E1101 00:27:35.101940 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:27:35.897339 containerd[1592]: time="2025-11-01T00:27:35.897091806Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:27:36.034303 systemd-resolved[1476]: Under memory pressure, flushing caches. Nov 1 00:27:36.034329 systemd-resolved[1476]: Flushed all caches. Nov 1 00:27:36.037065 systemd-journald[1151]: Under memory pressure, flushing caches. Nov 1 00:27:36.103356 kubelet[2713]: E1101 00:27:36.103320 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:27:36.344182 containerd[1592]: time="2025-11-01T00:27:36.343995371Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:27:36.465981 containerd[1592]: time="2025-11-01T00:27:36.465888367Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:27:36.465981 containerd[1592]: time="2025-11-01T00:27:36.465947842Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 00:27:36.466326 kubelet[2713]: E1101 00:27:36.466250 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:27:36.466389 kubelet[2713]: E1101 00:27:36.466323 2713 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:27:36.466587 kubelet[2713]: E1101 00:27:36.466537 2713 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:2e2a5e81cfcc4cb2aa655a270487e254,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rghb5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-67d8fdc769-sqbcf_calico-system(38e58461-b45b-46ed-b68d-38eb9fdd6911): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:27:36.469182 containerd[1592]: time="2025-11-01T00:27:36.468782264Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:27:36.881417 containerd[1592]: time="2025-11-01T00:27:36.881334673Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:27:36.900780 containerd[1592]: time="2025-11-01T00:27:36.900709454Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:27:36.901384 containerd[1592]: time="2025-11-01T00:27:36.900769920Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 00:27:36.901422 kubelet[2713]: E1101 00:27:36.900967 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:27:36.901422 kubelet[2713]: E1101 00:27:36.901041 2713 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: 
not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:27:36.901422 kubelet[2713]: E1101 00:27:36.901180 2713 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rghb5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-67d8fdc769-sqbcf_calico-system(38e58461-b45b-46ed-b68d-38eb9fdd6911): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:27:36.902663 kubelet[2713]: E1101 00:27:36.902405 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-67d8fdc769-sqbcf" podUID="38e58461-b45b-46ed-b68d-38eb9fdd6911" Nov 1 00:27:37.227591 systemd[1]: Started sshd@10-10.0.0.119:22-10.0.0.1:40598.service - OpenSSH per-connection server daemon (10.0.0.1:40598). 
Nov 1 00:27:37.278269 sshd[5456]: Accepted publickey for core from 10.0.0.1 port 40598 ssh2: RSA SHA256:jhdmrhZXRbwOcBbBZQf03C3Vb9eSOfvUjDz7MOKA8jE Nov 1 00:27:37.280980 sshd[5456]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:27:37.287281 systemd-logind[1577]: New session 11 of user core. Nov 1 00:27:37.294420 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 1 00:27:37.434295 sshd[5456]: pam_unix(sshd:session): session closed for user core Nov 1 00:27:37.448322 systemd[1]: Started sshd@11-10.0.0.119:22-10.0.0.1:40604.service - OpenSSH per-connection server daemon (10.0.0.1:40604). Nov 1 00:27:37.449130 systemd[1]: sshd@10-10.0.0.119:22-10.0.0.1:40598.service: Deactivated successfully. Nov 1 00:27:37.452214 systemd[1]: session-11.scope: Deactivated successfully. Nov 1 00:27:37.454054 systemd-logind[1577]: Session 11 logged out. Waiting for processes to exit. Nov 1 00:27:37.455842 systemd-logind[1577]: Removed session 11. Nov 1 00:27:37.476840 sshd[5473]: Accepted publickey for core from 10.0.0.1 port 40604 ssh2: RSA SHA256:jhdmrhZXRbwOcBbBZQf03C3Vb9eSOfvUjDz7MOKA8jE Nov 1 00:27:37.478484 sshd[5473]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:27:37.483108 systemd-logind[1577]: New session 12 of user core. Nov 1 00:27:37.488396 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 1 00:27:37.660619 sshd[5473]: pam_unix(sshd:session): session closed for user core Nov 1 00:27:37.671341 systemd[1]: Started sshd@12-10.0.0.119:22-10.0.0.1:40612.service - OpenSSH per-connection server daemon (10.0.0.1:40612). Nov 1 00:27:37.675279 systemd[1]: sshd@11-10.0.0.119:22-10.0.0.1:40604.service: Deactivated successfully. Nov 1 00:27:37.688551 systemd[1]: session-12.scope: Deactivated successfully. Nov 1 00:27:37.694613 systemd-logind[1577]: Session 12 logged out. Waiting for processes to exit. Nov 1 00:27:37.698218 systemd-logind[1577]: Removed session 12. Nov 1 00:27:37.717557 sshd[5485]: Accepted publickey for core from 10.0.0.1 port 40612 ssh2: RSA SHA256:jhdmrhZXRbwOcBbBZQf03C3Vb9eSOfvUjDz7MOKA8jE Nov 1 00:27:37.719718 sshd[5485]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:27:37.727320 systemd-logind[1577]: New session 13 of user core. Nov 1 00:27:37.738344 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 1 00:27:37.870212 sshd[5485]: pam_unix(sshd:session): session closed for user core Nov 1 00:27:37.875014 systemd[1]: sshd@12-10.0.0.119:22-10.0.0.1:40612.service: Deactivated successfully. Nov 1 00:27:37.877924 systemd[1]: session-13.scope: Deactivated successfully. Nov 1 00:27:37.878755 systemd-logind[1577]: Session 13 logged out. Waiting for processes to exit. Nov 1 00:27:37.879741 systemd-logind[1577]: Removed session 13. Nov 1 00:27:38.082473 systemd-resolved[1476]: Under memory pressure, flushing caches. Nov 1 00:27:38.082492 systemd-resolved[1476]: Flushed all caches. Nov 1 00:27:38.085055 systemd-journald[1151]: Under memory pressure, flushing caches. 
Nov 1 00:27:38.897476 containerd[1592]: time="2025-11-01T00:27:38.897200608Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:27:39.212069 containerd[1592]: time="2025-11-01T00:27:39.211806938Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:27:39.213991 containerd[1592]: time="2025-11-01T00:27:39.213235507Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:27:39.213991 containerd[1592]: time="2025-11-01T00:27:39.213288989Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:27:39.214241 kubelet[2713]: E1101 00:27:39.213623 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:27:39.214241 kubelet[2713]: E1101 00:27:39.213699 2713 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:27:39.214241 kubelet[2713]: E1101 00:27:39.213868 2713 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xnnng,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5fc976b46c-8vsw4_calico-apiserver(4d8b0a34-66bd-4c22-a438-b5e5354489a4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:27:39.215548 kubelet[2713]: E1101 00:27:39.215480 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5fc976b46c-8vsw4" podUID="4d8b0a34-66bd-4c22-a438-b5e5354489a4" Nov 1 00:27:39.896571 containerd[1592]: time="2025-11-01T00:27:39.896509336Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:27:40.387052 containerd[1592]: time="2025-11-01T00:27:40.386820020Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:27:40.388492 containerd[1592]: time="2025-11-01T00:27:40.388455353Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:27:40.388579 containerd[1592]: time="2025-11-01T00:27:40.388533544Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 00:27:40.388758 kubelet[2713]: E1101 00:27:40.388704 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:27:40.389172 kubelet[2713]: E1101 00:27:40.388775 2713 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:27:40.389172 kubelet[2713]: E1101 00:27:40.388915 2713 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pgnb4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-l9nqw_calico-system(675112ea-20ac-4b20-b92c-b74dc58b95cd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:27:40.391402 containerd[1592]: time="2025-11-01T00:27:40.391367881Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:27:40.706217 containerd[1592]: time="2025-11-01T00:27:40.706150071Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:27:40.756933 containerd[1592]: time="2025-11-01T00:27:40.756843473Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:27:40.757146 containerd[1592]: time="2025-11-01T00:27:40.756908498Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 00:27:40.757268 kubelet[2713]: E1101 00:27:40.757194 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:27:40.757322 kubelet[2713]: E1101 00:27:40.757272 2713 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:27:40.757448 kubelet[2713]: E1101 00:27:40.757398 2713 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pgnb4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-l9nqw_calico-system(675112ea-20ac-4b20-b92c-b74dc58b95cd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:27:40.758708 kubelet[2713]: E1101 00:27:40.758640 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-l9nqw" podUID="675112ea-20ac-4b20-b92c-b74dc58b95cd" Nov 1 00:27:40.882641 containerd[1592]: time="2025-11-01T00:27:40.882594292Z" level=info msg="StopPodSandbox for \"90b05b84959d1bb88da14ece3bf4d05bee584a6eb80a5eeaf04cc111e14d40a7\"" Nov 1 00:27:40.897510 containerd[1592]: time="2025-11-01T00:27:40.897437120Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:27:41.191097 containerd[1592]: 2025-11-01 00:27:40.990 [WARNING][5514] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="90b05b84959d1bb88da14ece3bf4d05bee584a6eb80a5eeaf04cc111e14d40a7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--ksf9n-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"41180a49-a14f-492f-9746-dfd093b11440", ResourceVersion:"1154", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 26, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8cdfd6202cd48b0bf107d294eff13bdf5265714e285d2077ae693838da1d6681", Pod:"goldmane-666569f655-ksf9n", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie446139c188", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:27:41.191097 containerd[1592]: 2025-11-01 00:27:40.991 [INFO][5514] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="90b05b84959d1bb88da14ece3bf4d05bee584a6eb80a5eeaf04cc111e14d40a7" Nov 1 00:27:41.191097 containerd[1592]: 2025-11-01 00:27:40.991 [INFO][5514] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="90b05b84959d1bb88da14ece3bf4d05bee584a6eb80a5eeaf04cc111e14d40a7" iface="eth0" netns="" Nov 1 00:27:41.191097 containerd[1592]: 2025-11-01 00:27:40.991 [INFO][5514] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="90b05b84959d1bb88da14ece3bf4d05bee584a6eb80a5eeaf04cc111e14d40a7" Nov 1 00:27:41.191097 containerd[1592]: 2025-11-01 00:27:40.991 [INFO][5514] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="90b05b84959d1bb88da14ece3bf4d05bee584a6eb80a5eeaf04cc111e14d40a7" Nov 1 00:27:41.191097 containerd[1592]: 2025-11-01 00:27:41.018 [INFO][5525] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="90b05b84959d1bb88da14ece3bf4d05bee584a6eb80a5eeaf04cc111e14d40a7" HandleID="k8s-pod-network.90b05b84959d1bb88da14ece3bf4d05bee584a6eb80a5eeaf04cc111e14d40a7" Workload="localhost-k8s-goldmane--666569f655--ksf9n-eth0" Nov 1 00:27:41.191097 containerd[1592]: 2025-11-01 00:27:41.019 [INFO][5525] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:27:41.191097 containerd[1592]: 2025-11-01 00:27:41.019 [INFO][5525] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:27:41.191097 containerd[1592]: 2025-11-01 00:27:41.182 [WARNING][5525] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="90b05b84959d1bb88da14ece3bf4d05bee584a6eb80a5eeaf04cc111e14d40a7" HandleID="k8s-pod-network.90b05b84959d1bb88da14ece3bf4d05bee584a6eb80a5eeaf04cc111e14d40a7" Workload="localhost-k8s-goldmane--666569f655--ksf9n-eth0" Nov 1 00:27:41.191097 containerd[1592]: 2025-11-01 00:27:41.182 [INFO][5525] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="90b05b84959d1bb88da14ece3bf4d05bee584a6eb80a5eeaf04cc111e14d40a7" HandleID="k8s-pod-network.90b05b84959d1bb88da14ece3bf4d05bee584a6eb80a5eeaf04cc111e14d40a7" Workload="localhost-k8s-goldmane--666569f655--ksf9n-eth0" Nov 1 00:27:41.191097 containerd[1592]: 2025-11-01 00:27:41.184 [INFO][5525] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:27:41.191097 containerd[1592]: 2025-11-01 00:27:41.187 [INFO][5514] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="90b05b84959d1bb88da14ece3bf4d05bee584a6eb80a5eeaf04cc111e14d40a7" Nov 1 00:27:41.191602 containerd[1592]: time="2025-11-01T00:27:41.191136125Z" level=info msg="TearDown network for sandbox \"90b05b84959d1bb88da14ece3bf4d05bee584a6eb80a5eeaf04cc111e14d40a7\" successfully" Nov 1 00:27:41.191602 containerd[1592]: time="2025-11-01T00:27:41.191166853Z" level=info msg="StopPodSandbox for \"90b05b84959d1bb88da14ece3bf4d05bee584a6eb80a5eeaf04cc111e14d40a7\" returns successfully" Nov 1 00:27:41.191875 containerd[1592]: time="2025-11-01T00:27:41.191831541Z" level=info msg="RemovePodSandbox for \"90b05b84959d1bb88da14ece3bf4d05bee584a6eb80a5eeaf04cc111e14d40a7\"" Nov 1 00:27:41.194102 containerd[1592]: time="2025-11-01T00:27:41.194062536Z" level=info msg="Forcibly stopping sandbox \"90b05b84959d1bb88da14ece3bf4d05bee584a6eb80a5eeaf04cc111e14d40a7\"" Nov 1 00:27:41.279165 containerd[1592]: 2025-11-01 00:27:41.232 [WARNING][5542] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="90b05b84959d1bb88da14ece3bf4d05bee584a6eb80a5eeaf04cc111e14d40a7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--ksf9n-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"41180a49-a14f-492f-9746-dfd093b11440", ResourceVersion:"1154", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 26, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8cdfd6202cd48b0bf107d294eff13bdf5265714e285d2077ae693838da1d6681", Pod:"goldmane-666569f655-ksf9n", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie446139c188", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:27:41.279165 containerd[1592]: 2025-11-01 00:27:41.232 [INFO][5542] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="90b05b84959d1bb88da14ece3bf4d05bee584a6eb80a5eeaf04cc111e14d40a7" Nov 1 00:27:41.279165 containerd[1592]: 2025-11-01 00:27:41.232 [INFO][5542] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="90b05b84959d1bb88da14ece3bf4d05bee584a6eb80a5eeaf04cc111e14d40a7" iface="eth0" netns="" Nov 1 00:27:41.279165 containerd[1592]: 2025-11-01 00:27:41.232 [INFO][5542] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="90b05b84959d1bb88da14ece3bf4d05bee584a6eb80a5eeaf04cc111e14d40a7" Nov 1 00:27:41.279165 containerd[1592]: 2025-11-01 00:27:41.232 [INFO][5542] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="90b05b84959d1bb88da14ece3bf4d05bee584a6eb80a5eeaf04cc111e14d40a7" Nov 1 00:27:41.279165 containerd[1592]: 2025-11-01 00:27:41.264 [INFO][5550] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="90b05b84959d1bb88da14ece3bf4d05bee584a6eb80a5eeaf04cc111e14d40a7" HandleID="k8s-pod-network.90b05b84959d1bb88da14ece3bf4d05bee584a6eb80a5eeaf04cc111e14d40a7" Workload="localhost-k8s-goldmane--666569f655--ksf9n-eth0" Nov 1 00:27:41.279165 containerd[1592]: 2025-11-01 00:27:41.264 [INFO][5550] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:27:41.279165 containerd[1592]: 2025-11-01 00:27:41.264 [INFO][5550] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:27:41.279165 containerd[1592]: 2025-11-01 00:27:41.270 [WARNING][5550] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="90b05b84959d1bb88da14ece3bf4d05bee584a6eb80a5eeaf04cc111e14d40a7" HandleID="k8s-pod-network.90b05b84959d1bb88da14ece3bf4d05bee584a6eb80a5eeaf04cc111e14d40a7" Workload="localhost-k8s-goldmane--666569f655--ksf9n-eth0" Nov 1 00:27:41.279165 containerd[1592]: 2025-11-01 00:27:41.270 [INFO][5550] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="90b05b84959d1bb88da14ece3bf4d05bee584a6eb80a5eeaf04cc111e14d40a7" HandleID="k8s-pod-network.90b05b84959d1bb88da14ece3bf4d05bee584a6eb80a5eeaf04cc111e14d40a7" Workload="localhost-k8s-goldmane--666569f655--ksf9n-eth0" Nov 1 00:27:41.279165 containerd[1592]: 2025-11-01 00:27:41.272 [INFO][5550] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:27:41.279165 containerd[1592]: 2025-11-01 00:27:41.275 [INFO][5542] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="90b05b84959d1bb88da14ece3bf4d05bee584a6eb80a5eeaf04cc111e14d40a7" Nov 1 00:27:41.279744 containerd[1592]: time="2025-11-01T00:27:41.279217234Z" level=info msg="TearDown network for sandbox \"90b05b84959d1bb88da14ece3bf4d05bee584a6eb80a5eeaf04cc111e14d40a7\" successfully" Nov 1 00:27:41.485312 containerd[1592]: time="2025-11-01T00:27:41.485013595Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"90b05b84959d1bb88da14ece3bf4d05bee584a6eb80a5eeaf04cc111e14d40a7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 00:27:41.485312 containerd[1592]: time="2025-11-01T00:27:41.485155847Z" level=info msg="RemovePodSandbox \"90b05b84959d1bb88da14ece3bf4d05bee584a6eb80a5eeaf04cc111e14d40a7\" returns successfully" Nov 1 00:27:41.485819 containerd[1592]: time="2025-11-01T00:27:41.485778835Z" level=info msg="StopPodSandbox for \"d05aa82f8e40dceef812ba91b215ecea033e6b477020c6a1f3d697d586c713cb\"" Nov 1 00:27:41.500967 containerd[1592]: time="2025-11-01T00:27:41.500898976Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:27:41.560310 containerd[1592]: 2025-11-01 00:27:41.524 [WARNING][5567] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d05aa82f8e40dceef812ba91b215ecea033e6b477020c6a1f3d697d586c713cb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5fc976b46c--8vsw4-eth0", GenerateName:"calico-apiserver-5fc976b46c-", Namespace:"calico-apiserver", SelfLink:"", UID:"4d8b0a34-66bd-4c22-a438-b5e5354489a4", ResourceVersion:"1113", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 26, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fc976b46c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0743443f0875044b68dbb69c9f46d74c0af46d7d2030f2f49014ba84979f9b54", Pod:"calico-apiserver-5fc976b46c-8vsw4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali029ae4fc21b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:27:41.560310 containerd[1592]: 2025-11-01 00:27:41.524 [INFO][5567] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d05aa82f8e40dceef812ba91b215ecea033e6b477020c6a1f3d697d586c713cb" Nov 1 00:27:41.560310 containerd[1592]: 2025-11-01 00:27:41.524 [INFO][5567] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d05aa82f8e40dceef812ba91b215ecea033e6b477020c6a1f3d697d586c713cb" iface="eth0" netns="" Nov 1 00:27:41.560310 containerd[1592]: 2025-11-01 00:27:41.524 [INFO][5567] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d05aa82f8e40dceef812ba91b215ecea033e6b477020c6a1f3d697d586c713cb" Nov 1 00:27:41.560310 containerd[1592]: 2025-11-01 00:27:41.524 [INFO][5567] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d05aa82f8e40dceef812ba91b215ecea033e6b477020c6a1f3d697d586c713cb" Nov 1 00:27:41.560310 containerd[1592]: 2025-11-01 00:27:41.547 [INFO][5575] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d05aa82f8e40dceef812ba91b215ecea033e6b477020c6a1f3d697d586c713cb" HandleID="k8s-pod-network.d05aa82f8e40dceef812ba91b215ecea033e6b477020c6a1f3d697d586c713cb" Workload="localhost-k8s-calico--apiserver--5fc976b46c--8vsw4-eth0" Nov 1 00:27:41.560310 containerd[1592]: 2025-11-01 00:27:41.548 [INFO][5575] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:27:41.560310 containerd[1592]: 2025-11-01 00:27:41.548 [INFO][5575] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:27:41.560310 containerd[1592]: 2025-11-01 00:27:41.553 [WARNING][5575] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d05aa82f8e40dceef812ba91b215ecea033e6b477020c6a1f3d697d586c713cb" HandleID="k8s-pod-network.d05aa82f8e40dceef812ba91b215ecea033e6b477020c6a1f3d697d586c713cb" Workload="localhost-k8s-calico--apiserver--5fc976b46c--8vsw4-eth0" Nov 1 00:27:41.560310 containerd[1592]: 2025-11-01 00:27:41.553 [INFO][5575] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d05aa82f8e40dceef812ba91b215ecea033e6b477020c6a1f3d697d586c713cb" HandleID="k8s-pod-network.d05aa82f8e40dceef812ba91b215ecea033e6b477020c6a1f3d697d586c713cb" Workload="localhost-k8s-calico--apiserver--5fc976b46c--8vsw4-eth0" Nov 1 00:27:41.560310 containerd[1592]: 2025-11-01 00:27:41.554 [INFO][5575] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:27:41.560310 containerd[1592]: 2025-11-01 00:27:41.556 [INFO][5567] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d05aa82f8e40dceef812ba91b215ecea033e6b477020c6a1f3d697d586c713cb" Nov 1 00:27:41.560970 containerd[1592]: time="2025-11-01T00:27:41.560346108Z" level=info msg="TearDown network for sandbox \"d05aa82f8e40dceef812ba91b215ecea033e6b477020c6a1f3d697d586c713cb\" successfully" Nov 1 00:27:41.560970 containerd[1592]: time="2025-11-01T00:27:41.560374002Z" level=info msg="StopPodSandbox for \"d05aa82f8e40dceef812ba91b215ecea033e6b477020c6a1f3d697d586c713cb\" returns successfully" Nov 1 00:27:41.560970 containerd[1592]: time="2025-11-01T00:27:41.560882779Z" level=info msg="RemovePodSandbox for \"d05aa82f8e40dceef812ba91b215ecea033e6b477020c6a1f3d697d586c713cb\"" Nov 1 00:27:41.560970 containerd[1592]: time="2025-11-01T00:27:41.560920111Z" level=info msg="Forcibly stopping sandbox \"d05aa82f8e40dceef812ba91b215ecea033e6b477020c6a1f3d697d586c713cb\"" Nov 1 00:27:41.562304 containerd[1592]: time="2025-11-01T00:27:41.562257079Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:27:41.562716 containerd[1592]: time="2025-11-01T00:27:41.562311965Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:27:41.562766 kubelet[2713]: E1101 00:27:41.562430 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:27:41.562766 kubelet[2713]: E1101 00:27:41.562485 2713 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:27:41.562766 kubelet[2713]: E1101 00:27:41.562653 2713 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xvlq9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-65fc45bf6-dz8ms_calico-apiserver(a7329f0c-4569-4192-ab57-1ba0d9bc5c3f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:27:41.563869 kubelet[2713]: E1101 00:27:41.563828 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65fc45bf6-dz8ms" podUID="a7329f0c-4569-4192-ab57-1ba0d9bc5c3f" Nov 1 00:27:41.852225 containerd[1592]: 2025-11-01 00:27:41.810 [WARNING][5593] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d05aa82f8e40dceef812ba91b215ecea033e6b477020c6a1f3d697d586c713cb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5fc976b46c--8vsw4-eth0", GenerateName:"calico-apiserver-5fc976b46c-", Namespace:"calico-apiserver", SelfLink:"", UID:"4d8b0a34-66bd-4c22-a438-b5e5354489a4", ResourceVersion:"1113", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 26, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fc976b46c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0743443f0875044b68dbb69c9f46d74c0af46d7d2030f2f49014ba84979f9b54", Pod:"calico-apiserver-5fc976b46c-8vsw4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali029ae4fc21b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:27:41.852225 containerd[1592]: 2025-11-01 00:27:41.811 [INFO][5593] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d05aa82f8e40dceef812ba91b215ecea033e6b477020c6a1f3d697d586c713cb" Nov 1 00:27:41.852225 containerd[1592]: 2025-11-01 00:27:41.811 [INFO][5593] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d05aa82f8e40dceef812ba91b215ecea033e6b477020c6a1f3d697d586c713cb" iface="eth0" netns="" Nov 1 00:27:41.852225 containerd[1592]: 2025-11-01 00:27:41.811 [INFO][5593] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d05aa82f8e40dceef812ba91b215ecea033e6b477020c6a1f3d697d586c713cb" Nov 1 00:27:41.852225 containerd[1592]: 2025-11-01 00:27:41.811 [INFO][5593] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d05aa82f8e40dceef812ba91b215ecea033e6b477020c6a1f3d697d586c713cb" Nov 1 00:27:41.852225 containerd[1592]: 2025-11-01 00:27:41.831 [INFO][5602] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d05aa82f8e40dceef812ba91b215ecea033e6b477020c6a1f3d697d586c713cb" HandleID="k8s-pod-network.d05aa82f8e40dceef812ba91b215ecea033e6b477020c6a1f3d697d586c713cb" Workload="localhost-k8s-calico--apiserver--5fc976b46c--8vsw4-eth0" Nov 1 00:27:41.852225 containerd[1592]: 2025-11-01 00:27:41.832 [INFO][5602] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:27:41.852225 containerd[1592]: 2025-11-01 00:27:41.832 [INFO][5602] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:27:41.852225 containerd[1592]: 2025-11-01 00:27:41.844 [WARNING][5602] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d05aa82f8e40dceef812ba91b215ecea033e6b477020c6a1f3d697d586c713cb" HandleID="k8s-pod-network.d05aa82f8e40dceef812ba91b215ecea033e6b477020c6a1f3d697d586c713cb" Workload="localhost-k8s-calico--apiserver--5fc976b46c--8vsw4-eth0" Nov 1 00:27:41.852225 containerd[1592]: 2025-11-01 00:27:41.844 [INFO][5602] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d05aa82f8e40dceef812ba91b215ecea033e6b477020c6a1f3d697d586c713cb" HandleID="k8s-pod-network.d05aa82f8e40dceef812ba91b215ecea033e6b477020c6a1f3d697d586c713cb" Workload="localhost-k8s-calico--apiserver--5fc976b46c--8vsw4-eth0" Nov 1 00:27:41.852225 containerd[1592]: 2025-11-01 00:27:41.846 [INFO][5602] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:27:41.852225 containerd[1592]: 2025-11-01 00:27:41.849 [INFO][5593] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d05aa82f8e40dceef812ba91b215ecea033e6b477020c6a1f3d697d586c713cb" Nov 1 00:27:41.852225 containerd[1592]: time="2025-11-01T00:27:41.852157891Z" level=info msg="TearDown network for sandbox \"d05aa82f8e40dceef812ba91b215ecea033e6b477020c6a1f3d697d586c713cb\" successfully" Nov 1 00:27:41.995846 containerd[1592]: time="2025-11-01T00:27:41.995751255Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d05aa82f8e40dceef812ba91b215ecea033e6b477020c6a1f3d697d586c713cb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 00:27:41.995846 containerd[1592]: time="2025-11-01T00:27:41.995856748Z" level=info msg="RemovePodSandbox \"d05aa82f8e40dceef812ba91b215ecea033e6b477020c6a1f3d697d586c713cb\" returns successfully" Nov 1 00:27:41.996575 containerd[1592]: time="2025-11-01T00:27:41.996538707Z" level=info msg="StopPodSandbox for \"c7ae21523b55482643e2c47fd21b51e54e57ae37b1e282ad64adecd2f4070b7a\"" Nov 1 00:27:42.088317 containerd[1592]: 2025-11-01 00:27:42.040 [WARNING][5621] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c7ae21523b55482643e2c47fd21b51e54e57ae37b1e282ad64adecd2f4070b7a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--65fc45bf6--gxk8k-eth0", GenerateName:"calico-apiserver-65fc45bf6-", Namespace:"calico-apiserver", SelfLink:"", UID:"8ee5c64c-a2fc-4dbd-b9a2-77e7c3c0f3b8", ResourceVersion:"1232", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 26, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65fc45bf6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1bef3add2227a95f0c2a89ebd9b6be61cef857af448059408e3fe82f4db74d3c", Pod:"calico-apiserver-65fc45bf6-gxk8k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali89c2e5335c6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:27:42.088317 containerd[1592]: 2025-11-01 00:27:42.040 [INFO][5621] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c7ae21523b55482643e2c47fd21b51e54e57ae37b1e282ad64adecd2f4070b7a" Nov 1 00:27:42.088317 containerd[1592]: 2025-11-01 00:27:42.041 [INFO][5621] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c7ae21523b55482643e2c47fd21b51e54e57ae37b1e282ad64adecd2f4070b7a" iface="eth0" netns="" Nov 1 00:27:42.088317 containerd[1592]: 2025-11-01 00:27:42.041 [INFO][5621] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c7ae21523b55482643e2c47fd21b51e54e57ae37b1e282ad64adecd2f4070b7a" Nov 1 00:27:42.088317 containerd[1592]: 2025-11-01 00:27:42.041 [INFO][5621] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c7ae21523b55482643e2c47fd21b51e54e57ae37b1e282ad64adecd2f4070b7a" Nov 1 00:27:42.088317 containerd[1592]: 2025-11-01 00:27:42.069 [INFO][5630] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c7ae21523b55482643e2c47fd21b51e54e57ae37b1e282ad64adecd2f4070b7a" HandleID="k8s-pod-network.c7ae21523b55482643e2c47fd21b51e54e57ae37b1e282ad64adecd2f4070b7a" Workload="localhost-k8s-calico--apiserver--65fc45bf6--gxk8k-eth0" Nov 1 00:27:42.088317 containerd[1592]: 2025-11-01 00:27:42.071 [INFO][5630] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:27:42.088317 containerd[1592]: 2025-11-01 00:27:42.071 [INFO][5630] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:27:42.088317 containerd[1592]: 2025-11-01 00:27:42.079 [WARNING][5630] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c7ae21523b55482643e2c47fd21b51e54e57ae37b1e282ad64adecd2f4070b7a" HandleID="k8s-pod-network.c7ae21523b55482643e2c47fd21b51e54e57ae37b1e282ad64adecd2f4070b7a" Workload="localhost-k8s-calico--apiserver--65fc45bf6--gxk8k-eth0" Nov 1 00:27:42.088317 containerd[1592]: 2025-11-01 00:27:42.079 [INFO][5630] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c7ae21523b55482643e2c47fd21b51e54e57ae37b1e282ad64adecd2f4070b7a" HandleID="k8s-pod-network.c7ae21523b55482643e2c47fd21b51e54e57ae37b1e282ad64adecd2f4070b7a" Workload="localhost-k8s-calico--apiserver--65fc45bf6--gxk8k-eth0" Nov 1 00:27:42.088317 containerd[1592]: 2025-11-01 00:27:42.081 [INFO][5630] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:27:42.088317 containerd[1592]: 2025-11-01 00:27:42.085 [INFO][5621] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c7ae21523b55482643e2c47fd21b51e54e57ae37b1e282ad64adecd2f4070b7a" Nov 1 00:27:42.090909 containerd[1592]: time="2025-11-01T00:27:42.088370537Z" level=info msg="TearDown network for sandbox \"c7ae21523b55482643e2c47fd21b51e54e57ae37b1e282ad64adecd2f4070b7a\" successfully" Nov 1 00:27:42.090909 containerd[1592]: time="2025-11-01T00:27:42.088402849Z" level=info msg="StopPodSandbox for \"c7ae21523b55482643e2c47fd21b51e54e57ae37b1e282ad64adecd2f4070b7a\" returns successfully" Nov 1 00:27:42.090909 containerd[1592]: time="2025-11-01T00:27:42.088932556Z" level=info msg="RemovePodSandbox for \"c7ae21523b55482643e2c47fd21b51e54e57ae37b1e282ad64adecd2f4070b7a\"" Nov 1 00:27:42.090909 containerd[1592]: time="2025-11-01T00:27:42.088967523Z" level=info msg="Forcibly stopping sandbox \"c7ae21523b55482643e2c47fd21b51e54e57ae37b1e282ad64adecd2f4070b7a\"" Nov 1 00:27:42.191535 containerd[1592]: 2025-11-01 00:27:42.132 [WARNING][5648] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c7ae21523b55482643e2c47fd21b51e54e57ae37b1e282ad64adecd2f4070b7a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--65fc45bf6--gxk8k-eth0", GenerateName:"calico-apiserver-65fc45bf6-", Namespace:"calico-apiserver", SelfLink:"", UID:"8ee5c64c-a2fc-4dbd-b9a2-77e7c3c0f3b8", ResourceVersion:"1232", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 26, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65fc45bf6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1bef3add2227a95f0c2a89ebd9b6be61cef857af448059408e3fe82f4db74d3c", Pod:"calico-apiserver-65fc45bf6-gxk8k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali89c2e5335c6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:27:42.191535 containerd[1592]: 2025-11-01 00:27:42.134 [INFO][5648] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c7ae21523b55482643e2c47fd21b51e54e57ae37b1e282ad64adecd2f4070b7a" Nov 1 00:27:42.191535 containerd[1592]: 2025-11-01 00:27:42.135 [INFO][5648] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c7ae21523b55482643e2c47fd21b51e54e57ae37b1e282ad64adecd2f4070b7a" iface="eth0" netns="" Nov 1 00:27:42.191535 containerd[1592]: 2025-11-01 00:27:42.135 [INFO][5648] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c7ae21523b55482643e2c47fd21b51e54e57ae37b1e282ad64adecd2f4070b7a" Nov 1 00:27:42.191535 containerd[1592]: 2025-11-01 00:27:42.135 [INFO][5648] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c7ae21523b55482643e2c47fd21b51e54e57ae37b1e282ad64adecd2f4070b7a" Nov 1 00:27:42.191535 containerd[1592]: 2025-11-01 00:27:42.169 [INFO][5656] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c7ae21523b55482643e2c47fd21b51e54e57ae37b1e282ad64adecd2f4070b7a" HandleID="k8s-pod-network.c7ae21523b55482643e2c47fd21b51e54e57ae37b1e282ad64adecd2f4070b7a" Workload="localhost-k8s-calico--apiserver--65fc45bf6--gxk8k-eth0" Nov 1 00:27:42.191535 containerd[1592]: 2025-11-01 00:27:42.169 [INFO][5656] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:27:42.191535 containerd[1592]: 2025-11-01 00:27:42.170 [INFO][5656] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:27:42.191535 containerd[1592]: 2025-11-01 00:27:42.179 [WARNING][5656] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c7ae21523b55482643e2c47fd21b51e54e57ae37b1e282ad64adecd2f4070b7a" HandleID="k8s-pod-network.c7ae21523b55482643e2c47fd21b51e54e57ae37b1e282ad64adecd2f4070b7a" Workload="localhost-k8s-calico--apiserver--65fc45bf6--gxk8k-eth0" Nov 1 00:27:42.191535 containerd[1592]: 2025-11-01 00:27:42.179 [INFO][5656] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c7ae21523b55482643e2c47fd21b51e54e57ae37b1e282ad64adecd2f4070b7a" HandleID="k8s-pod-network.c7ae21523b55482643e2c47fd21b51e54e57ae37b1e282ad64adecd2f4070b7a" Workload="localhost-k8s-calico--apiserver--65fc45bf6--gxk8k-eth0" Nov 1 00:27:42.191535 containerd[1592]: 2025-11-01 00:27:42.182 [INFO][5656] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:27:42.191535 containerd[1592]: 2025-11-01 00:27:42.188 [INFO][5648] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c7ae21523b55482643e2c47fd21b51e54e57ae37b1e282ad64adecd2f4070b7a" Nov 1 00:27:42.192157 containerd[1592]: time="2025-11-01T00:27:42.191585341Z" level=info msg="TearDown network for sandbox \"c7ae21523b55482643e2c47fd21b51e54e57ae37b1e282ad64adecd2f4070b7a\" successfully" Nov 1 00:27:42.197778 containerd[1592]: time="2025-11-01T00:27:42.197578646Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c7ae21523b55482643e2c47fd21b51e54e57ae37b1e282ad64adecd2f4070b7a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 00:27:42.197778 containerd[1592]: time="2025-11-01T00:27:42.197654742Z" level=info msg="RemovePodSandbox \"c7ae21523b55482643e2c47fd21b51e54e57ae37b1e282ad64adecd2f4070b7a\" returns successfully" Nov 1 00:27:42.198330 containerd[1592]: time="2025-11-01T00:27:42.198287857Z" level=info msg="StopPodSandbox for \"5f08d0a3a1e2afebfee85bba8187b17241970ea7e57f2ab304de733052f32010\"" Nov 1 00:27:42.301889 containerd[1592]: 2025-11-01 00:27:42.250 [WARNING][5674] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5f08d0a3a1e2afebfee85bba8187b17241970ea7e57f2ab304de733052f32010" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--mfl96-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"bd83f5e8-3d82-42fe-a0b0-5807c8a2598f", ResourceVersion:"1239", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 26, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b1714a4bda9ba95d275fe258bb7443585e015a5b6819248f931124dcd9472cdc", Pod:"coredns-668d6bf9bc-mfl96", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif314535f4fb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:27:42.301889 containerd[1592]: 2025-11-01 00:27:42.250 [INFO][5674] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5f08d0a3a1e2afebfee85bba8187b17241970ea7e57f2ab304de733052f32010" Nov 1 00:27:42.301889 containerd[1592]: 2025-11-01 00:27:42.250 [INFO][5674] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5f08d0a3a1e2afebfee85bba8187b17241970ea7e57f2ab304de733052f32010" iface="eth0" netns="" Nov 1 00:27:42.301889 containerd[1592]: 2025-11-01 00:27:42.250 [INFO][5674] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5f08d0a3a1e2afebfee85bba8187b17241970ea7e57f2ab304de733052f32010" Nov 1 00:27:42.301889 containerd[1592]: 2025-11-01 00:27:42.250 [INFO][5674] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5f08d0a3a1e2afebfee85bba8187b17241970ea7e57f2ab304de733052f32010" Nov 1 00:27:42.301889 containerd[1592]: 2025-11-01 00:27:42.285 [INFO][5683] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5f08d0a3a1e2afebfee85bba8187b17241970ea7e57f2ab304de733052f32010" HandleID="k8s-pod-network.5f08d0a3a1e2afebfee85bba8187b17241970ea7e57f2ab304de733052f32010" Workload="localhost-k8s-coredns--668d6bf9bc--mfl96-eth0" Nov 1 00:27:42.301889 containerd[1592]: 2025-11-01 00:27:42.285 [INFO][5683] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:27:42.301889 containerd[1592]: 2025-11-01 00:27:42.285 [INFO][5683] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:27:42.301889 containerd[1592]: 2025-11-01 00:27:42.292 [WARNING][5683] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="5f08d0a3a1e2afebfee85bba8187b17241970ea7e57f2ab304de733052f32010" HandleID="k8s-pod-network.5f08d0a3a1e2afebfee85bba8187b17241970ea7e57f2ab304de733052f32010" Workload="localhost-k8s-coredns--668d6bf9bc--mfl96-eth0" Nov 1 00:27:42.301889 containerd[1592]: 2025-11-01 00:27:42.292 [INFO][5683] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5f08d0a3a1e2afebfee85bba8187b17241970ea7e57f2ab304de733052f32010" HandleID="k8s-pod-network.5f08d0a3a1e2afebfee85bba8187b17241970ea7e57f2ab304de733052f32010" Workload="localhost-k8s-coredns--668d6bf9bc--mfl96-eth0" Nov 1 00:27:42.301889 containerd[1592]: 2025-11-01 00:27:42.294 [INFO][5683] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:27:42.301889 containerd[1592]: 2025-11-01 00:27:42.298 [INFO][5674] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5f08d0a3a1e2afebfee85bba8187b17241970ea7e57f2ab304de733052f32010" Nov 1 00:27:42.302540 containerd[1592]: time="2025-11-01T00:27:42.301933619Z" level=info msg="TearDown network for sandbox \"5f08d0a3a1e2afebfee85bba8187b17241970ea7e57f2ab304de733052f32010\" successfully" Nov 1 00:27:42.302540 containerd[1592]: time="2025-11-01T00:27:42.301965039Z" level=info msg="StopPodSandbox for \"5f08d0a3a1e2afebfee85bba8187b17241970ea7e57f2ab304de733052f32010\" returns successfully" Nov 1 00:27:42.303273 containerd[1592]: time="2025-11-01T00:27:42.303230288Z" level=info msg="RemovePodSandbox for \"5f08d0a3a1e2afebfee85bba8187b17241970ea7e57f2ab304de733052f32010\"" Nov 1 00:27:42.303325 containerd[1592]: time="2025-11-01T00:27:42.303274333Z" level=info msg="Forcibly stopping sandbox \"5f08d0a3a1e2afebfee85bba8187b17241970ea7e57f2ab304de733052f32010\"" Nov 1 00:27:42.417515 containerd[1592]: 2025-11-01 00:27:42.357 [WARNING][5700] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5f08d0a3a1e2afebfee85bba8187b17241970ea7e57f2ab304de733052f32010" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--mfl96-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"bd83f5e8-3d82-42fe-a0b0-5807c8a2598f", ResourceVersion:"1239", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 26, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b1714a4bda9ba95d275fe258bb7443585e015a5b6819248f931124dcd9472cdc", Pod:"coredns-668d6bf9bc-mfl96", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif314535f4fb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:27:42.417515 containerd[1592]: 2025-11-01 00:27:42.357 [INFO][5700] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5f08d0a3a1e2afebfee85bba8187b17241970ea7e57f2ab304de733052f32010" Nov 1 00:27:42.417515 containerd[1592]: 2025-11-01 00:27:42.358 [INFO][5700] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5f08d0a3a1e2afebfee85bba8187b17241970ea7e57f2ab304de733052f32010" iface="eth0" netns="" Nov 1 00:27:42.417515 containerd[1592]: 2025-11-01 00:27:42.358 [INFO][5700] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5f08d0a3a1e2afebfee85bba8187b17241970ea7e57f2ab304de733052f32010" Nov 1 00:27:42.417515 containerd[1592]: 2025-11-01 00:27:42.358 [INFO][5700] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5f08d0a3a1e2afebfee85bba8187b17241970ea7e57f2ab304de733052f32010" Nov 1 00:27:42.417515 containerd[1592]: 2025-11-01 00:27:42.392 [INFO][5710] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5f08d0a3a1e2afebfee85bba8187b17241970ea7e57f2ab304de733052f32010" HandleID="k8s-pod-network.5f08d0a3a1e2afebfee85bba8187b17241970ea7e57f2ab304de733052f32010" Workload="localhost-k8s-coredns--668d6bf9bc--mfl96-eth0" Nov 1 00:27:42.417515 containerd[1592]: 2025-11-01 00:27:42.393 [INFO][5710] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:27:42.417515 containerd[1592]: 2025-11-01 00:27:42.395 [INFO][5710] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:27:42.417515 containerd[1592]: 2025-11-01 00:27:42.405 [WARNING][5710] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="5f08d0a3a1e2afebfee85bba8187b17241970ea7e57f2ab304de733052f32010" HandleID="k8s-pod-network.5f08d0a3a1e2afebfee85bba8187b17241970ea7e57f2ab304de733052f32010" Workload="localhost-k8s-coredns--668d6bf9bc--mfl96-eth0" Nov 1 00:27:42.417515 containerd[1592]: 2025-11-01 00:27:42.405 [INFO][5710] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5f08d0a3a1e2afebfee85bba8187b17241970ea7e57f2ab304de733052f32010" HandleID="k8s-pod-network.5f08d0a3a1e2afebfee85bba8187b17241970ea7e57f2ab304de733052f32010" Workload="localhost-k8s-coredns--668d6bf9bc--mfl96-eth0" Nov 1 00:27:42.417515 containerd[1592]: 2025-11-01 00:27:42.408 [INFO][5710] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:27:42.417515 containerd[1592]: 2025-11-01 00:27:42.412 [INFO][5700] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5f08d0a3a1e2afebfee85bba8187b17241970ea7e57f2ab304de733052f32010" Nov 1 00:27:42.418186 containerd[1592]: time="2025-11-01T00:27:42.417575087Z" level=info msg="TearDown network for sandbox \"5f08d0a3a1e2afebfee85bba8187b17241970ea7e57f2ab304de733052f32010\" successfully" Nov 1 00:27:42.423197 containerd[1592]: time="2025-11-01T00:27:42.423122475Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5f08d0a3a1e2afebfee85bba8187b17241970ea7e57f2ab304de733052f32010\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 00:27:42.423267 containerd[1592]: time="2025-11-01T00:27:42.423206997Z" level=info msg="RemovePodSandbox \"5f08d0a3a1e2afebfee85bba8187b17241970ea7e57f2ab304de733052f32010\" returns successfully" Nov 1 00:27:42.423807 containerd[1592]: time="2025-11-01T00:27:42.423772202Z" level=info msg="StopPodSandbox for \"e830ade982e87430a4c9d8103c67e25c0388a271f1b55b585584831f33a624c9\"" Nov 1 00:27:42.541575 containerd[1592]: 2025-11-01 00:27:42.467 [WARNING][5728] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="e830ade982e87430a4c9d8103c67e25c0388a271f1b55b585584831f33a624c9" WorkloadEndpoint="localhost-k8s-whisker--7f7f7788f--4djwn-eth0" Nov 1 00:27:42.541575 containerd[1592]: 2025-11-01 00:27:42.467 [INFO][5728] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e830ade982e87430a4c9d8103c67e25c0388a271f1b55b585584831f33a624c9" Nov 1 00:27:42.541575 containerd[1592]: 2025-11-01 00:27:42.467 [INFO][5728] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e830ade982e87430a4c9d8103c67e25c0388a271f1b55b585584831f33a624c9" iface="eth0" netns="" Nov 1 00:27:42.541575 containerd[1592]: 2025-11-01 00:27:42.467 [INFO][5728] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e830ade982e87430a4c9d8103c67e25c0388a271f1b55b585584831f33a624c9" Nov 1 00:27:42.541575 containerd[1592]: 2025-11-01 00:27:42.467 [INFO][5728] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e830ade982e87430a4c9d8103c67e25c0388a271f1b55b585584831f33a624c9" Nov 1 00:27:42.541575 containerd[1592]: 2025-11-01 00:27:42.499 [INFO][5737] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e830ade982e87430a4c9d8103c67e25c0388a271f1b55b585584831f33a624c9" HandleID="k8s-pod-network.e830ade982e87430a4c9d8103c67e25c0388a271f1b55b585584831f33a624c9" Workload="localhost-k8s-whisker--7f7f7788f--4djwn-eth0" Nov 1 00:27:42.541575 containerd[1592]: 2025-11-01 00:27:42.500 [INFO][5737] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:27:42.541575 containerd[1592]: 2025-11-01 00:27:42.500 [INFO][5737] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:27:42.541575 containerd[1592]: 2025-11-01 00:27:42.531 [WARNING][5737] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="e830ade982e87430a4c9d8103c67e25c0388a271f1b55b585584831f33a624c9" HandleID="k8s-pod-network.e830ade982e87430a4c9d8103c67e25c0388a271f1b55b585584831f33a624c9" Workload="localhost-k8s-whisker--7f7f7788f--4djwn-eth0" Nov 1 00:27:42.541575 containerd[1592]: 2025-11-01 00:27:42.531 [INFO][5737] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e830ade982e87430a4c9d8103c67e25c0388a271f1b55b585584831f33a624c9" HandleID="k8s-pod-network.e830ade982e87430a4c9d8103c67e25c0388a271f1b55b585584831f33a624c9" Workload="localhost-k8s-whisker--7f7f7788f--4djwn-eth0" Nov 1 00:27:42.541575 containerd[1592]: 2025-11-01 00:27:42.534 [INFO][5737] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:27:42.541575 containerd[1592]: 2025-11-01 00:27:42.538 [INFO][5728] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="e830ade982e87430a4c9d8103c67e25c0388a271f1b55b585584831f33a624c9" Nov 1 00:27:42.541575 containerd[1592]: time="2025-11-01T00:27:42.541536165Z" level=info msg="TearDown network for sandbox \"e830ade982e87430a4c9d8103c67e25c0388a271f1b55b585584831f33a624c9\" successfully" Nov 1 00:27:42.541575 containerd[1592]: time="2025-11-01T00:27:42.541569830Z" level=info msg="StopPodSandbox for \"e830ade982e87430a4c9d8103c67e25c0388a271f1b55b585584831f33a624c9\" returns successfully" Nov 1 00:27:42.542616 containerd[1592]: time="2025-11-01T00:27:42.542196231Z" level=info msg="RemovePodSandbox for \"e830ade982e87430a4c9d8103c67e25c0388a271f1b55b585584831f33a624c9\"" Nov 1 00:27:42.542616 containerd[1592]: time="2025-11-01T00:27:42.542227442Z" level=info msg="Forcibly stopping sandbox \"e830ade982e87430a4c9d8103c67e25c0388a271f1b55b585584831f33a624c9\"" Nov 1 00:27:42.619502 containerd[1592]: 2025-11-01 00:27:42.582 [WARNING][5755] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="e830ade982e87430a4c9d8103c67e25c0388a271f1b55b585584831f33a624c9" WorkloadEndpoint="localhost-k8s-whisker--7f7f7788f--4djwn-eth0" Nov 1 00:27:42.619502 containerd[1592]: 2025-11-01 00:27:42.583 [INFO][5755] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e830ade982e87430a4c9d8103c67e25c0388a271f1b55b585584831f33a624c9" Nov 1 00:27:42.619502 containerd[1592]: 2025-11-01 00:27:42.583 [INFO][5755] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e830ade982e87430a4c9d8103c67e25c0388a271f1b55b585584831f33a624c9" iface="eth0" netns="" Nov 1 00:27:42.619502 containerd[1592]: 2025-11-01 00:27:42.583 [INFO][5755] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e830ade982e87430a4c9d8103c67e25c0388a271f1b55b585584831f33a624c9" Nov 1 00:27:42.619502 containerd[1592]: 2025-11-01 00:27:42.583 [INFO][5755] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e830ade982e87430a4c9d8103c67e25c0388a271f1b55b585584831f33a624c9" Nov 1 00:27:42.619502 containerd[1592]: 2025-11-01 00:27:42.605 [INFO][5764] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e830ade982e87430a4c9d8103c67e25c0388a271f1b55b585584831f33a624c9" HandleID="k8s-pod-network.e830ade982e87430a4c9d8103c67e25c0388a271f1b55b585584831f33a624c9" Workload="localhost-k8s-whisker--7f7f7788f--4djwn-eth0" Nov 1 00:27:42.619502 containerd[1592]: 2025-11-01 00:27:42.605 [INFO][5764] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:27:42.619502 containerd[1592]: 2025-11-01 00:27:42.605 [INFO][5764] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:27:42.619502 containerd[1592]: 2025-11-01 00:27:42.611 [WARNING][5764] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e830ade982e87430a4c9d8103c67e25c0388a271f1b55b585584831f33a624c9" HandleID="k8s-pod-network.e830ade982e87430a4c9d8103c67e25c0388a271f1b55b585584831f33a624c9" Workload="localhost-k8s-whisker--7f7f7788f--4djwn-eth0" Nov 1 00:27:42.619502 containerd[1592]: 2025-11-01 00:27:42.611 [INFO][5764] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e830ade982e87430a4c9d8103c67e25c0388a271f1b55b585584831f33a624c9" HandleID="k8s-pod-network.e830ade982e87430a4c9d8103c67e25c0388a271f1b55b585584831f33a624c9" Workload="localhost-k8s-whisker--7f7f7788f--4djwn-eth0" Nov 1 00:27:42.619502 containerd[1592]: 2025-11-01 00:27:42.613 [INFO][5764] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:27:42.619502 containerd[1592]: 2025-11-01 00:27:42.616 [INFO][5755] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e830ade982e87430a4c9d8103c67e25c0388a271f1b55b585584831f33a624c9" Nov 1 00:27:42.619874 containerd[1592]: time="2025-11-01T00:27:42.619554894Z" level=info msg="TearDown network for sandbox \"e830ade982e87430a4c9d8103c67e25c0388a271f1b55b585584831f33a624c9\" successfully" Nov 1 00:27:42.779633 containerd[1592]: time="2025-11-01T00:27:42.779493352Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e830ade982e87430a4c9d8103c67e25c0388a271f1b55b585584831f33a624c9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 00:27:42.779633 containerd[1592]: time="2025-11-01T00:27:42.779588545Z" level=info msg="RemovePodSandbox \"e830ade982e87430a4c9d8103c67e25c0388a271f1b55b585584831f33a624c9\" returns successfully" Nov 1 00:27:42.780239 containerd[1592]: time="2025-11-01T00:27:42.780188656Z" level=info msg="StopPodSandbox for \"f362570be3d3ee6e4abb56156f3350d5fab206f8c712f1e6694da29678f51f1e\"" Nov 1 00:27:42.861348 containerd[1592]: 2025-11-01 00:27:42.820 [WARNING][5781] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f362570be3d3ee6e4abb56156f3350d5fab206f8c712f1e6694da29678f51f1e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--65fc45bf6--dz8ms-eth0", GenerateName:"calico-apiserver-65fc45bf6-", Namespace:"calico-apiserver", SelfLink:"", UID:"a7329f0c-4569-4192-ab57-1ba0d9bc5c3f", ResourceVersion:"1171", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 26, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65fc45bf6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c77955ff0276882fd36da787ef420a795f41620e0033cac5a3cf57718c8a7f1a", Pod:"calico-apiserver-65fc45bf6-dz8ms", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliefbf0ec9906", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:27:42.861348 containerd[1592]: 2025-11-01 00:27:42.820 [INFO][5781] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f362570be3d3ee6e4abb56156f3350d5fab206f8c712f1e6694da29678f51f1e" Nov 1 00:27:42.861348 containerd[1592]: 2025-11-01 00:27:42.820 [INFO][5781] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f362570be3d3ee6e4abb56156f3350d5fab206f8c712f1e6694da29678f51f1e" iface="eth0" netns="" Nov 1 00:27:42.861348 containerd[1592]: 2025-11-01 00:27:42.820 [INFO][5781] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f362570be3d3ee6e4abb56156f3350d5fab206f8c712f1e6694da29678f51f1e" Nov 1 00:27:42.861348 containerd[1592]: 2025-11-01 00:27:42.820 [INFO][5781] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f362570be3d3ee6e4abb56156f3350d5fab206f8c712f1e6694da29678f51f1e" Nov 1 00:27:42.861348 containerd[1592]: 2025-11-01 00:27:42.844 [INFO][5790] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f362570be3d3ee6e4abb56156f3350d5fab206f8c712f1e6694da29678f51f1e" HandleID="k8s-pod-network.f362570be3d3ee6e4abb56156f3350d5fab206f8c712f1e6694da29678f51f1e" Workload="localhost-k8s-calico--apiserver--65fc45bf6--dz8ms-eth0" Nov 1 00:27:42.861348 containerd[1592]: 2025-11-01 00:27:42.844 [INFO][5790] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:27:42.861348 containerd[1592]: 2025-11-01 00:27:42.844 [INFO][5790] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:27:42.861348 containerd[1592]: 2025-11-01 00:27:42.852 [WARNING][5790] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f362570be3d3ee6e4abb56156f3350d5fab206f8c712f1e6694da29678f51f1e" HandleID="k8s-pod-network.f362570be3d3ee6e4abb56156f3350d5fab206f8c712f1e6694da29678f51f1e" Workload="localhost-k8s-calico--apiserver--65fc45bf6--dz8ms-eth0" Nov 1 00:27:42.861348 containerd[1592]: 2025-11-01 00:27:42.852 [INFO][5790] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f362570be3d3ee6e4abb56156f3350d5fab206f8c712f1e6694da29678f51f1e" HandleID="k8s-pod-network.f362570be3d3ee6e4abb56156f3350d5fab206f8c712f1e6694da29678f51f1e" Workload="localhost-k8s-calico--apiserver--65fc45bf6--dz8ms-eth0" Nov 1 00:27:42.861348 containerd[1592]: 2025-11-01 00:27:42.854 [INFO][5790] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:27:42.861348 containerd[1592]: 2025-11-01 00:27:42.857 [INFO][5781] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f362570be3d3ee6e4abb56156f3350d5fab206f8c712f1e6694da29678f51f1e" Nov 1 00:27:42.861348 containerd[1592]: time="2025-11-01T00:27:42.861323168Z" level=info msg="TearDown network for sandbox \"f362570be3d3ee6e4abb56156f3350d5fab206f8c712f1e6694da29678f51f1e\" successfully" Nov 1 00:27:42.861348 containerd[1592]: time="2025-11-01T00:27:42.861349318Z" level=info msg="StopPodSandbox for \"f362570be3d3ee6e4abb56156f3350d5fab206f8c712f1e6694da29678f51f1e\" returns successfully" Nov 1 00:27:42.862074 containerd[1592]: time="2025-11-01T00:27:42.861992923Z" level=info msg="RemovePodSandbox for \"f362570be3d3ee6e4abb56156f3350d5fab206f8c712f1e6694da29678f51f1e\"" Nov 1 00:27:42.862123 containerd[1592]: time="2025-11-01T00:27:42.862084249Z" level=info msg="Forcibly stopping sandbox \"f362570be3d3ee6e4abb56156f3350d5fab206f8c712f1e6694da29678f51f1e\"" Nov 1 00:27:42.881401 systemd[1]: Started sshd@13-10.0.0.119:22-10.0.0.1:40620.service - OpenSSH per-connection server daemon (10.0.0.1:40620). Nov 1 00:27:42.935088 sshd[5813]: Accepted publickey for core from 10.0.0.1 port 40620 ssh2: RSA SHA256:jhdmrhZXRbwOcBbBZQf03C3Vb9eSOfvUjDz7MOKA8jE Nov 1 00:27:42.937932 sshd[5813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:27:42.944587 systemd-logind[1577]: New session 14 of user core. Nov 1 00:27:42.950347 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 1 00:27:42.970825 containerd[1592]: 2025-11-01 00:27:42.908 [WARNING][5808] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f362570be3d3ee6e4abb56156f3350d5fab206f8c712f1e6694da29678f51f1e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--65fc45bf6--dz8ms-eth0", GenerateName:"calico-apiserver-65fc45bf6-", Namespace:"calico-apiserver", SelfLink:"", UID:"a7329f0c-4569-4192-ab57-1ba0d9bc5c3f", ResourceVersion:"1171", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 26, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65fc45bf6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c77955ff0276882fd36da787ef420a795f41620e0033cac5a3cf57718c8a7f1a", Pod:"calico-apiserver-65fc45bf6-dz8ms", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliefbf0ec9906", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:27:42.970825 containerd[1592]: 2025-11-01 00:27:42.908 [INFO][5808] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f362570be3d3ee6e4abb56156f3350d5fab206f8c712f1e6694da29678f51f1e" Nov 1 00:27:42.970825 containerd[1592]: 2025-11-01 00:27:42.908 [INFO][5808] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f362570be3d3ee6e4abb56156f3350d5fab206f8c712f1e6694da29678f51f1e" iface="eth0" netns="" Nov 1 00:27:42.970825 containerd[1592]: 2025-11-01 00:27:42.908 [INFO][5808] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f362570be3d3ee6e4abb56156f3350d5fab206f8c712f1e6694da29678f51f1e" Nov 1 00:27:42.970825 containerd[1592]: 2025-11-01 00:27:42.908 [INFO][5808] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f362570be3d3ee6e4abb56156f3350d5fab206f8c712f1e6694da29678f51f1e" Nov 1 00:27:42.970825 containerd[1592]: 2025-11-01 00:27:42.935 [INFO][5818] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f362570be3d3ee6e4abb56156f3350d5fab206f8c712f1e6694da29678f51f1e" HandleID="k8s-pod-network.f362570be3d3ee6e4abb56156f3350d5fab206f8c712f1e6694da29678f51f1e" Workload="localhost-k8s-calico--apiserver--65fc45bf6--dz8ms-eth0" Nov 1 00:27:42.970825 containerd[1592]: 2025-11-01 00:27:42.935 [INFO][5818] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:27:42.970825 containerd[1592]: 2025-11-01 00:27:42.935 [INFO][5818] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:27:42.970825 containerd[1592]: 2025-11-01 00:27:42.942 [WARNING][5818] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f362570be3d3ee6e4abb56156f3350d5fab206f8c712f1e6694da29678f51f1e" HandleID="k8s-pod-network.f362570be3d3ee6e4abb56156f3350d5fab206f8c712f1e6694da29678f51f1e" Workload="localhost-k8s-calico--apiserver--65fc45bf6--dz8ms-eth0" Nov 1 00:27:42.970825 containerd[1592]: 2025-11-01 00:27:42.942 [INFO][5818] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f362570be3d3ee6e4abb56156f3350d5fab206f8c712f1e6694da29678f51f1e" HandleID="k8s-pod-network.f362570be3d3ee6e4abb56156f3350d5fab206f8c712f1e6694da29678f51f1e" Workload="localhost-k8s-calico--apiserver--65fc45bf6--dz8ms-eth0" Nov 1 00:27:42.970825 containerd[1592]: 2025-11-01 00:27:42.945 [INFO][5818] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:27:42.970825 containerd[1592]: 2025-11-01 00:27:42.950 [INFO][5808] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f362570be3d3ee6e4abb56156f3350d5fab206f8c712f1e6694da29678f51f1e" Nov 1 00:27:42.971563 containerd[1592]: time="2025-11-01T00:27:42.971499946Z" level=info msg="TearDown network for sandbox \"f362570be3d3ee6e4abb56156f3350d5fab206f8c712f1e6694da29678f51f1e\" successfully" Nov 1 00:27:43.051867 containerd[1592]: time="2025-11-01T00:27:43.051789094Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f362570be3d3ee6e4abb56156f3350d5fab206f8c712f1e6694da29678f51f1e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 00:27:43.051867 containerd[1592]: time="2025-11-01T00:27:43.051887774Z" level=info msg="RemovePodSandbox \"f362570be3d3ee6e4abb56156f3350d5fab206f8c712f1e6694da29678f51f1e\" returns successfully" Nov 1 00:27:43.052702 containerd[1592]: time="2025-11-01T00:27:43.052655807Z" level=info msg="StopPodSandbox for \"6d5b7efa90610db921992a055211b5a08ddb076485f1d878f9dafeb00a951a7f\"" Nov 1 00:27:43.150850 sshd[5813]: pam_unix(sshd:session): session closed for user core Nov 1 00:27:43.154636 systemd[1]: sshd@13-10.0.0.119:22-10.0.0.1:40620.service: Deactivated successfully. Nov 1 00:27:43.158719 systemd-logind[1577]: Session 14 logged out. Waiting for processes to exit. Nov 1 00:27:43.159627 systemd[1]: session-14.scope: Deactivated successfully. Nov 1 00:27:43.161183 systemd-logind[1577]: Removed session 14. Nov 1 00:27:43.185336 containerd[1592]: 2025-11-01 00:27:43.147 [WARNING][5846] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6d5b7efa90610db921992a055211b5a08ddb076485f1d878f9dafeb00a951a7f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--kjzn2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e3bbd1b7-2cec-41ab-97aa-54499c93466d", ResourceVersion:"1244", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 26, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c8b2f34ea7e91dda7073a8207171948fd852b4934dcb7ea293f66ad5327ad085", Pod:"coredns-668d6bf9bc-kjzn2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6b06f2037f4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:27:43.185336 containerd[1592]: 2025-11-01 00:27:43.147 [INFO][5846] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6d5b7efa90610db921992a055211b5a08ddb076485f1d878f9dafeb00a951a7f" Nov 1 00:27:43.185336 containerd[1592]: 2025-11-01 00:27:43.147 [INFO][5846] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6d5b7efa90610db921992a055211b5a08ddb076485f1d878f9dafeb00a951a7f" iface="eth0" netns="" Nov 1 00:27:43.185336 containerd[1592]: 2025-11-01 00:27:43.147 [INFO][5846] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6d5b7efa90610db921992a055211b5a08ddb076485f1d878f9dafeb00a951a7f" Nov 1 00:27:43.185336 containerd[1592]: 2025-11-01 00:27:43.147 [INFO][5846] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6d5b7efa90610db921992a055211b5a08ddb076485f1d878f9dafeb00a951a7f" Nov 1 00:27:43.185336 containerd[1592]: 2025-11-01 00:27:43.171 [INFO][5854] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6d5b7efa90610db921992a055211b5a08ddb076485f1d878f9dafeb00a951a7f" HandleID="k8s-pod-network.6d5b7efa90610db921992a055211b5a08ddb076485f1d878f9dafeb00a951a7f" Workload="localhost-k8s-coredns--668d6bf9bc--kjzn2-eth0" Nov 1 00:27:43.185336 containerd[1592]: 2025-11-01 00:27:43.172 [INFO][5854] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:27:43.185336 containerd[1592]: 2025-11-01 00:27:43.172 [INFO][5854] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:27:43.185336 containerd[1592]: 2025-11-01 00:27:43.177 [WARNING][5854] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="6d5b7efa90610db921992a055211b5a08ddb076485f1d878f9dafeb00a951a7f" HandleID="k8s-pod-network.6d5b7efa90610db921992a055211b5a08ddb076485f1d878f9dafeb00a951a7f" Workload="localhost-k8s-coredns--668d6bf9bc--kjzn2-eth0" Nov 1 00:27:43.185336 containerd[1592]: 2025-11-01 00:27:43.177 [INFO][5854] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6d5b7efa90610db921992a055211b5a08ddb076485f1d878f9dafeb00a951a7f" HandleID="k8s-pod-network.6d5b7efa90610db921992a055211b5a08ddb076485f1d878f9dafeb00a951a7f" Workload="localhost-k8s-coredns--668d6bf9bc--kjzn2-eth0" Nov 1 00:27:43.185336 containerd[1592]: 2025-11-01 00:27:43.179 [INFO][5854] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:27:43.185336 containerd[1592]: 2025-11-01 00:27:43.182 [INFO][5846] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6d5b7efa90610db921992a055211b5a08ddb076485f1d878f9dafeb00a951a7f" Nov 1 00:27:43.185879 containerd[1592]: time="2025-11-01T00:27:43.185396975Z" level=info msg="TearDown network for sandbox \"6d5b7efa90610db921992a055211b5a08ddb076485f1d878f9dafeb00a951a7f\" successfully" Nov 1 00:27:43.185879 containerd[1592]: time="2025-11-01T00:27:43.185430841Z" level=info msg="StopPodSandbox for \"6d5b7efa90610db921992a055211b5a08ddb076485f1d878f9dafeb00a951a7f\" returns successfully" Nov 1 00:27:43.186120 containerd[1592]: time="2025-11-01T00:27:43.186081058Z" level=info msg="RemovePodSandbox for \"6d5b7efa90610db921992a055211b5a08ddb076485f1d878f9dafeb00a951a7f\"" Nov 1 00:27:43.186190 containerd[1592]: time="2025-11-01T00:27:43.186151152Z" level=info msg="Forcibly stopping sandbox \"6d5b7efa90610db921992a055211b5a08ddb076485f1d878f9dafeb00a951a7f\"" Nov 1 00:27:43.725240 containerd[1592]: 2025-11-01 00:27:43.335 [WARNING][5874] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6d5b7efa90610db921992a055211b5a08ddb076485f1d878f9dafeb00a951a7f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--kjzn2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e3bbd1b7-2cec-41ab-97aa-54499c93466d", ResourceVersion:"1244", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 26, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c8b2f34ea7e91dda7073a8207171948fd852b4934dcb7ea293f66ad5327ad085", Pod:"coredns-668d6bf9bc-kjzn2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6b06f2037f4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:27:43.725240 containerd[1592]: 2025-11-01 00:27:43.336 [INFO][5874] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6d5b7efa90610db921992a055211b5a08ddb076485f1d878f9dafeb00a951a7f" Nov 1 00:27:43.725240 containerd[1592]: 2025-11-01 00:27:43.336 [INFO][5874] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6d5b7efa90610db921992a055211b5a08ddb076485f1d878f9dafeb00a951a7f" iface="eth0" netns="" Nov 1 00:27:43.725240 containerd[1592]: 2025-11-01 00:27:43.336 [INFO][5874] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6d5b7efa90610db921992a055211b5a08ddb076485f1d878f9dafeb00a951a7f" Nov 1 00:27:43.725240 containerd[1592]: 2025-11-01 00:27:43.336 [INFO][5874] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6d5b7efa90610db921992a055211b5a08ddb076485f1d878f9dafeb00a951a7f" Nov 1 00:27:43.725240 containerd[1592]: 2025-11-01 00:27:43.364 [INFO][5882] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6d5b7efa90610db921992a055211b5a08ddb076485f1d878f9dafeb00a951a7f" HandleID="k8s-pod-network.6d5b7efa90610db921992a055211b5a08ddb076485f1d878f9dafeb00a951a7f" Workload="localhost-k8s-coredns--668d6bf9bc--kjzn2-eth0" Nov 1 00:27:43.725240 containerd[1592]: 2025-11-01 00:27:43.364 [INFO][5882] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:27:43.725240 containerd[1592]: 2025-11-01 00:27:43.364 [INFO][5882] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:27:43.725240 containerd[1592]: 2025-11-01 00:27:43.718 [WARNING][5882] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="6d5b7efa90610db921992a055211b5a08ddb076485f1d878f9dafeb00a951a7f" HandleID="k8s-pod-network.6d5b7efa90610db921992a055211b5a08ddb076485f1d878f9dafeb00a951a7f" Workload="localhost-k8s-coredns--668d6bf9bc--kjzn2-eth0" Nov 1 00:27:43.725240 containerd[1592]: 2025-11-01 00:27:43.718 [INFO][5882] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6d5b7efa90610db921992a055211b5a08ddb076485f1d878f9dafeb00a951a7f" HandleID="k8s-pod-network.6d5b7efa90610db921992a055211b5a08ddb076485f1d878f9dafeb00a951a7f" Workload="localhost-k8s-coredns--668d6bf9bc--kjzn2-eth0" Nov 1 00:27:43.725240 containerd[1592]: 2025-11-01 00:27:43.719 [INFO][5882] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:27:43.725240 containerd[1592]: 2025-11-01 00:27:43.722 [INFO][5874] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6d5b7efa90610db921992a055211b5a08ddb076485f1d878f9dafeb00a951a7f" Nov 1 00:27:43.726176 containerd[1592]: time="2025-11-01T00:27:43.725282300Z" level=info msg="TearDown network for sandbox \"6d5b7efa90610db921992a055211b5a08ddb076485f1d878f9dafeb00a951a7f\" successfully" Nov 1 00:27:43.897606 containerd[1592]: time="2025-11-01T00:27:43.897534314Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:27:43.900735 containerd[1592]: time="2025-11-01T00:27:43.900672915Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6d5b7efa90610db921992a055211b5a08ddb076485f1d878f9dafeb00a951a7f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 00:27:43.900735 containerd[1592]: time="2025-11-01T00:27:43.900743150Z" level=info msg="RemovePodSandbox \"6d5b7efa90610db921992a055211b5a08ddb076485f1d878f9dafeb00a951a7f\" returns successfully" Nov 1 00:27:43.902047 containerd[1592]: time="2025-11-01T00:27:43.901948933Z" level=info msg="StopPodSandbox for \"5caae547c56659473e7cbbb3afda497a5fb385690ae911da2b44308769a244e8\"" Nov 1 00:27:44.102419 containerd[1592]: 2025-11-01 00:27:44.062 [WARNING][5899] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5caae547c56659473e7cbbb3afda497a5fb385690ae911da2b44308769a244e8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--l9nqw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"675112ea-20ac-4b20-b92c-b74dc58b95cd", ResourceVersion:"1300", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 27, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a713033b97f168ddd57a416ccefb61a0073c7bc3d9c58cfe549a0c131506be48", Pod:"csi-node-driver-l9nqw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali84ca4c83ae7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:27:44.102419 containerd[1592]: 2025-11-01 00:27:44.062 [INFO][5899] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5caae547c56659473e7cbbb3afda497a5fb385690ae911da2b44308769a244e8" Nov 1 00:27:44.102419 containerd[1592]: 2025-11-01 00:27:44.062 [INFO][5899] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5caae547c56659473e7cbbb3afda497a5fb385690ae911da2b44308769a244e8" iface="eth0" netns="" Nov 1 00:27:44.102419 containerd[1592]: 2025-11-01 00:27:44.062 [INFO][5899] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5caae547c56659473e7cbbb3afda497a5fb385690ae911da2b44308769a244e8" Nov 1 00:27:44.102419 containerd[1592]: 2025-11-01 00:27:44.062 [INFO][5899] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5caae547c56659473e7cbbb3afda497a5fb385690ae911da2b44308769a244e8" Nov 1 00:27:44.102419 containerd[1592]: 2025-11-01 00:27:44.089 [INFO][5908] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5caae547c56659473e7cbbb3afda497a5fb385690ae911da2b44308769a244e8" HandleID="k8s-pod-network.5caae547c56659473e7cbbb3afda497a5fb385690ae911da2b44308769a244e8" Workload="localhost-k8s-csi--node--driver--l9nqw-eth0" Nov 1 00:27:44.102419 containerd[1592]: 2025-11-01 00:27:44.089 [INFO][5908] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:27:44.102419 containerd[1592]: 2025-11-01 00:27:44.089 [INFO][5908] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:27:44.102419 containerd[1592]: 2025-11-01 00:27:44.094 [WARNING][5908] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5caae547c56659473e7cbbb3afda497a5fb385690ae911da2b44308769a244e8" HandleID="k8s-pod-network.5caae547c56659473e7cbbb3afda497a5fb385690ae911da2b44308769a244e8" Workload="localhost-k8s-csi--node--driver--l9nqw-eth0" Nov 1 00:27:44.102419 containerd[1592]: 2025-11-01 00:27:44.094 [INFO][5908] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5caae547c56659473e7cbbb3afda497a5fb385690ae911da2b44308769a244e8" HandleID="k8s-pod-network.5caae547c56659473e7cbbb3afda497a5fb385690ae911da2b44308769a244e8" Workload="localhost-k8s-csi--node--driver--l9nqw-eth0" Nov 1 00:27:44.102419 containerd[1592]: 2025-11-01 00:27:44.096 [INFO][5908] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:27:44.102419 containerd[1592]: 2025-11-01 00:27:44.099 [INFO][5899] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5caae547c56659473e7cbbb3afda497a5fb385690ae911da2b44308769a244e8" Nov 1 00:27:44.102419 containerd[1592]: time="2025-11-01T00:27:44.102196871Z" level=info msg="TearDown network for sandbox \"5caae547c56659473e7cbbb3afda497a5fb385690ae911da2b44308769a244e8\" successfully" Nov 1 00:27:44.102419 containerd[1592]: time="2025-11-01T00:27:44.102228241Z" level=info msg="StopPodSandbox for \"5caae547c56659473e7cbbb3afda497a5fb385690ae911da2b44308769a244e8\" returns successfully" Nov 1 00:27:44.102851 containerd[1592]: time="2025-11-01T00:27:44.102697811Z" level=info msg="RemovePodSandbox for \"5caae547c56659473e7cbbb3afda497a5fb385690ae911da2b44308769a244e8\"" Nov 1 00:27:44.102851 containerd[1592]: time="2025-11-01T00:27:44.102722308Z" level=info msg="Forcibly stopping sandbox \"5caae547c56659473e7cbbb3afda497a5fb385690ae911da2b44308769a244e8\"" Nov 1 00:27:44.330953 containerd[1592]: time="2025-11-01T00:27:44.330887971Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:27:44.415671 containerd[1592]: time="2025-11-01T00:27:44.415580868Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 00:27:44.415671 containerd[1592]: time="2025-11-01T00:27:44.415600836Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:27:44.415937 kubelet[2713]: E1101 00:27:44.415876 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:27:44.416352 kubelet[2713]: E1101 00:27:44.415947 2713 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:27:44.416352 kubelet[2713]: E1101 00:27:44.416278 2713 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jpqjq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-ksf9n_calico-system(41180a49-a14f-492f-9746-dfd093b11440): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:27:44.416499 containerd[1592]: time="2025-11-01T00:27:44.416335505Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:27:44.417718 kubelet[2713]: E1101 00:27:44.417665 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not 
found\"" pod="calico-system/goldmane-666569f655-ksf9n" podUID="41180a49-a14f-492f-9746-dfd093b11440" Nov 1 00:27:44.740941 containerd[1592]: 2025-11-01 00:27:44.539 [WARNING][5926] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5caae547c56659473e7cbbb3afda497a5fb385690ae911da2b44308769a244e8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--l9nqw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"675112ea-20ac-4b20-b92c-b74dc58b95cd", ResourceVersion:"1300", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 27, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a713033b97f168ddd57a416ccefb61a0073c7bc3d9c58cfe549a0c131506be48", Pod:"csi-node-driver-l9nqw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali84ca4c83ae7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:27:44.740941 containerd[1592]: 2025-11-01 00:27:44.540 [INFO][5926] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5caae547c56659473e7cbbb3afda497a5fb385690ae911da2b44308769a244e8" Nov 1 00:27:44.740941 containerd[1592]: 2025-11-01 00:27:44.540 [INFO][5926] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5caae547c56659473e7cbbb3afda497a5fb385690ae911da2b44308769a244e8" iface="eth0" netns="" Nov 1 00:27:44.740941 containerd[1592]: 2025-11-01 00:27:44.540 [INFO][5926] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5caae547c56659473e7cbbb3afda497a5fb385690ae911da2b44308769a244e8" Nov 1 00:27:44.740941 containerd[1592]: 2025-11-01 00:27:44.540 [INFO][5926] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5caae547c56659473e7cbbb3afda497a5fb385690ae911da2b44308769a244e8" Nov 1 00:27:44.740941 containerd[1592]: 2025-11-01 00:27:44.562 [INFO][5935] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5caae547c56659473e7cbbb3afda497a5fb385690ae911da2b44308769a244e8" HandleID="k8s-pod-network.5caae547c56659473e7cbbb3afda497a5fb385690ae911da2b44308769a244e8" Workload="localhost-k8s-csi--node--driver--l9nqw-eth0" Nov 1 00:27:44.740941 containerd[1592]: 2025-11-01 00:27:44.563 [INFO][5935] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:27:44.740941 containerd[1592]: 2025-11-01 00:27:44.563 [INFO][5935] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:27:44.740941 containerd[1592]: 2025-11-01 00:27:44.732 [WARNING][5935] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="5caae547c56659473e7cbbb3afda497a5fb385690ae911da2b44308769a244e8" HandleID="k8s-pod-network.5caae547c56659473e7cbbb3afda497a5fb385690ae911da2b44308769a244e8" Workload="localhost-k8s-csi--node--driver--l9nqw-eth0" Nov 1 00:27:44.740941 containerd[1592]: 2025-11-01 00:27:44.733 [INFO][5935] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5caae547c56659473e7cbbb3afda497a5fb385690ae911da2b44308769a244e8" HandleID="k8s-pod-network.5caae547c56659473e7cbbb3afda497a5fb385690ae911da2b44308769a244e8" Workload="localhost-k8s-csi--node--driver--l9nqw-eth0" Nov 1 00:27:44.740941 containerd[1592]: 2025-11-01 00:27:44.734 [INFO][5935] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:27:44.740941 containerd[1592]: 2025-11-01 00:27:44.737 [INFO][5926] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5caae547c56659473e7cbbb3afda497a5fb385690ae911da2b44308769a244e8" Nov 1 00:27:44.740941 containerd[1592]: time="2025-11-01T00:27:44.740875100Z" level=info msg="TearDown network for sandbox \"5caae547c56659473e7cbbb3afda497a5fb385690ae911da2b44308769a244e8\" successfully" Nov 1 00:27:44.924380 containerd[1592]: time="2025-11-01T00:27:44.924322946Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:27:45.124925 containerd[1592]: time="2025-11-01T00:27:45.124746925Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:27:45.124925 containerd[1592]: time="2025-11-01T00:27:45.124814435Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 00:27:45.125148 kubelet[2713]: E1101 00:27:45.125004 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:27:45.125148 kubelet[2713]: E1101 00:27:45.125106 2713 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:27:45.125352 kubelet[2713]: E1101 00:27:45.125288 2713 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ct9s2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-847b98fc4d-cw68d_calico-system(65680b46-920b-40e7-93fd-698ef81e20c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:27:45.126773 kubelet[2713]: E1101 00:27:45.126727 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-847b98fc4d-cw68d" podUID="65680b46-920b-40e7-93fd-698ef81e20c8" Nov 1 00:27:45.534140 containerd[1592]: time="2025-11-01T00:27:45.534071761Z" level=warning msg="Failed to get 
podSandbox status for container event for sandboxID \"5caae547c56659473e7cbbb3afda497a5fb385690ae911da2b44308769a244e8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 00:27:45.534140 containerd[1592]: time="2025-11-01T00:27:45.534151314Z" level=info msg="RemovePodSandbox \"5caae547c56659473e7cbbb3afda497a5fb385690ae911da2b44308769a244e8\" returns successfully" Nov 1 00:27:45.534754 containerd[1592]: time="2025-11-01T00:27:45.534715595Z" level=info msg="StopPodSandbox for \"520d7b921d970f346287c741c3e2a1b1906196dbeaa9792caf5dd8e417b1083a\"" Nov 1 00:27:45.641987 containerd[1592]: 2025-11-01 00:27:45.600 [WARNING][5959] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="520d7b921d970f346287c741c3e2a1b1906196dbeaa9792caf5dd8e417b1083a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--847b98fc4d--cw68d-eth0", GenerateName:"calico-kube-controllers-847b98fc4d-", Namespace:"calico-system", SelfLink:"", UID:"65680b46-920b-40e7-93fd-698ef81e20c8", ResourceVersion:"1174", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 27, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"847b98fc4d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e9e3b1743965acc54234b97e7b25346f2372bd602828ca3b2bf95488b0407e50", Pod:"calico-kube-controllers-847b98fc4d-cw68d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8c7f529bf96", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:27:45.641987 containerd[1592]: 2025-11-01 00:27:45.600 [INFO][5959] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="520d7b921d970f346287c741c3e2a1b1906196dbeaa9792caf5dd8e417b1083a" Nov 1 00:27:45.641987 containerd[1592]: 2025-11-01 00:27:45.600 [INFO][5959] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="520d7b921d970f346287c741c3e2a1b1906196dbeaa9792caf5dd8e417b1083a" iface="eth0" netns="" Nov 1 00:27:45.641987 containerd[1592]: 2025-11-01 00:27:45.600 [INFO][5959] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="520d7b921d970f346287c741c3e2a1b1906196dbeaa9792caf5dd8e417b1083a" Nov 1 00:27:45.641987 containerd[1592]: 2025-11-01 00:27:45.600 [INFO][5959] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="520d7b921d970f346287c741c3e2a1b1906196dbeaa9792caf5dd8e417b1083a" Nov 1 00:27:45.641987 containerd[1592]: 2025-11-01 00:27:45.627 [INFO][5968] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="520d7b921d970f346287c741c3e2a1b1906196dbeaa9792caf5dd8e417b1083a" HandleID="k8s-pod-network.520d7b921d970f346287c741c3e2a1b1906196dbeaa9792caf5dd8e417b1083a" Workload="localhost-k8s-calico--kube--controllers--847b98fc4d--cw68d-eth0" Nov 1 00:27:45.641987 containerd[1592]: 2025-11-01 00:27:45.627 [INFO][5968] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:27:45.641987 containerd[1592]: 2025-11-01 00:27:45.627 [INFO][5968] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:27:45.641987 containerd[1592]: 2025-11-01 00:27:45.633 [WARNING][5968] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="520d7b921d970f346287c741c3e2a1b1906196dbeaa9792caf5dd8e417b1083a" HandleID="k8s-pod-network.520d7b921d970f346287c741c3e2a1b1906196dbeaa9792caf5dd8e417b1083a" Workload="localhost-k8s-calico--kube--controllers--847b98fc4d--cw68d-eth0" Nov 1 00:27:45.641987 containerd[1592]: 2025-11-01 00:27:45.633 [INFO][5968] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="520d7b921d970f346287c741c3e2a1b1906196dbeaa9792caf5dd8e417b1083a" HandleID="k8s-pod-network.520d7b921d970f346287c741c3e2a1b1906196dbeaa9792caf5dd8e417b1083a" Workload="localhost-k8s-calico--kube--controllers--847b98fc4d--cw68d-eth0" Nov 1 00:27:45.641987 containerd[1592]: 2025-11-01 00:27:45.635 [INFO][5968] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:27:45.641987 containerd[1592]: 2025-11-01 00:27:45.638 [INFO][5959] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="520d7b921d970f346287c741c3e2a1b1906196dbeaa9792caf5dd8e417b1083a" Nov 1 00:27:45.642456 containerd[1592]: time="2025-11-01T00:27:45.642042181Z" level=info msg="TearDown network for sandbox \"520d7b921d970f346287c741c3e2a1b1906196dbeaa9792caf5dd8e417b1083a\" successfully" Nov 1 00:27:45.642456 containerd[1592]: time="2025-11-01T00:27:45.642082197Z" level=info msg="StopPodSandbox for \"520d7b921d970f346287c741c3e2a1b1906196dbeaa9792caf5dd8e417b1083a\" returns successfully" Nov 1 00:27:45.642608 containerd[1592]: time="2025-11-01T00:27:45.642577917Z" level=info msg="RemovePodSandbox for \"520d7b921d970f346287c741c3e2a1b1906196dbeaa9792caf5dd8e417b1083a\"" Nov 1 00:27:45.642650 containerd[1592]: time="2025-11-01T00:27:45.642614998Z" level=info msg="Forcibly stopping sandbox \"520d7b921d970f346287c741c3e2a1b1906196dbeaa9792caf5dd8e417b1083a\"" Nov 1 00:27:45.795772 containerd[1592]: 2025-11-01 00:27:45.757 [WARNING][5986] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="520d7b921d970f346287c741c3e2a1b1906196dbeaa9792caf5dd8e417b1083a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--847b98fc4d--cw68d-eth0", GenerateName:"calico-kube-controllers-847b98fc4d-", Namespace:"calico-system", SelfLink:"", UID:"65680b46-920b-40e7-93fd-698ef81e20c8", ResourceVersion:"1174", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 27, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"847b98fc4d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e9e3b1743965acc54234b97e7b25346f2372bd602828ca3b2bf95488b0407e50", Pod:"calico-kube-controllers-847b98fc4d-cw68d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8c7f529bf96", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:27:45.795772 containerd[1592]: 2025-11-01 00:27:45.758 [INFO][5986] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="520d7b921d970f346287c741c3e2a1b1906196dbeaa9792caf5dd8e417b1083a" Nov 1 00:27:45.795772 containerd[1592]: 2025-11-01 00:27:45.758 [INFO][5986] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="520d7b921d970f346287c741c3e2a1b1906196dbeaa9792caf5dd8e417b1083a" iface="eth0" netns="" Nov 1 00:27:45.795772 containerd[1592]: 2025-11-01 00:27:45.758 [INFO][5986] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="520d7b921d970f346287c741c3e2a1b1906196dbeaa9792caf5dd8e417b1083a" Nov 1 00:27:45.795772 containerd[1592]: 2025-11-01 00:27:45.758 [INFO][5986] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="520d7b921d970f346287c741c3e2a1b1906196dbeaa9792caf5dd8e417b1083a" Nov 1 00:27:45.795772 containerd[1592]: 2025-11-01 00:27:45.778 [INFO][5994] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="520d7b921d970f346287c741c3e2a1b1906196dbeaa9792caf5dd8e417b1083a" HandleID="k8s-pod-network.520d7b921d970f346287c741c3e2a1b1906196dbeaa9792caf5dd8e417b1083a" Workload="localhost-k8s-calico--kube--controllers--847b98fc4d--cw68d-eth0" Nov 1 00:27:45.795772 containerd[1592]: 2025-11-01 00:27:45.778 [INFO][5994] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:27:45.795772 containerd[1592]: 2025-11-01 00:27:45.778 [INFO][5994] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:27:45.795772 containerd[1592]: 2025-11-01 00:27:45.786 [WARNING][5994] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="520d7b921d970f346287c741c3e2a1b1906196dbeaa9792caf5dd8e417b1083a" HandleID="k8s-pod-network.520d7b921d970f346287c741c3e2a1b1906196dbeaa9792caf5dd8e417b1083a" Workload="localhost-k8s-calico--kube--controllers--847b98fc4d--cw68d-eth0" Nov 1 00:27:45.795772 containerd[1592]: 2025-11-01 00:27:45.786 [INFO][5994] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="520d7b921d970f346287c741c3e2a1b1906196dbeaa9792caf5dd8e417b1083a" HandleID="k8s-pod-network.520d7b921d970f346287c741c3e2a1b1906196dbeaa9792caf5dd8e417b1083a" Workload="localhost-k8s-calico--kube--controllers--847b98fc4d--cw68d-eth0" Nov 1 00:27:45.795772 containerd[1592]: 2025-11-01 00:27:45.788 [INFO][5994] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:27:45.795772 containerd[1592]: 2025-11-01 00:27:45.791 [INFO][5986] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="520d7b921d970f346287c741c3e2a1b1906196dbeaa9792caf5dd8e417b1083a" Nov 1 00:27:45.795772 containerd[1592]: time="2025-11-01T00:27:45.795723180Z" level=info msg="TearDown network for sandbox \"520d7b921d970f346287c741c3e2a1b1906196dbeaa9792caf5dd8e417b1083a\" successfully" Nov 1 00:27:45.866879 containerd[1592]: time="2025-11-01T00:27:45.866787030Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"520d7b921d970f346287c741c3e2a1b1906196dbeaa9792caf5dd8e417b1083a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 00:27:45.867039 containerd[1592]: time="2025-11-01T00:27:45.866902871Z" level=info msg="RemovePodSandbox \"520d7b921d970f346287c741c3e2a1b1906196dbeaa9792caf5dd8e417b1083a\" returns successfully" Nov 1 00:27:46.897373 containerd[1592]: time="2025-11-01T00:27:46.897114216Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:27:47.368283 containerd[1592]: time="2025-11-01T00:27:47.368125061Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:27:47.471538 containerd[1592]: time="2025-11-01T00:27:47.471256464Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:27:47.471538 containerd[1592]: time="2025-11-01T00:27:47.471347279Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:27:47.473235 kubelet[2713]: E1101 00:27:47.472928 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:27:47.473235 kubelet[2713]: E1101 00:27:47.473009 2713 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:27:47.473235 kubelet[2713]: E1101 00:27:47.473217 2713 kuberuntime_manager.go:1341] "Unhandled Error" 
err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fw949,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-65fc45bf6-gxk8k_calico-apiserver(8ee5c64c-a2fc-4dbd-b9a2-77e7c3c0f3b8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:27:47.475044 kubelet[2713]: E1101 00:27:47.474898 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65fc45bf6-gxk8k" podUID="8ee5c64c-a2fc-4dbd-b9a2-77e7c3c0f3b8" Nov 1 00:27:48.161376 systemd[1]: Started sshd@14-10.0.0.119:22-10.0.0.1:50338.service - OpenSSH per-connection server daemon (10.0.0.1:50338). Nov 1 00:27:48.194179 sshd[6010]: Accepted publickey for core from 10.0.0.1 port 50338 ssh2: RSA SHA256:jhdmrhZXRbwOcBbBZQf03C3Vb9eSOfvUjDz7MOKA8jE Nov 1 00:27:48.196045 sshd[6010]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:27:48.200412 systemd-logind[1577]: New session 15 of user core. Nov 1 00:27:48.210381 systemd[1]: Started session-15.scope - Session 15 of User core. 
Nov 1 00:27:48.350885 sshd[6010]: pam_unix(sshd:session): session closed for user core Nov 1 00:27:48.355321 systemd[1]: sshd@14-10.0.0.119:22-10.0.0.1:50338.service: Deactivated successfully. Nov 1 00:27:48.358509 systemd-logind[1577]: Session 15 logged out. Waiting for processes to exit. Nov 1 00:27:48.358648 systemd[1]: session-15.scope: Deactivated successfully. Nov 1 00:27:48.360259 systemd-logind[1577]: Removed session 15. Nov 1 00:27:49.896708 kubelet[2713]: E1101 00:27:49.896644 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-67d8fdc769-sqbcf" podUID="38e58461-b45b-46ed-b68d-38eb9fdd6911" Nov 1 00:27:50.896350 kubelet[2713]: E1101 00:27:50.896289 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:27:51.896622 kubelet[2713]: E1101 00:27:51.896570 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5fc976b46c-8vsw4" podUID="4d8b0a34-66bd-4c22-a438-b5e5354489a4" Nov 1 00:27:53.365516 systemd[1]: Started sshd@15-10.0.0.119:22-10.0.0.1:36966.service - OpenSSH per-connection server daemon (10.0.0.1:36966). Nov 1 00:27:53.398832 sshd[6028]: Accepted publickey for core from 10.0.0.1 port 36966 ssh2: RSA SHA256:jhdmrhZXRbwOcBbBZQf03C3Vb9eSOfvUjDz7MOKA8jE Nov 1 00:27:53.401412 sshd[6028]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:27:53.407453 systemd-logind[1577]: New session 16 of user core. Nov 1 00:27:53.417388 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 1 00:27:53.532605 sshd[6028]: pam_unix(sshd:session): session closed for user core Nov 1 00:27:53.536792 systemd[1]: sshd@15-10.0.0.119:22-10.0.0.1:36966.service: Deactivated successfully. Nov 1 00:27:53.539691 systemd-logind[1577]: Session 16 logged out. Waiting for processes to exit. Nov 1 00:27:53.539810 systemd[1]: session-16.scope: Deactivated successfully. Nov 1 00:27:53.541084 systemd-logind[1577]: Removed session 16. 
Nov 1 00:27:53.896473 kubelet[2713]: E1101 00:27:53.896411 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65fc45bf6-dz8ms" podUID="a7329f0c-4569-4192-ab57-1ba0d9bc5c3f" Nov 1 00:27:55.897098 kubelet[2713]: E1101 00:27:55.896999 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-l9nqw" podUID="675112ea-20ac-4b20-b92c-b74dc58b95cd" Nov 1 00:27:57.896952 kubelet[2713]: E1101 00:27:57.896915 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:27:57.897963 kubelet[2713]: E1101 00:27:57.897248 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-847b98fc4d-cw68d" podUID="65680b46-920b-40e7-93fd-698ef81e20c8" Nov 1 00:27:58.540502 systemd[1]: Started sshd@16-10.0.0.119:22-10.0.0.1:36980.service - OpenSSH per-connection server daemon (10.0.0.1:36980). Nov 1 00:27:58.576359 sshd[6066]: Accepted publickey for core from 10.0.0.1 port 36980 ssh2: RSA SHA256:jhdmrhZXRbwOcBbBZQf03C3Vb9eSOfvUjDz7MOKA8jE Nov 1 00:27:58.578329 sshd[6066]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:27:58.583570 systemd-logind[1577]: New session 17 of user core. Nov 1 00:27:58.593704 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 1 00:27:58.727438 sshd[6066]: pam_unix(sshd:session): session closed for user core Nov 1 00:27:58.738265 systemd[1]: Started sshd@17-10.0.0.119:22-10.0.0.1:36984.service - OpenSSH per-connection server daemon (10.0.0.1:36984). Nov 1 00:27:58.738777 systemd[1]: sshd@16-10.0.0.119:22-10.0.0.1:36980.service: Deactivated successfully. 
Nov 1 00:27:58.743045 systemd-logind[1577]: Session 17 logged out. Waiting for processes to exit. Nov 1 00:27:58.743975 systemd[1]: session-17.scope: Deactivated successfully. Nov 1 00:27:58.744990 systemd-logind[1577]: Removed session 17. Nov 1 00:27:58.768996 sshd[6078]: Accepted publickey for core from 10.0.0.1 port 36984 ssh2: RSA SHA256:jhdmrhZXRbwOcBbBZQf03C3Vb9eSOfvUjDz7MOKA8jE Nov 1 00:27:58.770743 sshd[6078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:27:58.775124 systemd-logind[1577]: New session 18 of user core. Nov 1 00:27:58.785348 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 1 00:27:58.896382 kubelet[2713]: E1101 00:27:58.896315 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ksf9n" podUID="41180a49-a14f-492f-9746-dfd093b11440" Nov 1 00:27:59.156944 sshd[6078]: pam_unix(sshd:session): session closed for user core Nov 1 00:27:59.166351 systemd[1]: Started sshd@18-10.0.0.119:22-10.0.0.1:36992.service - OpenSSH per-connection server daemon (10.0.0.1:36992). Nov 1 00:27:59.167251 systemd[1]: sshd@17-10.0.0.119:22-10.0.0.1:36984.service: Deactivated successfully. Nov 1 00:27:59.171462 systemd[1]: session-18.scope: Deactivated successfully. Nov 1 00:27:59.171827 systemd-logind[1577]: Session 18 logged out. Waiting for processes to exit. Nov 1 00:27:59.174860 systemd-logind[1577]: Removed session 18. Nov 1 00:27:59.209656 sshd[6091]: Accepted publickey for core from 10.0.0.1 port 36992 ssh2: RSA SHA256:jhdmrhZXRbwOcBbBZQf03C3Vb9eSOfvUjDz7MOKA8jE Nov 1 00:27:59.212061 sshd[6091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:27:59.217111 systemd-logind[1577]: New session 19 of user core. Nov 1 00:27:59.227343 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 1 00:27:59.874120 sshd[6091]: pam_unix(sshd:session): session closed for user core Nov 1 00:27:59.884585 systemd[1]: Started sshd@19-10.0.0.119:22-10.0.0.1:37002.service - OpenSSH per-connection server daemon (10.0.0.1:37002). Nov 1 00:27:59.886938 systemd[1]: sshd@18-10.0.0.119:22-10.0.0.1:36992.service: Deactivated successfully. Nov 1 00:27:59.890446 systemd[1]: session-19.scope: Deactivated successfully. Nov 1 00:27:59.897744 systemd-logind[1577]: Session 19 logged out. Waiting for processes to exit. Nov 1 00:27:59.899220 systemd-logind[1577]: Removed session 19. Nov 1 00:27:59.923245 sshd[6111]: Accepted publickey for core from 10.0.0.1 port 37002 ssh2: RSA SHA256:jhdmrhZXRbwOcBbBZQf03C3Vb9eSOfvUjDz7MOKA8jE Nov 1 00:27:59.925317 sshd[6111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:27:59.930457 systemd-logind[1577]: New session 20 of user core. Nov 1 00:27:59.941343 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 1 00:28:00.196226 sshd[6111]: pam_unix(sshd:session): session closed for user core Nov 1 00:28:00.209239 systemd[1]: Started sshd@20-10.0.0.119:22-10.0.0.1:37018.service - OpenSSH per-connection server daemon (10.0.0.1:37018). 
Nov 1 00:28:00.209909 systemd[1]: sshd@19-10.0.0.119:22-10.0.0.1:37002.service: Deactivated successfully. Nov 1 00:28:00.212973 systemd[1]: session-20.scope: Deactivated successfully. Nov 1 00:28:00.213677 systemd-logind[1577]: Session 20 logged out. Waiting for processes to exit. Nov 1 00:28:00.215507 systemd-logind[1577]: Removed session 20. Nov 1 00:28:00.240653 sshd[6126]: Accepted publickey for core from 10.0.0.1 port 37018 ssh2: RSA SHA256:jhdmrhZXRbwOcBbBZQf03C3Vb9eSOfvUjDz7MOKA8jE Nov 1 00:28:00.242534 sshd[6126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:28:00.247235 systemd-logind[1577]: New session 21 of user core. Nov 1 00:28:00.255326 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 1 00:28:00.385037 sshd[6126]: pam_unix(sshd:session): session closed for user core Nov 1 00:28:00.390757 systemd[1]: sshd@20-10.0.0.119:22-10.0.0.1:37018.service: Deactivated successfully. Nov 1 00:28:00.394999 systemd-logind[1577]: Session 21 logged out. Waiting for processes to exit. Nov 1 00:28:00.395559 systemd[1]: session-21.scope: Deactivated successfully. Nov 1 00:28:00.398206 systemd-logind[1577]: Removed session 21. Nov 1 00:28:01.896165 kubelet[2713]: E1101 00:28:01.896101 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65fc45bf6-gxk8k" podUID="8ee5c64c-a2fc-4dbd-b9a2-77e7c3c0f3b8" Nov 1 00:28:03.896465 containerd[1592]: time="2025-11-01T00:28:03.896362052Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:28:04.229654 containerd[1592]: time="2025-11-01T00:28:04.229451835Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:28:04.230886 containerd[1592]: time="2025-11-01T00:28:04.230840094Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:28:04.231060 containerd[1592]: time="2025-11-01T00:28:04.230938933Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 00:28:04.231238 kubelet[2713]: E1101 00:28:04.231157 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:28:04.231238 kubelet[2713]: E1101 00:28:04.231233 2713 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:28:04.231800 
kubelet[2713]: E1101 00:28:04.231393 2713 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:2e2a5e81cfcc4cb2aa655a270487e254,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rghb5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-67d8fdc769-sqbcf_calico-system(38e58461-b45b-46ed-b68d-38eb9fdd6911): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:28:04.233673 containerd[1592]: time="2025-11-01T00:28:04.233640678Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:28:04.560677 containerd[1592]: time="2025-11-01T00:28:04.560516637Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:28:04.561867 containerd[1592]: time="2025-11-01T00:28:04.561833299Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:28:04.561940 containerd[1592]: time="2025-11-01T00:28:04.561904074Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 00:28:04.562131 kubelet[2713]: E1101 00:28:04.562075 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:28:04.562207 kubelet[2713]: E1101 00:28:04.562156 2713 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:28:04.562331 kubelet[2713]: E1101 00:28:04.562295 2713 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rghb5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-67d8fdc769-sqbcf_calico-system(38e58461-b45b-46ed-b68d-38eb9fdd6911): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:28:04.563556 kubelet[2713]: E1101 00:28:04.563456 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-67d8fdc769-sqbcf" podUID="38e58461-b45b-46ed-b68d-38eb9fdd6911" Nov 1 00:28:04.897709 containerd[1592]: time="2025-11-01T00:28:04.897525587Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:28:05.228791 containerd[1592]: 
time="2025-11-01T00:28:05.228642727Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:28:05.253235 containerd[1592]: time="2025-11-01T00:28:05.253172060Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:28:05.253306 containerd[1592]: time="2025-11-01T00:28:05.253240881Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:28:05.253585 kubelet[2713]: E1101 00:28:05.253501 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:28:05.253585 kubelet[2713]: E1101 00:28:05.253566 2713 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:28:05.254641 kubelet[2713]: E1101 00:28:05.253814 2713 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xvlq9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-65fc45bf6-dz8ms_calico-apiserver(a7329f0c-4569-4192-ab57-1ba0d9bc5c3f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:28:05.254770 containerd[1592]: time="2025-11-01T00:28:05.253941342Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:28:05.255846 kubelet[2713]: E1101 00:28:05.255814 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65fc45bf6-dz8ms" podUID="a7329f0c-4569-4192-ab57-1ba0d9bc5c3f" Nov 1 00:28:05.394254 systemd[1]: Started sshd@21-10.0.0.119:22-10.0.0.1:35902.service - OpenSSH per-connection server daemon (10.0.0.1:35902). Nov 1 00:28:05.428038 sshd[6150]: Accepted publickey for core from 10.0.0.1 port 35902 ssh2: RSA SHA256:jhdmrhZXRbwOcBbBZQf03C3Vb9eSOfvUjDz7MOKA8jE Nov 1 00:28:05.429851 sshd[6150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:28:05.434161 systemd-logind[1577]: New session 22 of user core. Nov 1 00:28:05.446348 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 1 00:28:05.609100 sshd[6150]: pam_unix(sshd:session): session closed for user core Nov 1 00:28:05.614543 systemd[1]: sshd@21-10.0.0.119:22-10.0.0.1:35902.service: Deactivated successfully. Nov 1 00:28:05.618000 systemd[1]: session-22.scope: Deactivated successfully. Nov 1 00:28:05.618943 systemd-logind[1577]: Session 22 logged out. Waiting for processes to exit. Nov 1 00:28:05.620445 systemd-logind[1577]: Removed session 22. 
Nov 1 00:28:05.677498 containerd[1592]: time="2025-11-01T00:28:05.677433144Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 1 00:28:05.974891 containerd[1592]: time="2025-11-01T00:28:05.974793546Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 1 00:28:05.974891 containerd[1592]: time="2025-11-01T00:28:05.974883587Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 1 00:28:05.975473 kubelet[2713]: E1101 00:28:05.975050 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 1 00:28:05.975473 kubelet[2713]: E1101 00:28:05.975099 2713 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 1 00:28:05.975473 kubelet[2713]: E1101 00:28:05.975262 2713 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xnnng,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5fc976b46c-8vsw4_calico-apiserver(4d8b0a34-66bd-4c22-a438-b5e5354489a4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 1 00:28:05.977341 kubelet[2713]: E1101 00:28:05.977097 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5fc976b46c-8vsw4" podUID="4d8b0a34-66bd-4c22-a438-b5e5354489a4"
Nov 1 00:28:06.895205 kubelet[2713]: E1101 00:28:06.895161 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:28:09.897492 containerd[1592]: time="2025-11-01T00:28:09.897037042Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Nov 1 00:28:10.251007 containerd[1592]: time="2025-11-01T00:28:10.250846628Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 1 00:28:10.321988 containerd[1592]: time="2025-11-01T00:28:10.321902872Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Nov 1 00:28:10.321988 containerd[1592]: time="2025-11-01T00:28:10.321953608Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Nov 1 00:28:10.322315 kubelet[2713]: E1101 00:28:10.322263 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Nov 1 00:28:10.322729 kubelet[2713]: E1101 00:28:10.322332 2713 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Nov 1 00:28:10.322729 kubelet[2713]: E1101 00:28:10.322502 2713 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jpqjq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-ksf9n_calico-system(41180a49-a14f-492f-9746-dfd093b11440): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Nov 1 00:28:10.324153 kubelet[2713]: E1101 00:28:10.324089 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ksf9n" podUID="41180a49-a14f-492f-9746-dfd093b11440"
Nov 1 00:28:10.621258 systemd[1]: Started sshd@22-10.0.0.119:22-10.0.0.1:35908.service - OpenSSH per-connection server daemon (10.0.0.1:35908).
Nov 1 00:28:10.651762 sshd[6167]: Accepted publickey for core from 10.0.0.1 port 35908 ssh2: RSA SHA256:jhdmrhZXRbwOcBbBZQf03C3Vb9eSOfvUjDz7MOKA8jE
Nov 1 00:28:10.653628 sshd[6167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:28:10.658470 systemd-logind[1577]: New session 23 of user core.
Nov 1 00:28:10.670387 systemd[1]: Started session-23.scope - Session 23 of User core.
Nov 1 00:28:10.790422 sshd[6167]: pam_unix(sshd:session): session closed for user core
Nov 1 00:28:10.795398 systemd[1]: sshd@22-10.0.0.119:22-10.0.0.1:35908.service: Deactivated successfully.
Nov 1 00:28:10.798240 systemd-logind[1577]: Session 23 logged out. Waiting for processes to exit.
Nov 1 00:28:10.798374 systemd[1]: session-23.scope: Deactivated successfully.
Nov 1 00:28:10.799705 systemd-logind[1577]: Removed session 23.
Nov 1 00:28:10.897180 containerd[1592]: time="2025-11-01T00:28:10.897048607Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Nov 1 00:28:11.211897 containerd[1592]: time="2025-11-01T00:28:11.211739346Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 1 00:28:11.327754 containerd[1592]: time="2025-11-01T00:28:11.327664282Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Nov 1 00:28:11.327919 containerd[1592]: time="2025-11-01T00:28:11.327736038Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Nov 1 00:28:11.328137 kubelet[2713]: E1101 00:28:11.328080 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 1 00:28:11.328529 kubelet[2713]: E1101 00:28:11.328151 2713 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 1 00:28:11.328529 kubelet[2713]: E1101 00:28:11.328295 2713 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pgnb4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-l9nqw_calico-system(675112ea-20ac-4b20-b92c-b74dc58b95cd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Nov 1 00:28:11.330436 containerd[1592]: time="2025-11-01T00:28:11.330397037Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Nov 1 00:28:11.808490 containerd[1592]: time="2025-11-01T00:28:11.808416387Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 1 00:28:11.809691 containerd[1592]: time="2025-11-01T00:28:11.809653544Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Nov 1 00:28:11.809789 containerd[1592]: time="2025-11-01T00:28:11.809705914Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Nov 1 00:28:11.810001 kubelet[2713]: E1101 00:28:11.809926 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 1 00:28:11.810092 kubelet[2713]: E1101 00:28:11.810007 2713 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 1 00:28:11.810253 kubelet[2713]: E1101 00:28:11.810203 2713 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pgnb4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-l9nqw_calico-system(675112ea-20ac-4b20-b92c-b74dc58b95cd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Nov 1 00:28:11.811453 kubelet[2713]: E1101 00:28:11.811397 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-l9nqw" podUID="675112ea-20ac-4b20-b92c-b74dc58b95cd"
Nov 1 00:28:12.897782 containerd[1592]: time="2025-11-01T00:28:12.897706283Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Nov 1 00:28:13.211790 containerd[1592]: time="2025-11-01T00:28:13.211644146Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 1 00:28:13.242598 containerd[1592]: time="2025-11-01T00:28:13.242524895Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Nov 1 00:28:13.242691 containerd[1592]: time="2025-11-01T00:28:13.242570321Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Nov 1 00:28:13.242840 kubelet[2713]: E1101 00:28:13.242784 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Nov 1 00:28:13.243329 kubelet[2713]: E1101 00:28:13.242843 2713 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Nov 1 00:28:13.243329 kubelet[2713]: E1101 00:28:13.243043 2713 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ct9s2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-847b98fc4d-cw68d_calico-system(65680b46-920b-40e7-93fd-698ef81e20c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Nov 1 00:28:13.244270 kubelet[2713]: E1101 00:28:13.244239 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-847b98fc4d-cw68d" podUID="65680b46-920b-40e7-93fd-698ef81e20c8"
Nov 1 00:28:13.896349 containerd[1592]: time="2025-11-01T00:28:13.896281174Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 1 00:28:14.245567 containerd[1592]: time="2025-11-01T00:28:14.245403346Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 1 00:28:14.246767 containerd[1592]: time="2025-11-01T00:28:14.246715643Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 1 00:28:14.246835 containerd[1592]: time="2025-11-01T00:28:14.246770338Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 1 00:28:14.247008 kubelet[2713]: E1101 00:28:14.246959 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 1 00:28:14.247347 kubelet[2713]: E1101 00:28:14.247045 2713 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 1 00:28:14.247347 kubelet[2713]: E1101 00:28:14.247221 2713 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fw949,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-65fc45bf6-gxk8k_calico-apiserver(8ee5c64c-a2fc-4dbd-b9a2-77e7c3c0f3b8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 1 00:28:14.248417 kubelet[2713]: E1101 00:28:14.248377 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65fc45bf6-gxk8k" podUID="8ee5c64c-a2fc-4dbd-b9a2-77e7c3c0f3b8"
Nov 1 00:28:15.805342 systemd[1]: Started sshd@23-10.0.0.119:22-10.0.0.1:60718.service - OpenSSH per-connection server daemon (10.0.0.1:60718).
Nov 1 00:28:15.834157 sshd[6184]: Accepted publickey for core from 10.0.0.1 port 60718 ssh2: RSA SHA256:jhdmrhZXRbwOcBbBZQf03C3Vb9eSOfvUjDz7MOKA8jE
Nov 1 00:28:15.836351 sshd[6184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:28:15.840612 systemd-logind[1577]: New session 24 of user core.
Nov 1 00:28:15.851312 systemd[1]: Started session-24.scope - Session 24 of User core.
Nov 1 00:28:15.896582 kubelet[2713]: E1101 00:28:15.896507 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65fc45bf6-dz8ms" podUID="a7329f0c-4569-4192-ab57-1ba0d9bc5c3f"
Nov 1 00:28:15.897202 kubelet[2713]: E1101 00:28:15.896857 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-67d8fdc769-sqbcf" podUID="38e58461-b45b-46ed-b68d-38eb9fdd6911"
Nov 1 00:28:16.059608 sshd[6184]: pam_unix(sshd:session): session closed for user core
Nov 1 00:28:16.064668 systemd[1]: sshd@23-10.0.0.119:22-10.0.0.1:60718.service: Deactivated successfully.
Nov 1 00:28:16.070554 systemd[1]: session-24.scope: Deactivated successfully.
Nov 1 00:28:16.071388 systemd-logind[1577]: Session 24 logged out. Waiting for processes to exit.
Nov 1 00:28:16.072469 systemd-logind[1577]: Removed session 24.
Nov 1 00:28:16.895190 kubelet[2713]: E1101 00:28:16.895075 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:28:17.895787 kubelet[2713]: E1101 00:28:17.895645 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5fc976b46c-8vsw4" podUID="4d8b0a34-66bd-4c22-a438-b5e5354489a4"
Nov 1 00:28:21.068378 systemd[1]: Started sshd@24-10.0.0.119:22-10.0.0.1:60728.service - OpenSSH per-connection server daemon (10.0.0.1:60728).
Nov 1 00:28:21.105868 sshd[6201]: Accepted publickey for core from 10.0.0.1 port 60728 ssh2: RSA SHA256:jhdmrhZXRbwOcBbBZQf03C3Vb9eSOfvUjDz7MOKA8jE
Nov 1 00:28:21.108168 sshd[6201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:28:21.115648 systemd-logind[1577]: New session 25 of user core.
Nov 1 00:28:21.119522 systemd[1]: Started session-25.scope - Session 25 of User core.
Nov 1 00:28:21.248857 sshd[6201]: pam_unix(sshd:session): session closed for user core
Nov 1 00:28:21.253252 systemd[1]: sshd@24-10.0.0.119:22-10.0.0.1:60728.service: Deactivated successfully.
Nov 1 00:28:21.256013 systemd-logind[1577]: Session 25 logged out. Waiting for processes to exit.
Nov 1 00:28:21.256192 systemd[1]: session-25.scope: Deactivated successfully.
Nov 1 00:28:21.257839 systemd-logind[1577]: Removed session 25.
Nov 1 00:28:22.897923 kubelet[2713]: E1101 00:28:22.897849 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-l9nqw" podUID="675112ea-20ac-4b20-b92c-b74dc58b95cd"