Oct 31 00:46:41.947767 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Oct 30 22:59:39 -00 2025
Oct 31 00:46:41.947803 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=950876ad7bc3e9634b7585a81697da4ef03ac6558969e5c002165369dd7c7885
Oct 31 00:46:41.947837 kernel: BIOS-provided physical RAM map:
Oct 31 00:46:41.947854 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Oct 31 00:46:41.947865 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Oct 31 00:46:41.947879 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Oct 31 00:46:41.947896 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Oct 31 00:46:41.947912 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Oct 31 00:46:41.947927 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Oct 31 00:46:41.947942 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Oct 31 00:46:41.947978 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Oct 31 00:46:41.947995 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Oct 31 00:46:41.948010 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Oct 31 00:46:41.948025 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Oct 31 00:46:41.948044 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Oct 31 00:46:41.948061 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Oct 31 00:46:41.948083 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Oct 31 00:46:41.948099 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Oct 31 00:46:41.948116 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Oct 31 00:46:41.948134 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Oct 31 00:46:41.948145 kernel: NX (Execute Disable) protection: active
Oct 31 00:46:41.948155 kernel: APIC: Static calls initialized
Oct 31 00:46:41.948164 kernel: efi: EFI v2.7 by EDK II
Oct 31 00:46:41.948173 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198
Oct 31 00:46:41.948187 kernel: SMBIOS 2.8 present.
Oct 31 00:46:41.948199 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Oct 31 00:46:41.948208 kernel: Hypervisor detected: KVM
Oct 31 00:46:41.948221 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 31 00:46:41.948228 kernel: kvm-clock: using sched offset of 6142617654 cycles
Oct 31 00:46:41.948235 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 31 00:46:41.948242 kernel: tsc: Detected 2794.748 MHz processor
Oct 31 00:46:41.948249 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 31 00:46:41.948257 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 31 00:46:41.948264 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Oct 31 00:46:41.948271 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Oct 31 00:46:41.948278 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Oct 31 00:46:41.948287 kernel: Using GB pages for direct mapping
Oct 31 00:46:41.948294 kernel: Secure boot disabled
Oct 31 00:46:41.948301 kernel: ACPI: Early table checksum verification disabled
Oct 31 00:46:41.948308 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Oct 31 00:46:41.948319 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Oct 31 00:46:41.948326 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Oct 31 00:46:41.948333 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 31 00:46:41.948343 kernel: ACPI: FACS 0x000000009CBDD000 000040
Oct 31 00:46:41.948350 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 31 00:46:41.948357 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 31 00:46:41.948364 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 31 00:46:41.948372 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 31 00:46:41.948379 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Oct 31 00:46:41.948386 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Oct 31 00:46:41.948395 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Oct 31 00:46:41.948403 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Oct 31 00:46:41.948410 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Oct 31 00:46:41.948417 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Oct 31 00:46:41.948424 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Oct 31 00:46:41.948431 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Oct 31 00:46:41.948439 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Oct 31 00:46:41.948446 kernel: No NUMA configuration found
Oct 31 00:46:41.948453 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Oct 31 00:46:41.948462 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Oct 31 00:46:41.948470 kernel: Zone ranges:
Oct 31 00:46:41.948477 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Oct 31 00:46:41.948485 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Oct 31 00:46:41.948492 kernel: Normal empty
Oct 31 00:46:41.948499 kernel: Movable zone start for each node
Oct 31 00:46:41.948506 kernel: Early memory node ranges
Oct 31 00:46:41.948513 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Oct 31 00:46:41.948522 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Oct 31 00:46:41.948532 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Oct 31 00:46:41.948544 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Oct 31 00:46:41.948552 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Oct 31 00:46:41.948561 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Oct 31 00:46:41.948570 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Oct 31 00:46:41.948579 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 31 00:46:41.948588 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Oct 31 00:46:41.948597 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Oct 31 00:46:41.948606 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 31 00:46:41.948614 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Oct 31 00:46:41.948626 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Oct 31 00:46:41.948635 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Oct 31 00:46:41.948644 kernel: ACPI: PM-Timer IO Port: 0x608
Oct 31 00:46:41.948653 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 31 00:46:41.948662 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 31 00:46:41.948671 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 31 00:46:41.948680 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 31 00:46:41.948689 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 31 00:46:41.948699 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 31 00:46:41.948710 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 31 00:46:41.948719 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 31 00:46:41.948728 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Oct 31 00:46:41.948737 kernel: TSC deadline timer available
Oct 31 00:46:41.948746 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Oct 31 00:46:41.948755 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct 31 00:46:41.948764 kernel: kvm-guest: KVM setup pv remote TLB flush
Oct 31 00:46:41.948773 kernel: kvm-guest: setup PV sched yield
Oct 31 00:46:41.948781 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Oct 31 00:46:41.948791 kernel: Booting paravirtualized kernel on KVM
Oct 31 00:46:41.948802 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 31 00:46:41.948811 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Oct 31 00:46:41.948820 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u524288
Oct 31 00:46:41.948840 kernel: pcpu-alloc: s196712 r8192 d32664 u524288 alloc=1*2097152
Oct 31 00:46:41.948849 kernel: pcpu-alloc: [0] 0 1 2 3
Oct 31 00:46:41.948858 kernel: kvm-guest: PV spinlocks enabled
Oct 31 00:46:41.948868 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Oct 31 00:46:41.948876 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=950876ad7bc3e9634b7585a81697da4ef03ac6558969e5c002165369dd7c7885
Oct 31 00:46:41.948887 kernel: random: crng init done
Oct 31 00:46:41.948895 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 31 00:46:41.948902 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 31 00:46:41.948909 kernel: Fallback order for Node 0: 0
Oct 31 00:46:41.948917 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Oct 31 00:46:41.948924 kernel: Policy zone: DMA32
Oct 31 00:46:41.948931 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 31 00:46:41.948939 kernel: Memory: 2400600K/2567000K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42880K init, 2320K bss, 166140K reserved, 0K cma-reserved)
Oct 31 00:46:41.948946 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Oct 31 00:46:41.948972 kernel: ftrace: allocating 37980 entries in 149 pages
Oct 31 00:46:41.948982 kernel: ftrace: allocated 149 pages with 4 groups
Oct 31 00:46:41.948989 kernel: Dynamic Preempt: voluntary
Oct 31 00:46:41.948997 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 31 00:46:41.949013 kernel: rcu: RCU event tracing is enabled.
Oct 31 00:46:41.949022 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Oct 31 00:46:41.949030 kernel: Trampoline variant of Tasks RCU enabled.
Oct 31 00:46:41.949038 kernel: Rude variant of Tasks RCU enabled.
Oct 31 00:46:41.949045 kernel: Tracing variant of Tasks RCU enabled.
Oct 31 00:46:41.949053 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 31 00:46:41.949060 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Oct 31 00:46:41.949068 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Oct 31 00:46:41.949078 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 31 00:46:41.949086 kernel: Console: colour dummy device 80x25
Oct 31 00:46:41.949093 kernel: printk: console [ttyS0] enabled
Oct 31 00:46:41.949101 kernel: ACPI: Core revision 20230628
Oct 31 00:46:41.949109 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Oct 31 00:46:41.949119 kernel: APIC: Switch to symmetric I/O mode setup
Oct 31 00:46:41.949126 kernel: x2apic enabled
Oct 31 00:46:41.949134 kernel: APIC: Switched APIC routing to: physical x2apic
Oct 31 00:46:41.949142 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Oct 31 00:46:41.949150 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Oct 31 00:46:41.949157 kernel: kvm-guest: setup PV IPIs
Oct 31 00:46:41.949165 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Oct 31 00:46:41.949172 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Oct 31 00:46:41.949181 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Oct 31 00:46:41.949194 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct 31 00:46:41.949203 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct 31 00:46:41.949213 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct 31 00:46:41.949222 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 31 00:46:41.949232 kernel: Spectre V2 : Mitigation: Retpolines
Oct 31 00:46:41.949241 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Oct 31 00:46:41.949251 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Oct 31 00:46:41.949260 kernel: active return thunk: retbleed_return_thunk
Oct 31 00:46:41.949272 kernel: RETBleed: Mitigation: untrained return thunk
Oct 31 00:46:41.949281 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 31 00:46:41.949291 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 31 00:46:41.949300 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Oct 31 00:46:41.949311 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Oct 31 00:46:41.949321 kernel: active return thunk: srso_return_thunk
Oct 31 00:46:41.949330 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Oct 31 00:46:41.949340 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 31 00:46:41.949349 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 31 00:46:41.949361 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 31 00:46:41.949370 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Oct 31 00:46:41.949380 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Oct 31 00:46:41.949390 kernel: Freeing SMP alternatives memory: 32K
Oct 31 00:46:41.949399 kernel: pid_max: default: 32768 minimum: 301
Oct 31 00:46:41.949408 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Oct 31 00:46:41.949418 kernel: landlock: Up and running.
Oct 31 00:46:41.949427 kernel: SELinux: Initializing.
Oct 31 00:46:41.949436 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 31 00:46:41.949449 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 31 00:46:41.949458 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Oct 31 00:46:41.949468 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 31 00:46:41.949477 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 31 00:46:41.949487 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 31 00:46:41.949496 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct 31 00:46:41.949505 kernel: ... version: 0
Oct 31 00:46:41.949516 kernel: ... bit width: 48
Oct 31 00:46:41.949526 kernel: ... generic registers: 6
Oct 31 00:46:41.949534 kernel: ... value mask: 0000ffffffffffff
Oct 31 00:46:41.949542 kernel: ... max period: 00007fffffffffff
Oct 31 00:46:41.949549 kernel: ... fixed-purpose events: 0
Oct 31 00:46:41.949557 kernel: ... event mask: 000000000000003f
Oct 31 00:46:41.949564 kernel: signal: max sigframe size: 1776
Oct 31 00:46:41.949572 kernel: rcu: Hierarchical SRCU implementation.
Oct 31 00:46:41.949580 kernel: rcu: Max phase no-delay instances is 400.
Oct 31 00:46:41.949587 kernel: smp: Bringing up secondary CPUs ...
Oct 31 00:46:41.949595 kernel: smpboot: x86: Booting SMP configuration:
Oct 31 00:46:41.949605 kernel: .... node #0, CPUs: #1 #2 #3
Oct 31 00:46:41.949612 kernel: smp: Brought up 1 node, 4 CPUs
Oct 31 00:46:41.949620 kernel: smpboot: Max logical packages: 1
Oct 31 00:46:41.949630 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Oct 31 00:46:41.949641 kernel: devtmpfs: initialized
Oct 31 00:46:41.949651 kernel: x86/mm: Memory block size: 128MB
Oct 31 00:46:41.949663 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Oct 31 00:46:41.949679 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Oct 31 00:46:41.949690 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Oct 31 00:46:41.949706 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Oct 31 00:46:41.949716 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Oct 31 00:46:41.949727 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 31 00:46:41.949737 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Oct 31 00:46:41.949747 kernel: pinctrl core: initialized pinctrl subsystem
Oct 31 00:46:41.949758 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 31 00:46:41.949769 kernel: audit: initializing netlink subsys (disabled)
Oct 31 00:46:41.949781 kernel: audit: type=2000 audit(1761871601.244:1): state=initialized audit_enabled=0 res=1
Oct 31 00:46:41.949791 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 31 00:46:41.949805 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 31 00:46:41.949816 kernel: cpuidle: using governor menu
Oct 31 00:46:41.949838 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 31 00:46:41.949849 kernel: dca service started, version 1.12.1
Oct 31 00:46:41.949860 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Oct 31 00:46:41.949872 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Oct 31 00:46:41.949883 kernel: PCI: Using configuration type 1 for base access
Oct 31 00:46:41.949894 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 31 00:46:41.949909 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 31 00:46:41.949920 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct 31 00:46:41.949931 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 31 00:46:41.949942 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 31 00:46:41.950019 kernel: ACPI: Added _OSI(Module Device)
Oct 31 00:46:41.950031 kernel: ACPI: Added _OSI(Processor Device)
Oct 31 00:46:41.950042 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 31 00:46:41.950054 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 31 00:46:41.950064 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Oct 31 00:46:41.950078 kernel: ACPI: Interpreter enabled
Oct 31 00:46:41.950088 kernel: ACPI: PM: (supports S0 S3 S5)
Oct 31 00:46:41.950099 kernel: ACPI: Using IOAPIC for interrupt routing
Oct 31 00:46:41.950109 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 31 00:46:41.950120 kernel: PCI: Using E820 reservations for host bridge windows
Oct 31 00:46:41.950131 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Oct 31 00:46:41.950141 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 31 00:46:41.950383 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 31 00:46:41.950554 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Oct 31 00:46:41.950713 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Oct 31 00:46:41.950729 kernel: PCI host bridge to bus 0000:00
Oct 31 00:46:41.950915 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Oct 31 00:46:41.951082 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Oct 31 00:46:41.951230 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 31 00:46:41.951403 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Oct 31 00:46:41.951573 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Oct 31 00:46:41.951720 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Oct 31 00:46:41.951882 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 31 00:46:41.952096 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Oct 31 00:46:41.952289 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Oct 31 00:46:41.952451 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Oct 31 00:46:41.952608 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Oct 31 00:46:41.952768 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Oct 31 00:46:41.952936 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Oct 31 00:46:41.953117 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 31 00:46:41.953370 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Oct 31 00:46:41.953603 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Oct 31 00:46:41.953846 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Oct 31 00:46:41.954110 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Oct 31 00:46:41.954370 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Oct 31 00:46:41.954598 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Oct 31 00:46:41.954838 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Oct 31 00:46:41.955094 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Oct 31 00:46:41.955345 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Oct 31 00:46:41.955548 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Oct 31 00:46:41.955719 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Oct 31 00:46:41.955897 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Oct 31 00:46:41.956119 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Oct 31 00:46:41.956303 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Oct 31 00:46:41.956455 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Oct 31 00:46:41.956631 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Oct 31 00:46:41.956785 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Oct 31 00:46:41.956969 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Oct 31 00:46:41.957147 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Oct 31 00:46:41.957304 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Oct 31 00:46:41.957320 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 31 00:46:41.957331 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 31 00:46:41.957342 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 31 00:46:41.957353 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 31 00:46:41.957364 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Oct 31 00:46:41.957380 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Oct 31 00:46:41.957391 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Oct 31 00:46:41.957402 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Oct 31 00:46:41.957412 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Oct 31 00:46:41.957423 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Oct 31 00:46:41.957434 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Oct 31 00:46:41.957445 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Oct 31 00:46:41.957455 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Oct 31 00:46:41.957466 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Oct 31 00:46:41.957480 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Oct 31 00:46:41.957491 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Oct 31 00:46:41.957502 kernel: iommu: Default domain type: Translated
Oct 31 00:46:41.957513 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 31 00:46:41.957523 kernel: efivars: Registered efivars operations
Oct 31 00:46:41.957533 kernel: PCI: Using ACPI for IRQ routing
Oct 31 00:46:41.957543 kernel: PCI: pci_cache_line_size set to 64 bytes
Oct 31 00:46:41.957554 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Oct 31 00:46:41.957564 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Oct 31 00:46:41.957578 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Oct 31 00:46:41.957588 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Oct 31 00:46:41.957740 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Oct 31 00:46:41.957908 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Oct 31 00:46:41.958094 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 31 00:46:41.958112 kernel: vgaarb: loaded
Oct 31 00:46:41.958123 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Oct 31 00:46:41.958134 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Oct 31 00:46:41.958144 kernel: clocksource: Switched to clocksource kvm-clock
Oct 31 00:46:41.958160 kernel: VFS: Disk quotas dquot_6.6.0
Oct 31 00:46:41.958170 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 31 00:46:41.958181 kernel: pnp: PnP ACPI init
Oct 31 00:46:41.958378 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Oct 31 00:46:41.958397 kernel: pnp: PnP ACPI: found 6 devices
Oct 31 00:46:41.958409 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 31 00:46:41.958420 kernel: NET: Registered PF_INET protocol family
Oct 31 00:46:41.958430 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 31 00:46:41.958446 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Oct 31 00:46:41.958456 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 31 00:46:41.958467 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 31 00:46:41.958477 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Oct 31 00:46:41.958488 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Oct 31 00:46:41.958499 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 31 00:46:41.958510 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 31 00:46:41.958521 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 31 00:46:41.958531 kernel: NET: Registered PF_XDP protocol family
Oct 31 00:46:41.958698 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Oct 31 00:46:41.958872 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Oct 31 00:46:41.959080 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Oct 31 00:46:41.959216 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Oct 31 00:46:41.959350 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 31 00:46:41.959492 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Oct 31 00:46:41.959638 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Oct 31 00:46:41.959789 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Oct 31 00:46:41.959806 kernel: PCI: CLS 0 bytes, default 64
Oct 31 00:46:41.959817 kernel: Initialise system trusted keyrings
Oct 31 00:46:41.959839 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Oct 31 00:46:41.959850 kernel: Key type asymmetric registered
Oct 31 00:46:41.959860 kernel: Asymmetric key parser 'x509' registered
Oct 31 00:46:41.959871 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Oct 31 00:46:41.959882 kernel: io scheduler mq-deadline registered
Oct 31 00:46:41.959893 kernel: io scheduler kyber registered
Oct 31 00:46:41.959907 kernel: io scheduler bfq registered
Oct 31 00:46:41.959918 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Oct 31 00:46:41.959930 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Oct 31 00:46:41.959941 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Oct 31 00:46:41.959966 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Oct 31 00:46:41.959977 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 31 00:46:41.959988 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 31 00:46:41.959999 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 31 00:46:41.960010 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 31 00:46:41.960025 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 31 00:46:41.960201 kernel: rtc_cmos 00:04: RTC can wake from S4
Oct 31 00:46:41.960219 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Oct 31 00:46:41.960372 kernel: rtc_cmos 00:04: registered as rtc0
Oct 31 00:46:41.960522 kernel: rtc_cmos 00:04: setting system clock to 2025-10-31T00:46:41 UTC (1761871601)
Oct 31 00:46:41.960674 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Oct 31 00:46:41.960691 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Oct 31 00:46:41.960702 kernel: efifb: probing for efifb
Oct 31 00:46:41.960718 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Oct 31 00:46:41.960729 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Oct 31 00:46:41.960739 kernel: efifb: scrolling: redraw
Oct 31 00:46:41.960750 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Oct 31 00:46:41.960761 kernel: Console: switching to colour frame buffer device 100x37
Oct 31 00:46:41.960772 kernel: fb0: EFI VGA frame buffer device
Oct 31 00:46:41.960807 kernel: pstore: Using crash dump compression: deflate
Oct 31 00:46:41.960821 kernel: pstore: Registered efi_pstore as persistent store backend
Oct 31 00:46:41.960844 kernel: NET: Registered PF_INET6 protocol family
Oct 31 00:46:41.960859 kernel: Segment Routing with IPv6
Oct 31 00:46:41.960869 kernel: In-situ OAM (IOAM) with IPv6
Oct 31 00:46:41.960881 kernel: NET: Registered PF_PACKET protocol family
Oct 31 00:46:41.960892 kernel: Key type dns_resolver registered
Oct 31 00:46:41.960902 kernel: IPI shorthand broadcast: enabled
Oct 31 00:46:41.960913 kernel: sched_clock: Marking stable (950002230, 321751151)->(1368708297, -96954916)
Oct 31 00:46:41.960924 kernel: registered taskstats version 1
Oct 31 00:46:41.960936 kernel: Loading compiled-in X.509 certificates
Oct 31 00:46:41.960961 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: 3640cadef2ce00a652278ae302be325ebb54a228'
Oct 31 00:46:41.960977 kernel: Key type .fscrypt registered
Oct 31 00:46:41.960988 kernel: Key type fscrypt-provisioning registered
Oct 31 00:46:41.961001 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 31 00:46:41.961012 kernel: ima: Allocated hash algorithm: sha1
Oct 31 00:46:41.961023 kernel: ima: No architecture policies found
Oct 31 00:46:41.961034 kernel: clk: Disabling unused clocks
Oct 31 00:46:41.961045 kernel: Freeing unused kernel image (initmem) memory: 42880K
Oct 31 00:46:41.961057 kernel: Write protecting the kernel read-only data: 36864k
Oct 31 00:46:41.961068 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Oct 31 00:46:41.961082 kernel: Run /init as init process
Oct 31 00:46:41.961093 kernel: with arguments:
Oct 31 00:46:41.961104 kernel: /init
Oct 31 00:46:41.961115 kernel: with environment:
Oct 31 00:46:41.961126 kernel: HOME=/
Oct 31 00:46:41.961136 kernel: TERM=linux
Oct 31 00:46:41.961150 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 31 00:46:41.961164 systemd[1]: Detected virtualization kvm.
Oct 31 00:46:41.961180 systemd[1]: Detected architecture x86-64.
Oct 31 00:46:41.961192 systemd[1]: Running in initrd.
Oct 31 00:46:41.961206 systemd[1]: No hostname configured, using default hostname.
Oct 31 00:46:41.961218 systemd[1]: Hostname set to .
Oct 31 00:46:41.961229 systemd[1]: Initializing machine ID from VM UUID.
Oct 31 00:46:41.961244 systemd[1]: Queued start job for default target initrd.target.
Oct 31 00:46:41.961256 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 31 00:46:41.961268 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 31 00:46:41.961281 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Oct 31 00:46:41.961292 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 31 00:46:41.961304 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Oct 31 00:46:41.961316 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Oct 31 00:46:41.961333 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Oct 31 00:46:41.961345 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Oct 31 00:46:41.961357 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 31 00:46:41.961369 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 31 00:46:41.961381 systemd[1]: Reached target paths.target - Path Units.
Oct 31 00:46:41.961392 systemd[1]: Reached target slices.target - Slice Units.
Oct 31 00:46:41.961404 systemd[1]: Reached target swap.target - Swaps.
Oct 31 00:46:41.961416 systemd[1]: Reached target timers.target - Timer Units.
Oct 31 00:46:41.961430 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Oct 31 00:46:41.961442 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 31 00:46:41.961454 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 31 00:46:41.961465 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Oct 31 00:46:41.961477 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 31 00:46:41.961489 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 31 00:46:41.961500 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 31 00:46:41.961512 systemd[1]: Reached target sockets.target - Socket Units.
Oct 31 00:46:41.961527 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Oct 31 00:46:41.961538 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 31 00:46:41.961550 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Oct 31 00:46:41.961562 systemd[1]: Starting systemd-fsck-usr.service...
Oct 31 00:46:41.961574 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 31 00:46:41.961586 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 31 00:46:41.961598 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 31 00:46:41.961610 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Oct 31 00:46:41.961622 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 31 00:46:41.961661 systemd-journald[193]: Collecting audit messages is disabled.
Oct 31 00:46:41.961689 systemd[1]: Finished systemd-fsck-usr.service.
Oct 31 00:46:41.961706 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 31 00:46:41.961719 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 31 00:46:41.961731 systemd-journald[193]: Journal started
Oct 31 00:46:41.961756 systemd-journald[193]: Runtime Journal (/run/log/journal/1dcee391d1ec4566abe398ff63b194f5) is 6.0M, max 48.3M, 42.2M free.
Oct 31 00:46:41.965093 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 31 00:46:41.966388 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 31 00:46:41.972330 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 31 00:46:41.976920 systemd-modules-load[194]: Inserted module 'overlay'
Oct 31 00:46:41.977487 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 31 00:46:41.982101 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 31 00:46:41.996251 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 31 00:46:41.998322 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 31 00:46:42.002787 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 31 00:46:42.012077 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Oct 31 00:46:42.023307 dracut-cmdline[222]: dracut-dracut-053
Oct 31 00:46:42.026330 dracut-cmdline[222]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=950876ad7bc3e9634b7585a81697da4ef03ac6558969e5c002165369dd7c7885
Oct 31 00:46:42.040979 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 31 00:46:42.043654 systemd-modules-load[194]: Inserted module 'br_netfilter'
Oct 31 00:46:42.045264 kernel: Bridge firewalling registered
Oct 31 00:46:42.046810 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 31 00:46:42.056133 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 31 00:46:42.067927 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 31 00:46:42.076099 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 31 00:46:42.110000 systemd-resolved[271]: Positive Trust Anchors:
Oct 31 00:46:42.110012 systemd-resolved[271]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 31 00:46:42.110042 systemd-resolved[271]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 31 00:46:42.112502 systemd-resolved[271]: Defaulting to hostname 'linux'.
Oct 31 00:46:42.113515 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 31 00:46:42.115051 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 31 00:46:42.152991 kernel: SCSI subsystem initialized
Oct 31 00:46:42.161992 kernel: Loading iSCSI transport class v2.0-870.
Oct 31 00:46:42.172987 kernel: iscsi: registered transport (tcp)
Oct 31 00:46:42.195059 kernel: iscsi: registered transport (qla4xxx)
Oct 31 00:46:42.195109 kernel: QLogic iSCSI HBA Driver
Oct 31 00:46:42.249656 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Oct 31 00:46:42.257130 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Oct 31 00:46:42.292491 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 31 00:46:42.292530 kernel: device-mapper: uevent: version 1.0.3
Oct 31 00:46:42.294144 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Oct 31 00:46:42.333988 kernel: raid6: avx2x4 gen() 30225 MB/s
Oct 31 00:46:42.350986 kernel: raid6: avx2x2 gen() 30861 MB/s
Oct 31 00:46:42.368732 kernel: raid6: avx2x1 gen() 25768 MB/s
Oct 31 00:46:42.368762 kernel: raid6: using algorithm avx2x2 gen() 30861 MB/s
Oct 31 00:46:42.386760 kernel: raid6: .... xor() 19749 MB/s, rmw enabled
Oct 31 00:46:42.386792 kernel: raid6: using avx2x2 recovery algorithm
Oct 31 00:46:42.407985 kernel: xor: automatically using best checksumming function avx
Oct 31 00:46:42.570978 kernel: Btrfs loaded, zoned=no, fsverity=no
Oct 31 00:46:42.585851 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Oct 31 00:46:42.606313 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 31 00:46:42.620455 systemd-udevd[414]: Using default interface naming scheme 'v255'.
Oct 31 00:46:42.625456 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 31 00:46:42.643141 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Oct 31 00:46:42.660590 dracut-pre-trigger[425]: rd.md=0: removing MD RAID activation
Oct 31 00:46:42.696451 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 31 00:46:42.711131 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 31 00:46:42.817255 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 31 00:46:42.828895 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Oct 31 00:46:42.843427 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Oct 31 00:46:42.848724 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 31 00:46:42.853792 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 31 00:46:42.858013 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 31 00:46:42.865973 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Oct 31 00:46:42.871762 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Oct 31 00:46:42.872065 kernel: cryptd: max_cpu_qlen set to 1000
Oct 31 00:46:42.870224 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Oct 31 00:46:42.886637 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Oct 31 00:46:42.886678 kernel: GPT:9289727 != 19775487
Oct 31 00:46:42.886690 kernel: GPT:Alternate GPT header not at the end of the disk.
Oct 31 00:46:42.886700 kernel: GPT:9289727 != 19775487
Oct 31 00:46:42.886710 kernel: GPT: Use GNU Parted to correct GPT errors.
Oct 31 00:46:42.886720 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 31 00:46:42.886616 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Oct 31 00:46:42.895973 kernel: AVX2 version of gcm_enc/dec engaged.
Oct 31 00:46:42.896007 kernel: AES CTR mode by8 optimization enabled
Oct 31 00:46:42.896979 kernel: libata version 3.00 loaded.
Oct 31 00:46:42.911589 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 31 00:46:42.930570 kernel: ahci 0000:00:1f.2: version 3.0
Oct 31 00:46:42.930776 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Oct 31 00:46:42.930814 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Oct 31 00:46:42.936121 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Oct 31 00:46:42.936289 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (463)
Oct 31 00:46:42.936301 kernel: BTRFS: device fsid 1021cdf2-f4a0-46ed-8fe0-b31d3115a6e0 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (457)
Oct 31 00:46:42.911814 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 31 00:46:42.916068 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 31 00:46:42.943510 kernel: scsi host0: ahci
Oct 31 00:46:42.944084 kernel: scsi host1: ahci
Oct 31 00:46:42.929433 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 31 00:46:42.929968 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 31 00:46:42.949614 kernel: scsi host2: ahci
Oct 31 00:46:42.932387 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Oct 31 00:46:42.953414 kernel: scsi host3: ahci
Oct 31 00:46:42.953636 kernel: scsi host4: ahci
Oct 31 00:46:42.954341 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 31 00:46:42.967562 kernel: scsi host5: ahci
Oct 31 00:46:42.967778 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Oct 31 00:46:42.967805 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Oct 31 00:46:42.967820 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Oct 31 00:46:42.967834 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Oct 31 00:46:42.967849 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Oct 31 00:46:42.967869 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Oct 31 00:46:42.986319 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 31 00:46:42.999938 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Oct 31 00:46:43.007176 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Oct 31 00:46:43.014740 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 31 00:46:43.021667 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Oct 31 00:46:43.023869 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Oct 31 00:46:43.041165 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Oct 31 00:46:43.045145 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 31 00:46:43.067244 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 31 00:46:43.076077 disk-uuid[567]: Primary Header is updated.
Oct 31 00:46:43.076077 disk-uuid[567]: Secondary Entries is updated.
Oct 31 00:46:43.076077 disk-uuid[567]: Secondary Header is updated.
Oct 31 00:46:43.089000 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 31 00:46:43.095020 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 31 00:46:43.279990 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Oct 31 00:46:43.280066 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Oct 31 00:46:43.280992 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Oct 31 00:46:43.281976 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Oct 31 00:46:43.283997 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Oct 31 00:46:43.284994 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Oct 31 00:46:43.286801 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Oct 31 00:46:43.286825 kernel: ata3.00: applying bridge limits
Oct 31 00:46:43.288562 kernel: ata3.00: configured for UDMA/100
Oct 31 00:46:43.290965 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Oct 31 00:46:43.335716 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Oct 31 00:46:43.336113 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct 31 00:46:43.354071 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Oct 31 00:46:44.100790 disk-uuid[578]: The operation has completed successfully.
Oct 31 00:46:44.103365 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 31 00:46:44.139116 systemd[1]: disk-uuid.service: Deactivated successfully.
Oct 31 00:46:44.139306 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Oct 31 00:46:44.165166 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Oct 31 00:46:44.170286 sh[593]: Success
Oct 31 00:46:44.184990 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Oct 31 00:46:44.223858 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Oct 31 00:46:44.237485 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Oct 31 00:46:44.240623 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Oct 31 00:46:44.268604 kernel: BTRFS info (device dm-0): first mount of filesystem 1021cdf2-f4a0-46ed-8fe0-b31d3115a6e0
Oct 31 00:46:44.268633 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Oct 31 00:46:44.268645 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Oct 31 00:46:44.270278 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Oct 31 00:46:44.271492 kernel: BTRFS info (device dm-0): using free space tree
Oct 31 00:46:44.276993 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Oct 31 00:46:44.277653 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Oct 31 00:46:44.291115 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Oct 31 00:46:44.294546 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Oct 31 00:46:44.308540 kernel: BTRFS info (device vda6): first mount of filesystem 1a1fe00d-a5e5-45c6-a30a-fcc91f19f9ea
Oct 31 00:46:44.308583 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 31 00:46:44.308601 kernel: BTRFS info (device vda6): using free space tree
Oct 31 00:46:44.312983 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 31 00:46:44.323388 systemd[1]: mnt-oem.mount: Deactivated successfully.
Oct 31 00:46:44.326531 kernel: BTRFS info (device vda6): last unmount of filesystem 1a1fe00d-a5e5-45c6-a30a-fcc91f19f9ea
Oct 31 00:46:44.423447 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 31 00:46:44.479190 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 31 00:46:44.511590 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Oct 31 00:46:44.523794 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Oct 31 00:46:44.528681 systemd-networkd[771]: lo: Link UP
Oct 31 00:46:44.528685 systemd-networkd[771]: lo: Gained carrier
Oct 31 00:46:44.530398 systemd-networkd[771]: Enumeration completed
Oct 31 00:46:44.530796 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 31 00:46:44.530800 systemd-networkd[771]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 31 00:46:44.530884 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 31 00:46:44.531798 systemd-networkd[771]: eth0: Link UP
Oct 31 00:46:44.531803 systemd-networkd[771]: eth0: Gained carrier
Oct 31 00:46:44.531814 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 31 00:46:44.534272 systemd[1]: Reached target network.target - Network.
Oct 31 00:46:44.595029 systemd-networkd[771]: eth0: DHCPv4 address 10.0.0.137/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 31 00:46:44.628036 ignition[774]: Ignition 2.19.0
Oct 31 00:46:44.628049 ignition[774]: Stage: fetch-offline
Oct 31 00:46:44.628088 ignition[774]: no configs at "/usr/lib/ignition/base.d"
Oct 31 00:46:44.628099 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 31 00:46:44.628194 ignition[774]: parsed url from cmdline: ""
Oct 31 00:46:44.628198 ignition[774]: no config URL provided
Oct 31 00:46:44.628203 ignition[774]: reading system config file "/usr/lib/ignition/user.ign"
Oct 31 00:46:44.628212 ignition[774]: no config at "/usr/lib/ignition/user.ign"
Oct 31 00:46:44.628244 ignition[774]: op(1): [started] loading QEMU firmware config module
Oct 31 00:46:44.628249 ignition[774]: op(1): executing: "modprobe" "qemu_fw_cfg"
Oct 31 00:46:44.693713 ignition[774]: op(1): [finished] loading QEMU firmware config module
Oct 31 00:46:44.816632 ignition[774]: parsing config with SHA512: 1613ab18bc337d39c99ccfc7315e75ae12dd2e202d558fb0fbae24674157f73ecba86a98f6d17f9bdc801231d508814cea4a65ce50fe6f53b228b86c22abc989
Oct 31 00:46:44.823891 unknown[774]: fetched base config from "system"
Oct 31 00:46:44.823916 unknown[774]: fetched user config from "qemu"
Oct 31 00:46:44.824466 ignition[774]: fetch-offline: fetch-offline passed
Oct 31 00:46:44.827474 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 31 00:46:44.824561 ignition[774]: Ignition finished successfully
Oct 31 00:46:44.830980 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Oct 31 00:46:44.841087 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Oct 31 00:46:44.860058 ignition[786]: Ignition 2.19.0
Oct 31 00:46:44.860070 ignition[786]: Stage: kargs
Oct 31 00:46:44.860247 ignition[786]: no configs at "/usr/lib/ignition/base.d"
Oct 31 00:46:44.860258 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 31 00:46:44.865873 ignition[786]: kargs: kargs passed
Oct 31 00:46:44.865925 ignition[786]: Ignition finished successfully
Oct 31 00:46:44.871174 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Oct 31 00:46:44.885105 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Oct 31 00:46:44.896988 ignition[794]: Ignition 2.19.0
Oct 31 00:46:44.896996 ignition[794]: Stage: disks
Oct 31 00:46:44.897148 ignition[794]: no configs at "/usr/lib/ignition/base.d"
Oct 31 00:46:44.897159 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 31 00:46:44.897931 ignition[794]: disks: disks passed
Oct 31 00:46:44.897989 ignition[794]: Ignition finished successfully
Oct 31 00:46:44.922020 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Oct 31 00:46:44.974222 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Oct 31 00:46:44.977945 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Oct 31 00:46:44.982080 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 31 00:46:44.985495 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 31 00:46:44.988916 systemd[1]: Reached target basic.target - Basic System.
Oct 31 00:46:45.004174 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Oct 31 00:46:45.019050 systemd-fsck[804]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Oct 31 00:46:45.228827 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Oct 31 00:46:45.239113 systemd[1]: Mounting sysroot.mount - /sysroot...
Oct 31 00:46:45.349998 kernel: EXT4-fs (vda9): mounted filesystem 044ea9d4-3e15-48f6-be3f-240ec74f6b62 r/w with ordered data mode. Quota mode: none.
Oct 31 00:46:45.350861 systemd[1]: Mounted sysroot.mount - /sysroot.
Oct 31 00:46:45.351847 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Oct 31 00:46:45.363128 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 31 00:46:45.366489 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Oct 31 00:46:45.370744 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Oct 31 00:46:45.383116 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (812)
Oct 31 00:46:45.383147 kernel: BTRFS info (device vda6): first mount of filesystem 1a1fe00d-a5e5-45c6-a30a-fcc91f19f9ea
Oct 31 00:46:45.383174 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 31 00:46:45.383187 kernel: BTRFS info (device vda6): using free space tree
Oct 31 00:46:45.370812 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Oct 31 00:46:45.388228 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 31 00:46:45.370884 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 31 00:46:45.389979 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 31 00:46:45.411999 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Oct 31 00:46:45.421289 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Oct 31 00:46:45.458326 initrd-setup-root[836]: cut: /sysroot/etc/passwd: No such file or directory
Oct 31 00:46:45.464463 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory
Oct 31 00:46:45.470105 initrd-setup-root[850]: cut: /sysroot/etc/shadow: No such file or directory
Oct 31 00:46:45.474811 initrd-setup-root[857]: cut: /sysroot/etc/gshadow: No such file or directory
Oct 31 00:46:45.560596 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Oct 31 00:46:45.588151 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Oct 31 00:46:45.592258 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Oct 31 00:46:45.596026 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Oct 31 00:46:45.597002 kernel: BTRFS info (device vda6): last unmount of filesystem 1a1fe00d-a5e5-45c6-a30a-fcc91f19f9ea
Oct 31 00:46:45.625657 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Oct 31 00:46:45.645299 ignition[926]: INFO : Ignition 2.19.0
Oct 31 00:46:45.645299 ignition[926]: INFO : Stage: mount
Oct 31 00:46:45.648544 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 31 00:46:45.648544 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 31 00:46:45.648544 ignition[926]: INFO : mount: mount passed
Oct 31 00:46:45.648544 ignition[926]: INFO : Ignition finished successfully
Oct 31 00:46:45.658020 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Oct 31 00:46:45.666336 systemd[1]: Starting ignition-files.service - Ignition (files)...
Oct 31 00:46:46.365305 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 31 00:46:46.374976 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (937)
Oct 31 00:46:46.378278 kernel: BTRFS info (device vda6): first mount of filesystem 1a1fe00d-a5e5-45c6-a30a-fcc91f19f9ea
Oct 31 00:46:46.378361 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 31 00:46:46.378376 kernel: BTRFS info (device vda6): using free space tree
Oct 31 00:46:46.382985 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 31 00:46:46.386218 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 31 00:46:46.449407 ignition[955]: INFO : Ignition 2.19.0
Oct 31 00:46:46.449407 ignition[955]: INFO : Stage: files
Oct 31 00:46:46.501125 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 31 00:46:46.501125 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 31 00:46:46.501125 ignition[955]: DEBUG : files: compiled without relabeling support, skipping
Oct 31 00:46:46.501125 ignition[955]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 31 00:46:46.501125 ignition[955]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 31 00:46:46.564535 ignition[955]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 31 00:46:46.564535 ignition[955]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 31 00:46:46.564535 ignition[955]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 31 00:46:46.564535 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Oct 31 00:46:46.564535 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Oct 31 00:46:46.502734 unknown[955]: wrote ssh authorized keys file for user: core
Oct 31 00:46:46.559172 systemd-networkd[771]: eth0: Gained IPv6LL
Oct 31 00:46:46.601053 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Oct 31 00:46:46.755085 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Oct 31 00:46:46.755085 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Oct 31 00:46:46.761426 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Oct 31 00:46:46.761426 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 31 00:46:46.761426 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 31 00:46:46.761426 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 31 00:46:46.761426 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 31 00:46:46.761426 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 31 00:46:46.761426 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 31 00:46:46.761426 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 31 00:46:46.761426 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 31 00:46:46.761426 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Oct 31 00:46:46.761426 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Oct 31 00:46:46.761426 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Oct 31 00:46:46.761426 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Oct 31 00:46:47.245228 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Oct 31 00:46:48.204337 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Oct 31 00:46:48.204337 ignition[955]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Oct 31 00:46:48.212427 ignition[955]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 31 00:46:48.212427 ignition[955]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 31 00:46:48.212427 ignition[955]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Oct 31 00:46:48.212427 ignition[955]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Oct 31 00:46:48.212427 ignition[955]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 31 00:46:48.212427 ignition[955]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 31 00:46:48.212427 ignition[955]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Oct 31 00:46:48.212427 ignition[955]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Oct 31 00:46:48.239134 ignition[955]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Oct 31 00:46:48.239134 ignition[955]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Oct 31 00:46:48.239134 ignition[955]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Oct 31 00:46:48.239134 ignition[955]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Oct 31 00:46:48.239134 ignition[955]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Oct 31 00:46:48.239134 ignition[955]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 31 00:46:48.239134 ignition[955]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 31 00:46:48.239134 ignition[955]: INFO : files: files passed
Oct 31 00:46:48.239134 ignition[955]: INFO : Ignition finished successfully
Oct 31 00:46:48.239641 systemd[1]: Finished ignition-files.service - Ignition (files).
Oct 31 00:46:48.272146 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Oct 31 00:46:48.361185 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Oct 31 00:46:48.364590 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 31 00:46:48.375046 initrd-setup-root-after-ignition[982]: grep: /sysroot/oem/oem-release: No such file or directory
Oct 31 00:46:48.364755 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 31 00:46:48.384016 initrd-setup-root-after-ignition[984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 31 00:46:48.384016 initrd-setup-root-after-ignition[984]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 31 00:46:48.377456 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 31 00:46:48.392038 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 31 00:46:48.380831 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 31 00:46:48.403130 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 31 00:46:48.494376 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 31 00:46:48.494520 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 31 00:46:48.501056 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 31 00:46:48.504862 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 31 00:46:48.521073 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 31 00:46:48.536224 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 31 00:46:48.550423 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 31 00:46:48.551812 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 31 00:46:48.570701 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 31 00:46:48.575414 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 31 00:46:48.580091 systemd[1]: Stopped target timers.target - Timer Units.
Oct 31 00:46:48.583744 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 31 00:46:48.585731 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 31 00:46:48.590680 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 31 00:46:48.594887 systemd[1]: Stopped target basic.target - Basic System.
Oct 31 00:46:48.598478 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 31 00:46:48.602971 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 31 00:46:48.606814 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Oct 31 00:46:48.610775 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
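Each createFiles op in the files stage above amounts to the same loop: fetch a source (with numbered attempts, as in "GET ...: attempt #1"), then write the result under the /sysroot prefix. A rough stdlib Python equivalent, assuming plain HTTP fetching and ignoring Ignition's hash verification, mode, and ownership handling:

# Sketch of one files-stage entry: download with retries, write under /sysroot.
import pathlib
import time
import urllib.request

def create_file(url, dest, root="/sysroot", attempts=3):
    for attempt in range(1, attempts + 1):
        print(f"GET {url}: attempt #{attempt}")
        try:
            with urllib.request.urlopen(url, timeout=30) as resp:
                data = resp.read()
            break
        except OSError:
            time.sleep(2 ** attempt)  # back off between attempts
    else:
        raise RuntimeError(f"all {attempts} attempts failed for {url}")
    target = pathlib.Path(root + dest)
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_bytes(data)
    print(f'[finished] writing file "{target}"')

# e.g. the helm tarball written above:
# create_file("https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz",
#             "/opt/helm-v3.17.3-linux-amd64.tar.gz")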
Oct 31 00:46:48.614721 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 31 00:46:48.619062 systemd[1]: Stopped target sysinit.target - System Initialization.
Oct 31 00:46:48.622892 systemd[1]: Stopped target local-fs.target - Local File Systems.
Oct 31 00:46:48.626595 systemd[1]: Stopped target swap.target - Swaps.
Oct 31 00:46:48.629581 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 31 00:46:48.631401 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Oct 31 00:46:48.636540 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Oct 31 00:46:48.640473 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 31 00:46:48.644668 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Oct 31 00:46:48.646317 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 31 00:46:48.650904 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 31 00:46:48.652631 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 31 00:46:48.656649 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 31 00:46:48.658489 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 31 00:46:48.662393 systemd[1]: Stopped target paths.target - Path Units.
Oct 31 00:46:48.665435 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 31 00:46:48.667235 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 31 00:46:48.671649 systemd[1]: Stopped target slices.target - Slice Units.
Oct 31 00:46:48.674584 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 31 00:46:48.677683 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 31 00:46:48.679099 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 31 00:46:48.682306 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 31 00:46:48.683719 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 31 00:46:48.687082 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 31 00:46:48.688980 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 31 00:46:48.693114 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 31 00:46:48.694661 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 31 00:46:48.709166 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 31 00:46:48.713201 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 31 00:46:48.716183 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 31 00:46:48.718042 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 31 00:46:48.722055 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 31 00:46:48.724014 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 31 00:46:48.732743 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 31 00:46:48.734695 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 31 00:46:48.747653 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 31 00:46:48.791549 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 31 00:46:48.791691 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 31 00:46:48.796982 ignition[1008]: INFO : Ignition 2.19.0
Oct 31 00:46:48.796982 ignition[1008]: INFO : Stage: umount
Oct 31 00:46:48.796982 ignition[1008]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 31 00:46:48.796982 ignition[1008]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 31 00:46:48.796982 ignition[1008]: INFO : umount: umount passed
Oct 31 00:46:48.796982 ignition[1008]: INFO : Ignition finished successfully
Oct 31 00:46:48.797166 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 31 00:46:48.797327 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 31 00:46:48.800365 systemd[1]: Stopped target network.target - Network.
Oct 31 00:46:48.821592 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 31 00:46:48.821709 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 31 00:46:48.825165 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 31 00:46:48.825221 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 31 00:46:48.828815 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 31 00:46:48.828866 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 31 00:46:48.831874 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 31 00:46:48.831927 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 31 00:46:48.835150 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 31 00:46:48.835206 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 31 00:46:48.838813 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 31 00:46:48.842287 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 31 00:46:48.849060 systemd-networkd[771]: eth0: DHCPv6 lease lost
Oct 31 00:46:48.852752 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 31 00:46:48.852938 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 31 00:46:48.856676 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 31 00:46:48.856807 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 31 00:46:48.862275 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 31 00:46:48.862350 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 31 00:46:48.878216 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 31 00:46:48.880859 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 31 00:46:48.880972 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 31 00:46:48.886381 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 31 00:46:48.886478 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 31 00:46:48.890602 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 31 00:46:48.890674 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 31 00:46:48.892927 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 31 00:46:48.893004 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 31 00:46:48.897417 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 31 00:46:48.915007 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 31 00:46:48.915182 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 31 00:46:48.918918 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 31 00:46:48.919177 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 31 00:46:48.924566 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 31 00:46:48.924655 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 31 00:46:48.927065 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 31 00:46:48.927118 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 31 00:46:48.931141 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 31 00:46:48.931215 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 31 00:46:48.935732 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 31 00:46:48.935799 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 31 00:46:48.940212 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 31 00:46:48.940280 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 31 00:46:48.954275 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 31 00:46:48.956615 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 31 00:46:48.956731 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 31 00:46:48.961100 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Oct 31 00:46:48.961175 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 31 00:46:48.965523 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 31 00:46:48.965594 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 31 00:46:48.968014 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 31 00:46:48.968073 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 31 00:46:48.972598 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 31 00:46:48.972747 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 31 00:46:48.976972 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 31 00:46:48.990264 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 31 00:46:49.000836 systemd[1]: Switching root.
Oct 31 00:46:49.038597 systemd-journald[193]: Journal stopped
Oct 31 00:46:50.531772 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Oct 31 00:46:50.531848 kernel: SELinux: policy capability network_peer_controls=1
Oct 31 00:46:50.531883 kernel: SELinux: policy capability open_perms=1
Oct 31 00:46:50.531897 kernel: SELinux: policy capability extended_socket_class=1
Oct 31 00:46:50.531921 kernel: SELinux: policy capability always_check_network=0
Oct 31 00:46:50.531933 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 31 00:46:50.531946 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 31 00:46:50.531984 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 31 00:46:50.531999 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 31 00:46:50.532015 kernel: audit: type=1403 audit(1761871609.568:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 31 00:46:50.532032 systemd[1]: Successfully loaded SELinux policy in 52.809ms.
Oct 31 00:46:50.532056 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 15.143ms.
Oct 31 00:46:50.532069 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 31 00:46:50.532081 systemd[1]: Detected virtualization kvm.
Oct 31 00:46:50.532093 systemd[1]: Detected architecture x86-64.
Oct 31 00:46:50.532105 systemd[1]: Detected first boot.
Oct 31 00:46:50.532116 systemd[1]: Initializing machine ID from VM UUID.
Oct 31 00:46:50.532128 zram_generator::config[1054]: No configuration found.
Oct 31 00:46:50.532144 systemd[1]: Populated /etc with preset unit settings.
Oct 31 00:46:50.532155 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 31 00:46:50.532167 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Oct 31 00:46:50.532182 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 31 00:46:50.532207 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 31 00:46:50.532223 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 31 00:46:50.532238 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 31 00:46:50.532253 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 31 00:46:50.532274 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 31 00:46:50.532297 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 31 00:46:50.532313 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 31 00:46:50.532329 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 31 00:46:50.532342 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 31 00:46:50.532354 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 31 00:46:50.532366 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 31 00:46:50.532379 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 31 00:46:50.532394 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 31 00:46:50.532406 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 31 00:46:50.532419 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Oct 31 00:46:50.532431 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 31 00:46:50.532442 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Oct 31 00:46:50.532454 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Oct 31 00:46:50.532466 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Oct 31 00:46:50.532478 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 31 00:46:50.532493 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 31 00:46:50.532505 systemd[1]: Reached target remote-fs.target - Remote File Systems.
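"Initializing machine ID from VM UUID" above is the first-boot path where systemd derives /etc/machine-id from the hypervisor-provided UUID instead of generating a random one. A sketch of the idea, assuming the UUID is read from DMI at /sys/class/dmi/id/product_uuid (the exact source systemd consults can vary by platform):

# Sketch: derive a machine-id-style string from the VM's DMI product UUID.
import pathlib
import uuid

def machine_id_from_vm_uuid():
    raw = pathlib.Path("/sys/class/dmi/id/product_uuid").read_text().strip()
    return uuid.UUID(raw).hex  # 32 lowercase hex chars, /etc/machine-id format

if __name__ == "__main__":
    print(machine_id_from_vm_uuid())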
Oct 31 00:46:50.532517 systemd[1]: Reached target slices.target - Slice Units.
Oct 31 00:46:50.532528 systemd[1]: Reached target swap.target - Swaps.
Oct 31 00:46:50.532540 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 31 00:46:50.532552 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 31 00:46:50.532565 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 31 00:46:50.532577 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 31 00:46:50.532652 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 31 00:46:50.532675 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 31 00:46:50.532687 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 31 00:46:50.532699 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 31 00:46:50.532711 systemd[1]: Mounting media.mount - External Media Directory...
Oct 31 00:46:50.532723 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 31 00:46:50.532735 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 31 00:46:50.532747 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 31 00:46:50.532759 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 31 00:46:50.532771 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 31 00:46:50.532786 systemd[1]: Reached target machines.target - Containers.
Oct 31 00:46:50.532806 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 31 00:46:50.532818 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 31 00:46:50.532830 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 31 00:46:50.532843 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 31 00:46:50.532855 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 31 00:46:50.532866 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 31 00:46:50.532878 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 31 00:46:50.532893 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 31 00:46:50.532906 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 31 00:46:50.532918 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 31 00:46:50.532930 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 31 00:46:50.532942 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Oct 31 00:46:50.532973 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Oct 31 00:46:50.532986 systemd[1]: Stopped systemd-fsck-usr.service.
Oct 31 00:46:50.532998 kernel: loop: module loaded
Oct 31 00:46:50.533010 kernel: fuse: init (API version 7.39)
Oct 31 00:46:50.533025 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 31 00:46:50.533037 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 31 00:46:50.533050 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 31 00:46:50.533062 kernel: ACPI: bus type drm_connector registered
Oct 31 00:46:50.533074 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 31 00:46:50.533086 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 31 00:46:50.533099 systemd[1]: verity-setup.service: Deactivated successfully.
Oct 31 00:46:50.533111 systemd[1]: Stopped verity-setup.service.
Oct 31 00:46:50.533143 systemd-journald[1135]: Collecting audit messages is disabled.
Oct 31 00:46:50.533167 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 31 00:46:50.533180 systemd-journald[1135]: Journal started
Oct 31 00:46:50.533203 systemd-journald[1135]: Runtime Journal (/run/log/journal/1dcee391d1ec4566abe398ff63b194f5) is 6.0M, max 48.3M, 42.2M free.
Oct 31 00:46:50.187389 systemd[1]: Queued start job for default target multi-user.target.
Oct 31 00:46:50.205332 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Oct 31 00:46:50.205925 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 31 00:46:50.206424 systemd[1]: systemd-journald.service: Consumed 1.282s CPU time.
Oct 31 00:46:50.537987 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 31 00:46:50.540289 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 31 00:46:50.542176 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 31 00:46:50.544104 systemd[1]: Mounted media.mount - External Media Directory.
Oct 31 00:46:50.545830 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 31 00:46:50.547705 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 31 00:46:50.549582 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 31 00:46:50.551434 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 31 00:46:50.553604 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 31 00:46:50.555912 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 31 00:46:50.556160 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 31 00:46:50.558371 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 31 00:46:50.558540 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 31 00:46:50.560678 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 31 00:46:50.560854 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 31 00:46:50.562851 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 31 00:46:50.563039 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 31 00:46:50.565458 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 31 00:46:50.565653 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 31 00:46:50.567754 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 31 00:46:50.567921 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 31 00:46:50.570460 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 31 00:46:50.572655 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 31 00:46:50.574938 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 31 00:46:50.593576 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 31 00:46:50.619075 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Oct 31 00:46:50.622311 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Oct 31 00:46:50.624138 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 31 00:46:50.624170 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 31 00:46:50.626927 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Oct 31 00:46:50.630109 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Oct 31 00:46:50.633196 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Oct 31 00:46:50.635036 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 31 00:46:50.636838 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 31 00:46:50.639733 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 31 00:46:50.641677 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 31 00:46:50.644108 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 31 00:46:50.646041 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 31 00:46:50.647290 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 31 00:46:50.650459 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 31 00:46:50.654285 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 31 00:46:50.658184 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 31 00:46:50.663407 systemd-journald[1135]: Time spent on flushing to /var/log/journal/1dcee391d1ec4566abe398ff63b194f5 is 30.379ms for 992 entries.
Oct 31 00:46:50.663407 systemd-journald[1135]: System Journal (/var/log/journal/1dcee391d1ec4566abe398ff63b194f5) is 8.0M, max 195.6M, 187.6M free.
Oct 31 00:46:50.759473 systemd-journald[1135]: Received client request to flush runtime journal.
Oct 31 00:46:50.759517 kernel: loop0: detected capacity change from 0 to 229808
Oct 31 00:46:50.661126 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Oct 31 00:46:50.665861 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Oct 31 00:46:50.668798 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Oct 31 00:46:50.681092 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Oct 31 00:46:50.684086 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 31 00:46:50.687110 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Oct 31 00:46:50.708248 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Oct 31 00:46:50.711350 systemd-tmpfiles[1169]: ACLs are not supported, ignoring.
Oct 31 00:46:50.711365 systemd-tmpfiles[1169]: ACLs are not supported, ignoring.
Oct 31 00:46:50.759966 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 31 00:46:50.763828 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Oct 31 00:46:50.767225 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 31 00:46:50.780875 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Oct 31 00:46:50.786077 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Oct 31 00:46:50.785247 udevadm[1174]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Oct 31 00:46:50.789451 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 31 00:46:50.790109 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Oct 31 00:46:50.825891 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Oct 31 00:46:50.834170 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 31 00:46:50.837169 kernel: loop1: detected capacity change from 0 to 140768
Oct 31 00:46:50.855342 systemd-tmpfiles[1190]: ACLs are not supported, ignoring.
Oct 31 00:46:50.855743 systemd-tmpfiles[1190]: ACLs are not supported, ignoring.
Oct 31 00:46:50.862185 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 31 00:46:50.934010 kernel: loop2: detected capacity change from 0 to 142488
Oct 31 00:46:50.994978 kernel: loop3: detected capacity change from 0 to 229808
Oct 31 00:46:51.012974 kernel: loop4: detected capacity change from 0 to 140768
Oct 31 00:46:51.024990 kernel: loop5: detected capacity change from 0 to 142488
Oct 31 00:46:51.034090 (sd-merge)[1195]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Oct 31 00:46:51.035687 (sd-merge)[1195]: Merged extensions into '/usr'.
Oct 31 00:46:51.063648 systemd[1]: Reloading requested from client PID 1168 ('systemd-sysext') (unit systemd-sysext.service)...
Oct 31 00:46:51.063674 systemd[1]: Reloading...
Oct 31 00:46:51.171994 zram_generator::config[1221]: No configuration found.
Oct 31 00:46:51.241435 ldconfig[1163]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 31 00:46:51.302361 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 31 00:46:51.381836 systemd[1]: Reloading finished in 317 ms.
Oct 31 00:46:51.418105 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Oct 31 00:46:51.420643 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Oct 31 00:46:51.436210 systemd[1]: Starting ensure-sysext.service...
Oct 31 00:46:51.438970 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 31 00:46:51.448653 systemd[1]: Reloading requested from client PID 1258 ('systemctl') (unit ensure-sysext.service)...
Oct 31 00:46:51.448667 systemd[1]: Reloading...
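The (sd-merge) lines above are systemd-sysext discovering the containerd-flatcar, docker-flatcar, and kubernetes extension images and overlaying them onto /usr, which is why PID 1 then reloads its units. A sketch of just the discovery half, assuming the standard sysext search directories; the actual merge builds an overlayfs mount and is not reproduced here:

# Sketch: enumerate system extension images the way sysext discovery works.
import pathlib

SYSEXT_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

def discover_extensions():
    found = []
    for d in map(pathlib.Path, SYSEXT_DIRS):
        if d.is_dir():
            # extensions may be raw disk images or plain directory trees
            found += sorted(d.glob("*.raw"))
            found += sorted(p for p in d.iterdir() if p.is_dir())
    return found

for ext in discover_extensions():
    print("Using extension:", ext.name)

Note that this matches the files stage earlier, which linked /etc/extensions/kubernetes.raw to the downloaded image.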
Oct 31 00:46:51.467835 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 31 00:46:51.468303 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Oct 31 00:46:51.469410 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 31 00:46:51.469756 systemd-tmpfiles[1259]: ACLs are not supported, ignoring.
Oct 31 00:46:51.469848 systemd-tmpfiles[1259]: ACLs are not supported, ignoring.
Oct 31 00:46:51.473490 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot.
Oct 31 00:46:51.473504 systemd-tmpfiles[1259]: Skipping /boot
Oct 31 00:46:51.486842 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot.
Oct 31 00:46:51.486859 systemd-tmpfiles[1259]: Skipping /boot
Oct 31 00:46:51.561065 zram_generator::config[1301]: No configuration found.
Oct 31 00:46:51.672492 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 31 00:46:51.725382 systemd[1]: Reloading finished in 276 ms.
Oct 31 00:46:51.745456 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Oct 31 00:46:51.759661 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 31 00:46:51.769526 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Oct 31 00:46:51.774171 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Oct 31 00:46:51.778172 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Oct 31 00:46:51.784353 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 31 00:46:51.791036 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 31 00:46:51.795340 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Oct 31 00:46:51.799937 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 31 00:46:51.800254 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 31 00:46:51.805307 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 31 00:46:51.810487 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 31 00:46:51.816324 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 31 00:46:51.820312 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 31 00:46:51.823299 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Oct 31 00:46:51.825607 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 31 00:46:51.828418 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Oct 31 00:46:51.832189 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 31 00:46:51.832465 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
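The "Duplicate line for path ..., ignoring" warnings above come from systemd-tmpfiles noticing that two tmpfiles.d entries claim the same path, in which case the later one is dropped. A simplified Python check in the same spirit (real tmpfiles semantics are subtler, e.g. only certain line types actually conflict, so treat this as an approximation):

# Sketch: flag tmpfiles.d entries whose target path was already claimed.
import pathlib

def find_duplicate_paths(tmpfiles_dir="/usr/lib/tmpfiles.d"):
    seen = {}
    for conf in sorted(pathlib.Path(tmpfiles_dir).glob("*.conf")):
        for lineno, line in enumerate(conf.read_text().splitlines(), start=1):
            if not line.strip() or line.lstrip().startswith("#"):
                continue
            fields = line.split()
            if len(fields) < 2:
                continue
            path = fields[1]  # column 2 of a tmpfiles.d line is the path
            if path in seen:
                print(f'{conf.name}:{lineno}: Duplicate line for path "{path}", ignoring.')
            else:
                seen[path] = (conf.name, lineno)

if __name__ == "__main__":
    find_duplicate_paths()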
Oct 31 00:46:51.837102 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 31 00:46:51.837543 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 31 00:46:51.838638 systemd-udevd[1336]: Using default interface naming scheme 'v255'.
Oct 31 00:46:51.841674 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 31 00:46:51.841877 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 31 00:46:51.845473 augenrules[1350]: No rules
Oct 31 00:46:51.848381 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Oct 31 00:46:51.858331 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Oct 31 00:46:51.864512 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 31 00:46:51.876789 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 31 00:46:51.877582 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 31 00:46:51.885213 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 31 00:46:51.890725 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 31 00:46:51.895515 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 31 00:46:51.900514 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 31 00:46:51.902665 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 31 00:46:51.905200 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 31 00:46:51.909169 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Oct 31 00:46:51.912199 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 31 00:46:51.912708 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Oct 31 00:46:51.917732 systemd[1]: Finished ensure-sysext.service.
Oct 31 00:46:51.920094 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Oct 31 00:46:51.923226 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 31 00:46:51.923459 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 31 00:46:51.926770 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 31 00:46:51.927562 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 31 00:46:51.931699 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 31 00:46:51.932026 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 31 00:46:51.935602 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 31 00:46:51.936026 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 31 00:46:51.957665 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Oct 31 00:46:51.978690 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 31 00:46:51.978761 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 31 00:46:51.982165 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Oct 31 00:46:51.984205 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 31 00:46:51.994225 systemd-resolved[1332]: Positive Trust Anchors:
Oct 31 00:46:51.994421 systemd-resolved[1332]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 31 00:46:51.994453 systemd-resolved[1332]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 31 00:46:51.998789 systemd-resolved[1332]: Defaulting to hostname 'linux'.
Oct 31 00:46:52.000501 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 31 00:46:52.002977 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Oct 31 00:46:52.007820 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 31 00:46:52.017000 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1367)
Oct 31 00:46:52.039998 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Oct 31 00:46:52.058010 kernel: ACPI: button: Power Button [PWRF]
Oct 31 00:46:52.130345 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Oct 31 00:46:52.132242 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Oct 31 00:46:52.134405 systemd[1]: Reached target time-set.target - System Time Set.
Oct 31 00:46:52.140854 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Oct 31 00:46:52.145349 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Oct 31 00:46:52.145528 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Oct 31 00:46:52.145732 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Oct 31 00:46:52.154889 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 31 00:46:52.162898 systemd-networkd[1388]: lo: Link UP
Oct 31 00:46:52.162912 systemd-networkd[1388]: lo: Gained carrier
Oct 31 00:46:52.164460 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Oct 31 00:46:52.168330 systemd-networkd[1388]: Enumeration completed
Oct 31 00:46:52.169425 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 31 00:46:52.171793 systemd-networkd[1388]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 31 00:46:52.171807 systemd-networkd[1388]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 31 00:46:52.173305 systemd[1]: Reached target network.target - Network.
Oct 31 00:46:52.176417 systemd-networkd[1388]: eth0: Link UP
Oct 31 00:46:52.177838 systemd-networkd[1388]: eth0: Gained carrier
Oct 31 00:46:52.177968 systemd-networkd[1388]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 31 00:46:52.184160 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Oct 31 00:46:52.194083 systemd-networkd[1388]: eth0: DHCPv4 address 10.0.0.137/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 31 00:46:52.195083 systemd-timesyncd[1403]: Network configuration changed, trying to establish connection.
Oct 31 00:46:53.129257 systemd-timesyncd[1403]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Oct 31 00:46:53.129308 systemd-timesyncd[1403]: Initial clock synchronization to Fri 2025-10-31 00:46:53.129110 UTC.
Oct 31 00:46:53.129562 systemd-resolved[1332]: Clock change detected. Flushing caches.
Oct 31 00:46:53.143821 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 31 00:46:53.146468 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Oct 31 00:46:53.152680 kernel: mousedev: PS/2 mouse device common for all mice
Oct 31 00:46:53.205198 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 31 00:46:53.253917 kernel: kvm_amd: TSC scaling supported
Oct 31 00:46:53.254026 kernel: kvm_amd: Nested Virtualization enabled
Oct 31 00:46:53.254041 kernel: kvm_amd: Nested Paging enabled
Oct 31 00:46:53.254688 kernel: kvm_amd: LBR virtualization supported
Oct 31 00:46:53.255578 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Oct 31 00:46:53.256512 kernel: kvm_amd: Virtual GIF supported
Oct 31 00:46:53.275442 kernel: EDAC MC: Ver: 3.0.0
Oct 31 00:46:53.310930 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Oct 31 00:46:53.327720 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Oct 31 00:46:53.339548 lvm[1425]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 31 00:46:53.375189 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Oct 31 00:46:53.378459 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 31 00:46:53.380584 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 31 00:46:53.382617 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Oct 31 00:46:53.384937 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Oct 31 00:46:53.387458 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Oct 31 00:46:53.389358 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Oct 31 00:46:53.391544 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Oct 31 00:46:53.393602 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Oct 31 00:46:53.393650 systemd[1]: Reached target paths.target - Path Units.
Oct 31 00:46:53.395201 systemd[1]: Reached target timers.target - Timer Units.
Oct 31 00:46:53.397863 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
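The systemd-timesyncd lines above show an SNTP exchange with 10.0.0.1:123 followed by a step of the clock, which is why systemd-resolved flushes its caches and why the journal timestamps jump from 00:46:52 to 00:46:53 mid-stream. For comparison, a minimal stdlib SNTP query, assuming a plain NTPv3 client request is acceptable to the server:

# Sketch: minimal SNTP client query, returning Unix-epoch seconds.
import socket
import struct

NTP_DELTA = 2208988800  # seconds between the 1900 NTP epoch and the Unix epoch

def sntp_time(server, port=123, timeout=5.0):
    packet = b"\x1b" + 47 * b"\x00"  # LI=0, VN=3, Mode=3 (client)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(packet, (server, port))
        data, _ = s.recvfrom(48)
    seconds = struct.unpack("!I", data[40:44])[0]  # transmit timestamp, integer part
    return seconds - NTP_DELTA

# e.g. sntp_time("10.0.0.1") against the gateway time server seen in this log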
Oct 31 00:46:53.401945 systemd[1]: Starting docker.socket - Docker Socket for the API...
Oct 31 00:46:53.415725 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Oct 31 00:46:53.419264 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Oct 31 00:46:53.421765 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Oct 31 00:46:53.423669 systemd[1]: Reached target sockets.target - Socket Units.
Oct 31 00:46:53.425290 systemd[1]: Reached target basic.target - Basic System.
Oct 31 00:46:53.426854 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Oct 31 00:46:53.426893 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Oct 31 00:46:53.428304 systemd[1]: Starting containerd.service - containerd container runtime...
Oct 31 00:46:53.431442 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Oct 31 00:46:53.434588 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Oct 31 00:46:53.438657 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Oct 31 00:46:53.440544 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Oct 31 00:46:53.444577 lvm[1429]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 31 00:46:53.444713 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Oct 31 00:46:53.447935 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Oct 31 00:46:53.451542 jq[1432]: false
Oct 31 00:46:53.451585 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Oct 31 00:46:53.460580 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Oct 31 00:46:53.466979 extend-filesystems[1433]: Found loop3
Oct 31 00:46:53.466979 extend-filesystems[1433]: Found loop4
Oct 31 00:46:53.466979 extend-filesystems[1433]: Found loop5
Oct 31 00:46:53.466979 extend-filesystems[1433]: Found sr0
Oct 31 00:46:53.466979 extend-filesystems[1433]: Found vda
Oct 31 00:46:53.466979 extend-filesystems[1433]: Found vda1
Oct 31 00:46:53.466979 extend-filesystems[1433]: Found vda2
Oct 31 00:46:53.466979 extend-filesystems[1433]: Found vda3
Oct 31 00:46:53.466979 extend-filesystems[1433]: Found usr
Oct 31 00:46:53.466979 extend-filesystems[1433]: Found vda4
Oct 31 00:46:53.466979 extend-filesystems[1433]: Found vda6
Oct 31 00:46:53.466979 extend-filesystems[1433]: Found vda7
Oct 31 00:46:53.466979 extend-filesystems[1433]: Found vda9
Oct 31 00:46:53.466979 extend-filesystems[1433]: Checking size of /dev/vda9
Oct 31 00:46:53.580022 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Oct 31 00:46:53.580057 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1371)
Oct 31 00:46:53.580077 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Oct 31 00:46:53.471496 dbus-daemon[1431]: [system] SELinux support is enabled
Oct 31 00:46:53.479030 systemd[1]: Starting systemd-logind.service - User Login Management...
Oct 31 00:46:53.590714 extend-filesystems[1433]: Resized partition /dev/vda9
Oct 31 00:46:53.482262 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Oct 31 00:46:53.593630 extend-filesystems[1448]: resize2fs 1.47.1 (20-May-2024)
Oct 31 00:46:53.593630 extend-filesystems[1448]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Oct 31 00:46:53.593630 extend-filesystems[1448]: old_desc_blocks = 1, new_desc_blocks = 1
Oct 31 00:46:53.593630 extend-filesystems[1448]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Oct 31 00:46:53.483581 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Oct 31 00:46:53.623342 extend-filesystems[1433]: Resized filesystem in /dev/vda9
Oct 31 00:46:53.487899 systemd[1]: Starting update-engine.service - Update Engine...
Oct 31 00:46:53.626503 jq[1452]: true
Oct 31 00:46:53.491447 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Oct 31 00:46:53.626788 update_engine[1450]: I20251031 00:46:53.588770 1450 main.cc:92] Flatcar Update Engine starting
Oct 31 00:46:53.626788 update_engine[1450]: I20251031 00:46:53.592587 1450 update_check_scheduler.cc:74] Next update check in 3m28s
Oct 31 00:46:53.496933 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Oct 31 00:46:53.634195 tar[1456]: linux-amd64/LICENSE
Oct 31 00:46:53.634195 tar[1456]: linux-amd64/helm
Oct 31 00:46:53.506738 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Oct 31 00:46:53.518053 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Oct 31 00:46:53.634718 jq[1458]: true
Oct 31 00:46:53.518278 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Oct 31 00:46:53.518704 systemd[1]: motdgen.service: Deactivated successfully.
Oct 31 00:46:53.519011 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Oct 31 00:46:53.526884 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Oct 31 00:46:53.527102 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Oct 31 00:46:53.566854 systemd[1]: extend-filesystems.service: Deactivated successfully.
Oct 31 00:46:53.568702 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Oct 31 00:46:53.587855 (ntainerd)[1459]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Oct 31 00:46:53.593361 systemd[1]: Started update-engine.service - Update Engine.
Oct 31 00:46:53.594175 systemd-logind[1449]: Watching system buttons on /dev/input/event1 (Power Button)
Oct 31 00:46:53.594196 systemd-logind[1449]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Oct 31 00:46:53.596242 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Oct 31 00:46:53.596271 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Oct 31 00:46:53.596724 systemd-logind[1449]: New seat seat0.
Oct 31 00:46:53.621452 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Oct 31 00:46:53.621486 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
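extend-filesystems above grows the ROOT filesystem in place: resize2fs reports that /dev/vda9 is mounted on / and performs an on-line resize from 553472 to 1864699 blocks. The operative step reduces to a single resize2fs invocation; a sketch, assuming resize2fs is on PATH and the target device is /dev/vda9 as in this log:

# Sketch: on-line grow of a mounted ext4 filesystem to fill its device.
import subprocess

def grow_filesystem(device="/dev/vda9"):
    # With no explicit size argument, resize2fs grows to the device size;
    # for a mounted ext4 filesystem the resize happens on-line.
    subprocess.run(["resize2fs", device], check=True)

if __name__ == "__main__":
    grow_filesystem()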
Oct 31 00:46:53.633659 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 31 00:46:53.638294 systemd[1]: Started systemd-logind.service - User Login Management. Oct 31 00:46:53.669661 locksmithd[1470]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 31 00:46:53.892476 bash[1492]: Updated "/home/core/.ssh/authorized_keys" Oct 31 00:46:53.895035 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 31 00:46:53.900261 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Oct 31 00:46:53.927342 sshd_keygen[1454]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 31 00:46:54.067577 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 31 00:46:54.078937 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 31 00:46:54.094128 systemd[1]: issuegen.service: Deactivated successfully. Oct 31 00:46:54.094488 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 31 00:46:54.105951 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 31 00:46:54.141471 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 31 00:46:54.155734 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 31 00:46:54.158954 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Oct 31 00:46:54.161240 systemd[1]: Reached target getty.target - Login Prompts. Oct 31 00:46:54.405510 containerd[1459]: time="2025-10-31T00:46:54.405374082Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Oct 31 00:46:54.438078 containerd[1459]: time="2025-10-31T00:46:54.437923494Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 31 00:46:54.475530 containerd[1459]: time="2025-10-31T00:46:54.475451543Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 31 00:46:54.475530 containerd[1459]: time="2025-10-31T00:46:54.475525481Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 31 00:46:54.475678 containerd[1459]: time="2025-10-31T00:46:54.475566388Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 31 00:46:54.475884 containerd[1459]: time="2025-10-31T00:46:54.475849779Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Oct 31 00:46:54.475884 containerd[1459]: time="2025-10-31T00:46:54.475874476Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Oct 31 00:46:54.476031 containerd[1459]: time="2025-10-31T00:46:54.476005291Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Oct 31 00:46:54.476057 containerd[1459]: time="2025-10-31T00:46:54.476030268Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 31 00:46:54.476442 containerd[1459]: time="2025-10-31T00:46:54.476417494Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 31 00:46:54.476475 containerd[1459]: time="2025-10-31T00:46:54.476444855Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Oct 31 00:46:54.476475 containerd[1459]: time="2025-10-31T00:46:54.476461587Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Oct 31 00:46:54.476475 containerd[1459]: time="2025-10-31T00:46:54.476472467Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 31 00:46:54.476680 containerd[1459]: time="2025-10-31T00:46:54.476619854Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 31 00:46:54.477043 containerd[1459]: time="2025-10-31T00:46:54.477013382Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 31 00:46:54.477214 containerd[1459]: time="2025-10-31T00:46:54.477186446Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 31 00:46:54.477214 containerd[1459]: time="2025-10-31T00:46:54.477208558Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 31 00:46:54.477427 containerd[1459]: time="2025-10-31T00:46:54.477391901Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 31 00:46:54.477529 containerd[1459]: time="2025-10-31T00:46:54.477507308Z" level=info msg="metadata content store policy set" policy=shared Oct 31 00:46:54.486817 containerd[1459]: time="2025-10-31T00:46:54.486737648Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 31 00:46:54.486817 containerd[1459]: time="2025-10-31T00:46:54.486828017Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 31 00:46:54.487015 containerd[1459]: time="2025-10-31T00:46:54.486852503Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Oct 31 00:46:54.487015 containerd[1459]: time="2025-10-31T00:46:54.486872972Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Oct 31 00:46:54.487015 containerd[1459]: time="2025-10-31T00:46:54.486892338Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 31 00:46:54.487269 containerd[1459]: time="2025-10-31T00:46:54.487147687Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 31 00:46:54.487485 containerd[1459]: time="2025-10-31T00:46:54.487453601Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Oct 31 00:46:54.487653 containerd[1459]: time="2025-10-31T00:46:54.487607930Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Oct 31 00:46:54.487653 containerd[1459]: time="2025-10-31T00:46:54.487652093Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Oct 31 00:46:54.487653 containerd[1459]: time="2025-10-31T00:46:54.487674174Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Oct 31 00:46:54.487813 containerd[1459]: time="2025-10-31T00:46:54.487690395Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 31 00:46:54.487813 containerd[1459]: time="2025-10-31T00:46:54.487710472Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 31 00:46:54.487813 containerd[1459]: time="2025-10-31T00:46:54.487726322Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 31 00:46:54.487813 containerd[1459]: time="2025-10-31T00:46:54.487742803Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 31 00:46:54.487813 containerd[1459]: time="2025-10-31T00:46:54.487758703Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 31 00:46:54.487813 containerd[1459]: time="2025-10-31T00:46:54.487773871Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 31 00:46:54.487813 containerd[1459]: time="2025-10-31T00:46:54.487788719Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Oct 31 00:46:54.487813 containerd[1459]: time="2025-10-31T00:46:54.487799960Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 31 00:46:54.488186 containerd[1459]: time="2025-10-31T00:46:54.487828504Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 31 00:46:54.488186 containerd[1459]: time="2025-10-31T00:46:54.487842179Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 31 00:46:54.488186 containerd[1459]: time="2025-10-31T00:46:54.487854002Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 31 00:46:54.488186 containerd[1459]: time="2025-10-31T00:46:54.487866054Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Oct 31 00:46:54.488186 containerd[1459]: time="2025-10-31T00:46:54.487880221Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 31 00:46:54.488186 containerd[1459]: time="2025-10-31T00:46:54.487892824Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 31 00:46:54.488186 containerd[1459]: time="2025-10-31T00:46:54.487904386Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 31 00:46:54.488186 containerd[1459]: time="2025-10-31T00:46:54.487918693Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Oct 31 00:46:54.488186 containerd[1459]: time="2025-10-31T00:46:54.487932088Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Oct 31 00:46:54.488186 containerd[1459]: time="2025-10-31T00:46:54.487948829Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Oct 31 00:46:54.488186 containerd[1459]: time="2025-10-31T00:46:54.487971031Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 31 00:46:54.488186 containerd[1459]: time="2025-10-31T00:46:54.488000887Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Oct 31 00:46:54.488186 containerd[1459]: time="2025-10-31T00:46:54.488020454Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 31 00:46:54.488186 containerd[1459]: time="2025-10-31T00:46:54.488047845Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Oct 31 00:46:54.488186 containerd[1459]: time="2025-10-31T00:46:54.488075066Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Oct 31 00:46:54.488526 containerd[1459]: time="2025-10-31T00:46:54.488087429Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 31 00:46:54.488526 containerd[1459]: time="2025-10-31T00:46:54.488098220Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 31 00:46:54.488526 containerd[1459]: time="2025-10-31T00:46:54.488174352Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 31 00:46:54.488526 containerd[1459]: time="2025-10-31T00:46:54.488196995Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Oct 31 00:46:54.488526 containerd[1459]: time="2025-10-31T00:46:54.488208637Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 31 00:46:54.488526 containerd[1459]: time="2025-10-31T00:46:54.488221421Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Oct 31 00:46:54.488526 containerd[1459]: time="2025-10-31T00:46:54.488230878Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 31 00:46:54.488526 containerd[1459]: time="2025-10-31T00:46:54.488246888Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Oct 31 00:46:54.488526 containerd[1459]: time="2025-10-31T00:46:54.488257759Z" level=info msg="NRI interface is disabled by configuration." Oct 31 00:46:54.488526 containerd[1459]: time="2025-10-31T00:46:54.488270282Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Oct 31 00:46:54.489116 containerd[1459]: time="2025-10-31T00:46:54.488732259Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 31 00:46:54.489116 containerd[1459]: time="2025-10-31T00:46:54.489110358Z" level=info msg="Connect containerd service" Oct 31 00:46:54.489116 containerd[1459]: time="2025-10-31T00:46:54.489195087Z" level=info msg="using legacy CRI server" Oct 31 00:46:54.489116 containerd[1459]: time="2025-10-31T00:46:54.489210636Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 31 00:46:54.489542 containerd[1459]: time="2025-10-31T00:46:54.489445777Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 31 00:46:54.490286 containerd[1459]: time="2025-10-31T00:46:54.490247861Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 31 00:46:54.490653 
containerd[1459]: time="2025-10-31T00:46:54.490507337Z" level=info msg="Start subscribing containerd event" Oct 31 00:46:54.490709 containerd[1459]: time="2025-10-31T00:46:54.490690401Z" level=info msg="Start recovering state" Oct 31 00:46:54.490836 containerd[1459]: time="2025-10-31T00:46:54.490809224Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 31 00:46:54.490862 containerd[1459]: time="2025-10-31T00:46:54.490838078Z" level=info msg="Start event monitor" Oct 31 00:46:54.491100 containerd[1459]: time="2025-10-31T00:46:54.490872633Z" level=info msg="Start snapshots syncer" Oct 31 00:46:54.491100 containerd[1459]: time="2025-10-31T00:46:54.490882611Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 31 00:46:54.491100 containerd[1459]: time="2025-10-31T00:46:54.491043924Z" level=info msg="Start cni network conf syncer for default" Oct 31 00:46:54.491100 containerd[1459]: time="2025-10-31T00:46:54.491062108Z" level=info msg="Start streaming server" Oct 31 00:46:54.491233 containerd[1459]: time="2025-10-31T00:46:54.491212901Z" level=info msg="containerd successfully booted in 0.087149s" Oct 31 00:46:54.491722 systemd[1]: Started containerd.service - containerd container runtime. Oct 31 00:46:54.632839 tar[1456]: linux-amd64/README.md Oct 31 00:46:54.653971 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 31 00:46:55.044684 systemd-networkd[1388]: eth0: Gained IPv6LL Oct 31 00:46:55.048352 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 31 00:46:55.060956 systemd[1]: Reached target network-online.target - Network is Online. Oct 31 00:46:55.072802 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Oct 31 00:46:55.076028 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 31 00:46:55.079024 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 31 00:46:55.102428 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 31 00:46:55.103321 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Oct 31 00:46:55.106122 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 31 00:46:55.150766 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 31 00:46:56.187261 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 31 00:46:56.199735 systemd[1]: Started sshd@0-10.0.0.137:22-10.0.0.1:59312.service - OpenSSH per-connection server daemon (10.0.0.1:59312). Oct 31 00:46:56.287747 sshd[1540]: Accepted publickey for core from 10.0.0.1 port 59312 ssh2: RSA SHA256:cVXqL/AcZ9wouFvGoeGKDlBlR+czTkkJFN8I4b76Y5g Oct 31 00:46:56.290638 sshd[1540]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:46:56.302935 systemd-logind[1449]: New session 1 of user core. Oct 31 00:46:56.304861 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 31 00:46:56.335273 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 31 00:46:56.376537 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 31 00:46:56.389741 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 31 00:46:56.395090 (systemd)[1544]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 31 00:46:56.516731 systemd[1544]: Queued start job for default target default.target. 
Oct 31 00:46:56.540348 systemd[1544]: Created slice app.slice - User Application Slice. Oct 31 00:46:56.540383 systemd[1544]: Reached target paths.target - Paths. Oct 31 00:46:56.540416 systemd[1544]: Reached target timers.target - Timers. Oct 31 00:46:56.542301 systemd[1544]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 31 00:46:56.586074 systemd[1544]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 31 00:46:56.586241 systemd[1544]: Reached target sockets.target - Sockets. Oct 31 00:46:56.586264 systemd[1544]: Reached target basic.target - Basic System. Oct 31 00:46:56.586316 systemd[1544]: Reached target default.target - Main User Target. Oct 31 00:46:56.586360 systemd[1544]: Startup finished in 183ms. Oct 31 00:46:56.586494 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 31 00:46:56.598523 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 31 00:46:56.662684 systemd[1]: Started sshd@1-10.0.0.137:22-10.0.0.1:59314.service - OpenSSH per-connection server daemon (10.0.0.1:59314). Oct 31 00:46:56.866956 sshd[1555]: Accepted publickey for core from 10.0.0.1 port 59314 ssh2: RSA SHA256:cVXqL/AcZ9wouFvGoeGKDlBlR+czTkkJFN8I4b76Y5g Oct 31 00:46:56.870176 sshd[1555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:46:56.875702 systemd-logind[1449]: New session 2 of user core. Oct 31 00:46:56.876834 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 00:46:56.881791 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 31 00:46:56.883540 (kubelet)[1561]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 31 00:46:56.884032 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 31 00:46:56.886361 systemd[1]: Startup finished in 1.095s (kernel) + 7.827s (initrd) + 6.435s (userspace) = 15.358s. Oct 31 00:46:56.978248 sshd[1555]: pam_unix(sshd:session): session closed for user core Oct 31 00:46:56.991149 systemd[1]: sshd@1-10.0.0.137:22-10.0.0.1:59314.service: Deactivated successfully. Oct 31 00:46:56.993995 systemd[1]: session-2.scope: Deactivated successfully. Oct 31 00:46:56.995002 systemd-logind[1449]: Session 2 logged out. Waiting for processes to exit. Oct 31 00:46:57.002514 systemd[1]: Started sshd@2-10.0.0.137:22-10.0.0.1:59330.service - OpenSSH per-connection server daemon (10.0.0.1:59330). Oct 31 00:46:57.004466 systemd-logind[1449]: Removed session 2. Oct 31 00:46:57.037952 sshd[1569]: Accepted publickey for core from 10.0.0.1 port 59330 ssh2: RSA SHA256:cVXqL/AcZ9wouFvGoeGKDlBlR+czTkkJFN8I4b76Y5g Oct 31 00:46:57.039931 sshd[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:46:57.044576 systemd-logind[1449]: New session 3 of user core. Oct 31 00:46:57.047281 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 31 00:46:57.100731 sshd[1569]: pam_unix(sshd:session): session closed for user core Oct 31 00:46:57.165232 systemd[1]: sshd@2-10.0.0.137:22-10.0.0.1:59330.service: Deactivated successfully. Oct 31 00:46:57.167572 systemd[1]: session-3.scope: Deactivated successfully. Oct 31 00:46:57.172156 systemd-logind[1449]: Session 3 logged out. Waiting for processes to exit. Oct 31 00:46:57.182781 systemd[1]: Started sshd@3-10.0.0.137:22-10.0.0.1:59332.service - OpenSSH per-connection server daemon (10.0.0.1:59332). Oct 31 00:46:57.183702 systemd-logind[1449]: Removed session 3. 
Oct 31 00:46:57.214378 sshd[1579]: Accepted publickey for core from 10.0.0.1 port 59332 ssh2: RSA SHA256:cVXqL/AcZ9wouFvGoeGKDlBlR+czTkkJFN8I4b76Y5g Oct 31 00:46:57.216467 sshd[1579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:46:57.221808 systemd-logind[1449]: New session 4 of user core. Oct 31 00:46:57.228665 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 31 00:46:57.284067 sshd[1579]: pam_unix(sshd:session): session closed for user core Oct 31 00:46:57.326018 systemd[1]: sshd@3-10.0.0.137:22-10.0.0.1:59332.service: Deactivated successfully. Oct 31 00:46:57.330210 systemd[1]: session-4.scope: Deactivated successfully. Oct 31 00:46:57.332231 systemd-logind[1449]: Session 4 logged out. Waiting for processes to exit. Oct 31 00:46:57.338966 systemd[1]: Started sshd@4-10.0.0.137:22-10.0.0.1:59338.service - OpenSSH per-connection server daemon (10.0.0.1:59338). Oct 31 00:46:57.342940 systemd-logind[1449]: Removed session 4. Oct 31 00:46:57.376454 sshd[1586]: Accepted publickey for core from 10.0.0.1 port 59338 ssh2: RSA SHA256:cVXqL/AcZ9wouFvGoeGKDlBlR+czTkkJFN8I4b76Y5g Oct 31 00:46:57.378788 sshd[1586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:46:57.385621 systemd-logind[1449]: New session 5 of user core. Oct 31 00:46:57.397729 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 31 00:46:57.466207 sudo[1596]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 31 00:46:57.466556 sudo[1596]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 31 00:46:57.485710 sudo[1596]: pam_unix(sudo:session): session closed for user root Oct 31 00:46:57.488481 sshd[1586]: pam_unix(sshd:session): session closed for user core Oct 31 00:46:57.500279 systemd[1]: sshd@4-10.0.0.137:22-10.0.0.1:59338.service: Deactivated successfully. Oct 31 00:46:57.502617 systemd[1]: session-5.scope: Deactivated successfully. Oct 31 00:46:57.504495 systemd-logind[1449]: Session 5 logged out. Waiting for processes to exit. Oct 31 00:46:57.513928 systemd[1]: Started sshd@5-10.0.0.137:22-10.0.0.1:59340.service - OpenSSH per-connection server daemon (10.0.0.1:59340). Oct 31 00:46:57.515537 systemd-logind[1449]: Removed session 5. Oct 31 00:46:57.549049 sshd[1601]: Accepted publickey for core from 10.0.0.1 port 59340 ssh2: RSA SHA256:cVXqL/AcZ9wouFvGoeGKDlBlR+czTkkJFN8I4b76Y5g Oct 31 00:46:57.551210 sshd[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:46:57.556500 systemd-logind[1449]: New session 6 of user core. Oct 31 00:46:57.602785 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 31 00:46:57.659212 sudo[1605]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 31 00:46:57.659591 sudo[1605]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 31 00:46:57.664116 sudo[1605]: pam_unix(sudo:session): session closed for user root Oct 31 00:46:57.672192 sudo[1604]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 31 00:46:57.672635 sudo[1604]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 31 00:46:57.696353 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Oct 31 00:46:57.697423 auditctl[1608]: No rules Oct 31 00:46:57.698778 systemd[1]: audit-rules.service: Deactivated successfully. 
Oct 31 00:46:57.699063 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Oct 31 00:46:57.701216 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Oct 31 00:46:57.743996 augenrules[1627]: No rules Oct 31 00:46:57.745549 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Oct 31 00:46:57.747081 sudo[1604]: pam_unix(sudo:session): session closed for user root Oct 31 00:46:57.749245 sshd[1601]: pam_unix(sshd:session): session closed for user core Oct 31 00:46:57.767326 systemd[1]: sshd@5-10.0.0.137:22-10.0.0.1:59340.service: Deactivated successfully. Oct 31 00:46:57.769506 systemd[1]: session-6.scope: Deactivated successfully. Oct 31 00:46:57.771638 systemd-logind[1449]: Session 6 logged out. Waiting for processes to exit. Oct 31 00:46:57.789111 systemd[1]: Started sshd@6-10.0.0.137:22-10.0.0.1:59350.service - OpenSSH per-connection server daemon (10.0.0.1:59350). Oct 31 00:46:57.790787 systemd-logind[1449]: Removed session 6. Oct 31 00:46:57.825252 sshd[1635]: Accepted publickey for core from 10.0.0.1 port 59350 ssh2: RSA SHA256:cVXqL/AcZ9wouFvGoeGKDlBlR+czTkkJFN8I4b76Y5g Oct 31 00:46:57.827230 sshd[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:46:57.832326 systemd-logind[1449]: New session 7 of user core. Oct 31 00:46:57.836553 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 31 00:46:57.860081 kubelet[1561]: E1031 00:46:57.859997 1561 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 31 00:46:57.865046 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 31 00:46:57.865286 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 31 00:46:57.865683 systemd[1]: kubelet.service: Consumed 2.539s CPU time. Oct 31 00:46:57.893654 sudo[1640]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 31 00:46:57.894154 sudo[1640]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 31 00:46:58.216922 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 31 00:46:58.217960 (dockerd)[1658]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 31 00:46:58.513860 dockerd[1658]: time="2025-10-31T00:46:58.513677688Z" level=info msg="Starting up" Oct 31 00:46:59.235530 dockerd[1658]: time="2025-10-31T00:46:59.235471629Z" level=info msg="Loading containers: start." Oct 31 00:46:59.356436 kernel: Initializing XFRM netlink socket Oct 31 00:46:59.435600 systemd-networkd[1388]: docker0: Link UP Oct 31 00:46:59.459289 dockerd[1658]: time="2025-10-31T00:46:59.459233082Z" level=info msg="Loading containers: done." 
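The audit-rules cycle above (rule files removed via sudo, the service restarted, auditctl and augenrules both reporting "No rules") matches standard auditd tooling. A sketch of the same cycle done by hand, which may differ from what Flatcar's audit-rules.service runs internally:

    rm -f /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
    augenrules --load   # assemble /etc/audit/rules.d/*.rules and load the result
    auditctl -l         # list the live ruleset; here it prints "No rules"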
Oct 31 00:46:59.476852 dockerd[1658]: time="2025-10-31T00:46:59.476783395Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 31 00:46:59.477080 dockerd[1658]: time="2025-10-31T00:46:59.476931162Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Oct 31 00:46:59.477106 dockerd[1658]: time="2025-10-31T00:46:59.477084500Z" level=info msg="Daemon has completed initialization" Oct 31 00:46:59.516292 dockerd[1658]: time="2025-10-31T00:46:59.516071424Z" level=info msg="API listen on /run/docker.sock" Oct 31 00:46:59.516992 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 31 00:47:00.317385 containerd[1459]: time="2025-10-31T00:47:00.317310994Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Oct 31 00:47:01.308544 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2190031185.mount: Deactivated successfully. Oct 31 00:47:03.100371 containerd[1459]: time="2025-10-31T00:47:03.100306835Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:47:03.101030 containerd[1459]: time="2025-10-31T00:47:03.100952987Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114893" Oct 31 00:47:03.102315 containerd[1459]: time="2025-10-31T00:47:03.102272070Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:47:03.107175 containerd[1459]: time="2025-10-31T00:47:03.107126083Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:47:03.108805 containerd[1459]: time="2025-10-31T00:47:03.108759657Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 2.791373912s" Oct 31 00:47:03.108850 containerd[1459]: time="2025-10-31T00:47:03.108822264Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\"" Oct 31 00:47:03.109482 containerd[1459]: time="2025-10-31T00:47:03.109451424Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Oct 31 00:47:04.714082 containerd[1459]: time="2025-10-31T00:47:04.713997541Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:47:04.714884 containerd[1459]: time="2025-10-31T00:47:04.714801138Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020844" Oct 31 00:47:04.716353 containerd[1459]: time="2025-10-31T00:47:04.716302644Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:47:04.720751 containerd[1459]: time="2025-10-31T00:47:04.720698678Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:47:04.722953 containerd[1459]: time="2025-10-31T00:47:04.722894586Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 1.6134049s" Oct 31 00:47:04.723021 containerd[1459]: time="2025-10-31T00:47:04.722952214Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\"" Oct 31 00:47:04.723569 containerd[1459]: time="2025-10-31T00:47:04.723532382Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Oct 31 00:47:06.261747 containerd[1459]: time="2025-10-31T00:47:06.261664960Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:47:06.262674 containerd[1459]: time="2025-10-31T00:47:06.262607788Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155568" Oct 31 00:47:06.263882 containerd[1459]: time="2025-10-31T00:47:06.263842343Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:47:06.267444 containerd[1459]: time="2025-10-31T00:47:06.267376852Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:47:06.268603 containerd[1459]: time="2025-10-31T00:47:06.268558708Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 1.544991701s" Oct 31 00:47:06.268697 containerd[1459]: time="2025-10-31T00:47:06.268607850Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\"" Oct 31 00:47:06.269282 containerd[1459]: time="2025-10-31T00:47:06.269110423Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Oct 31 00:47:07.602497 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount228042229.mount: Deactivated successfully. Oct 31 00:47:08.115621 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 31 00:47:08.124589 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 31 00:47:08.453549 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 31 00:47:08.459231 (kubelet)[1890]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 31 00:47:08.551608 kubelet[1890]: E1031 00:47:08.551521 1890 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 31 00:47:08.558879 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 31 00:47:08.559107 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 31 00:47:08.670664 containerd[1459]: time="2025-10-31T00:47:08.670593220Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:47:08.671574 containerd[1459]: time="2025-10-31T00:47:08.671527823Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929469" Oct 31 00:47:08.672677 containerd[1459]: time="2025-10-31T00:47:08.672632304Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:47:08.674741 containerd[1459]: time="2025-10-31T00:47:08.674703879Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:47:08.675292 containerd[1459]: time="2025-10-31T00:47:08.675254862Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 2.406107921s" Oct 31 00:47:08.675292 containerd[1459]: time="2025-10-31T00:47:08.675286241Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\"" Oct 31 00:47:08.675782 containerd[1459]: time="2025-10-31T00:47:08.675755281Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Oct 31 00:47:09.218226 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2193771074.mount: Deactivated successfully. 
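Both kubelet starts so far exit with "open /var/lib/kubelet/config.yaml: no such file or directory": on a kubeadm-style node that file is written during bootstrap, so the failures are expected until it completes. Purely as a shape reference, a minimal config consistent with settings this log shows elsewhere (systemd cgroup driver, static pods under /etc/kubernetes/manifests) might look like the sketch below; the values are illustrative, not what was actually written on this host.

    # Hypothetical minimal /var/lib/kubelet/config.yaml, for illustration only.
    cat <<'EOF' > /var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    staticPodPath: /etc/kubernetes/manifests
    EOF
    systemctl restart kubelet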
Oct 31 00:47:09.917256 containerd[1459]: time="2025-10-31T00:47:09.917192669Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:47:09.918037 containerd[1459]: time="2025-10-31T00:47:09.917977220Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Oct 31 00:47:09.919305 containerd[1459]: time="2025-10-31T00:47:09.919269604Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:47:09.922141 containerd[1459]: time="2025-10-31T00:47:09.922099912Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:47:09.923377 containerd[1459]: time="2025-10-31T00:47:09.923345047Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.24755969s" Oct 31 00:47:09.923377 containerd[1459]: time="2025-10-31T00:47:09.923373520Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Oct 31 00:47:09.923860 containerd[1459]: time="2025-10-31T00:47:09.923833984Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Oct 31 00:47:10.419311 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount775314800.mount: Deactivated successfully. 
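The PullImage/Pulled pairs above are containerd's CRI plugin fetching the control-plane images. Assuming crictl is installed (it does not appear in this log), the same image store can be inspected from the host:

    # Sketch: inspect the pulled images via the socket named in the CRI config.
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock images
    ctr --namespace k8s.io images ls   # containerd's own client, same store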
Oct 31 00:47:10.425656 containerd[1459]: time="2025-10-31T00:47:10.425605004Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:47:10.426534 containerd[1459]: time="2025-10-31T00:47:10.426434099Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Oct 31 00:47:10.427750 containerd[1459]: time="2025-10-31T00:47:10.427697388Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:47:10.430303 containerd[1459]: time="2025-10-31T00:47:10.430267407Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:47:10.431126 containerd[1459]: time="2025-10-31T00:47:10.431099267Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 507.232642ms" Oct 31 00:47:10.431199 containerd[1459]: time="2025-10-31T00:47:10.431133051Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Oct 31 00:47:10.432819 containerd[1459]: time="2025-10-31T00:47:10.432776012Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Oct 31 00:47:11.334713 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1698452290.mount: Deactivated successfully. Oct 31 00:47:13.494368 containerd[1459]: time="2025-10-31T00:47:13.494312766Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:47:13.495376 containerd[1459]: time="2025-10-31T00:47:13.495331406Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378433" Oct 31 00:47:13.496955 containerd[1459]: time="2025-10-31T00:47:13.496918002Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:47:13.500094 containerd[1459]: time="2025-10-31T00:47:13.500049174Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:47:13.501346 containerd[1459]: time="2025-10-31T00:47:13.501299449Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 3.068485406s" Oct 31 00:47:13.501424 containerd[1459]: time="2025-10-31T00:47:13.501351156Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Oct 31 00:47:15.703823 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 31 00:47:15.715640 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 31 00:47:15.740397 systemd[1]: Reloading requested from client PID 2040 ('systemctl') (unit session-7.scope)... Oct 31 00:47:15.740431 systemd[1]: Reloading... Oct 31 00:47:15.823441 zram_generator::config[2079]: No configuration found. Oct 31 00:47:16.096892 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 31 00:47:16.178663 systemd[1]: Reloading finished in 437 ms. Oct 31 00:47:16.230281 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 31 00:47:16.234068 systemd[1]: kubelet.service: Deactivated successfully. Oct 31 00:47:16.234411 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 00:47:16.236544 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 31 00:47:16.425278 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 00:47:16.431728 (kubelet)[2129]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 31 00:47:16.686283 kubelet[2129]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 31 00:47:16.686283 kubelet[2129]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 31 00:47:16.686283 kubelet[2129]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
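During the reload, systemd flags /usr/lib/systemd/system/docker.socket line 6 for referencing the legacy /var/run directory. A hedged sketch of silencing that warning with a drop-in instead of editing the vendor unit (the drop-in file name is illustrative; the empty ListenStream= clears the inherited value before setting the new path):

    mkdir -p /etc/systemd/system/docker.socket.d
    cat <<'EOF' > /etc/systemd/system/docker.socket.d/10-run-path.conf
    [Socket]
    ListenStream=
    ListenStream=/run/docker.sock
    EOF
    systemctl daemon-reload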
Oct 31 00:47:16.687025 kubelet[2129]: I1031 00:47:16.686288 2129 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 31 00:47:17.328924 kubelet[2129]: I1031 00:47:17.328835 2129 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Oct 31 00:47:17.328924 kubelet[2129]: I1031 00:47:17.328881 2129 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 31 00:47:17.329228 kubelet[2129]: I1031 00:47:17.329125 2129 server.go:956] "Client rotation is on, will bootstrap in background" Oct 31 00:47:17.353508 kubelet[2129]: E1031 00:47:17.353425 2129 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.137:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Oct 31 00:47:17.353874 kubelet[2129]: I1031 00:47:17.353603 2129 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 31 00:47:17.359780 kubelet[2129]: E1031 00:47:17.359735 2129 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Oct 31 00:47:17.359780 kubelet[2129]: I1031 00:47:17.359770 2129 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Oct 31 00:47:17.365414 kubelet[2129]: I1031 00:47:17.365352 2129 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 31 00:47:17.365778 kubelet[2129]: I1031 00:47:17.365736 2129 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 31 00:47:17.365979 kubelet[2129]: I1031 00:47:17.365768 2129 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 31 00:47:17.366095 kubelet[2129]: I1031 00:47:17.366000 2129 topology_manager.go:138] "Creating topology manager with none policy" Oct 31 00:47:17.366095 kubelet[2129]: I1031 00:47:17.366012 2129 container_manager_linux.go:303] "Creating device plugin manager" Oct 31 00:47:17.366929 kubelet[2129]: I1031 00:47:17.366902 2129 state_mem.go:36] "Initialized new in-memory state store" Oct 31 00:47:17.370302 kubelet[2129]: I1031 00:47:17.370267 2129 kubelet.go:480] "Attempting to sync node with API server" Oct 31 00:47:17.370416 kubelet[2129]: I1031 00:47:17.370375 2129 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 31 00:47:17.370455 kubelet[2129]: I1031 00:47:17.370435 2129 kubelet.go:386] "Adding apiserver pod source" Oct 31 00:47:17.370485 kubelet[2129]: I1031 00:47:17.370460 2129 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 31 00:47:17.382166 kubelet[2129]: E1031 00:47:17.381564 2129 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.137:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 31 00:47:17.382166 kubelet[2129]: E1031 00:47:17.381779 2129 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.137:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 31 
Oct 31 00:47:17.382432 kubelet[2129]: I1031 00:47:17.382386 2129 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Oct 31 00:47:17.384042 kubelet[2129]: I1031 00:47:17.384000 2129 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Oct 31 00:47:17.384963 kubelet[2129]: W1031 00:47:17.384928 2129 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Oct 31 00:47:17.388558 kubelet[2129]: I1031 00:47:17.388536 2129 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Oct 31 00:47:17.388624 kubelet[2129]: I1031 00:47:17.388606 2129 server.go:1289] "Started kubelet"
Oct 31 00:47:17.389141 kubelet[2129]: I1031 00:47:17.389063 2129 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 31 00:47:17.389389 kubelet[2129]: I1031 00:47:17.389333 2129 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Oct 31 00:47:17.389536 kubelet[2129]: I1031 00:47:17.389515 2129 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 31 00:47:17.390248 kubelet[2129]: I1031 00:47:17.390209 2129 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 31 00:47:17.392541 kubelet[2129]: I1031 00:47:17.391695 2129 server.go:317] "Adding debug handlers to kubelet server"
Oct 31 00:47:17.393563 kubelet[2129]: I1031 00:47:17.393538 2129 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Oct 31 00:47:17.396198 kubelet[2129]: E1031 00:47:17.395515 2129 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 31 00:47:17.396198 kubelet[2129]: I1031 00:47:17.395551 2129 volume_manager.go:297] "Starting Kubelet Volume Manager"
Oct 31 00:47:17.396198 kubelet[2129]: I1031 00:47:17.395735 2129 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Oct 31 00:47:17.396198 kubelet[2129]: I1031 00:47:17.395819 2129 reconciler.go:26] "Reconciler: start to sync state"
Oct 31 00:47:17.396198 kubelet[2129]: E1031 00:47:17.394670 2129 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.137:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.137:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18736ceb5e6195cd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-31 00:47:17.388563917 +0000 UTC m=+0.952098345,LastTimestamp:2025-10-31 00:47:17.388563917 +0000 UTC m=+0.952098345,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Oct 31 00:47:17.396198 kubelet[2129]: E1031 00:47:17.396155 2129 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Oct 31 00:47:17.397762 kubelet[2129]: I1031 00:47:17.397729 2129 factory.go:223] Registration of the systemd container factory successfully
Oct 31 00:47:17.397992 kubelet[2129]: I1031 00:47:17.397944 2129 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Oct 31 00:47:17.398311 kubelet[2129]: E1031 00:47:17.398242 2129 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.137:6443: connect: connection refused" interval="200ms"
Oct 31 00:47:17.399019 kubelet[2129]: E1031 00:47:17.398996 2129 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Oct 31 00:47:17.400152 kubelet[2129]: I1031 00:47:17.400130 2129 factory.go:223] Registration of the containerd container factory successfully
Oct 31 00:47:17.411943 kubelet[2129]: I1031 00:47:17.411861 2129 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Oct 31 00:47:17.413440 kubelet[2129]: I1031 00:47:17.413221 2129 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Oct 31 00:47:17.413440 kubelet[2129]: I1031 00:47:17.413254 2129 status_manager.go:230] "Starting to sync pod status with apiserver"
Oct 31 00:47:17.413440 kubelet[2129]: I1031 00:47:17.413282 2129 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Oct 31 00:47:17.413440 kubelet[2129]: I1031 00:47:17.413295 2129 kubelet.go:2436] "Starting kubelet main sync loop"
Oct 31 00:47:17.413440 kubelet[2129]: E1031 00:47:17.413368 2129 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Oct 31 00:47:17.418543 kubelet[2129]: I1031 00:47:17.418504 2129 cpu_manager.go:221] "Starting CPU manager" policy="none"
Oct 31 00:47:17.418543 kubelet[2129]: I1031 00:47:17.418532 2129 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Oct 31 00:47:17.418629 kubelet[2129]: I1031 00:47:17.418563 2129 state_mem.go:36] "Initialized new in-memory state store"
Oct 31 00:47:17.421637 kubelet[2129]: E1031 00:47:17.421591 2129 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.137:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Oct 31 00:47:17.496004 kubelet[2129]: E1031 00:47:17.495942 2129 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 31 00:47:17.514420 kubelet[2129]: E1031 00:47:17.514331 2129 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Oct 31 00:47:17.596752 kubelet[2129]: E1031 00:47:17.596612 2129 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 31 00:47:17.599474 kubelet[2129]: E1031 00:47:17.599434 2129 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.137:6443: connect: connection refused" interval="400ms"
Oct 31 00:47:17.696746 kubelet[2129]: E1031 00:47:17.696687 2129 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 31 00:47:17.714939 kubelet[2129]: E1031 00:47:17.714881 2129 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Oct 31 00:47:17.797439 kubelet[2129]: E1031 00:47:17.797356 2129 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 31 00:47:17.817320 kubelet[2129]: I1031 00:47:17.817236 2129 policy_none.go:49] "None policy: Start"
Oct 31 00:47:17.817320 kubelet[2129]: I1031 00:47:17.817284 2129 memory_manager.go:186] "Starting memorymanager" policy="None"
Oct 31 00:47:17.817320 kubelet[2129]: I1031 00:47:17.817314 2129 state_mem.go:35] "Initializing new in-memory state store"
Oct 31 00:47:17.826088 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Oct 31 00:47:17.842264 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Oct 31 00:47:17.850278 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Oct 31 00:47:17.863176 kubelet[2129]: E1031 00:47:17.863124 2129 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Oct 31 00:47:17.863551 kubelet[2129]: I1031 00:47:17.863486 2129 eviction_manager.go:189] "Eviction manager: starting control loop"
Oct 31 00:47:17.863551 kubelet[2129]: I1031 00:47:17.863523 2129 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Oct 31 00:47:17.863860 kubelet[2129]: I1031 00:47:17.863827 2129 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Oct 31 00:47:17.864774 kubelet[2129]: E1031 00:47:17.864744 2129 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Oct 31 00:47:17.864842 kubelet[2129]: E1031 00:47:17.864800 2129 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Oct 31 00:47:17.965774 kubelet[2129]: I1031 00:47:17.965728 2129 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Oct 31 00:47:17.966283 kubelet[2129]: E1031 00:47:17.966220 2129 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.137:6443/api/v1/nodes\": dial tcp 10.0.0.137:6443: connect: connection refused" node="localhost"
Oct 31 00:47:18.000434 kubelet[2129]: E1031 00:47:18.000345 2129 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.137:6443: connect: connection refused" interval="800ms"
Oct 31 00:47:18.126806 systemd[1]: Created slice kubepods-burstable-pod4f6656f684e82c74e79f467fbb335444.slice - libcontainer container kubepods-burstable-pod4f6656f684e82c74e79f467fbb335444.slice.
Oct 31 00:47:18.138789 kubelet[2129]: E1031 00:47:18.138730 2129 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Oct 31 00:47:18.141803 systemd[1]: Created slice kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice - libcontainer container kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice.
Oct 31 00:47:18.152796 kubelet[2129]: E1031 00:47:18.152636 2129 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Oct 31 00:47:18.155475 systemd[1]: Created slice kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice - libcontainer container kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice.
Oct 31 00:47:18.157175 kubelet[2129]: E1031 00:47:18.157143 2129 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Oct 31 00:47:18.168522 kubelet[2129]: I1031 00:47:18.168497 2129 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Oct 31 00:47:18.168990 kubelet[2129]: E1031 00:47:18.168948 2129 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.137:6443/api/v1/nodes\": dial tcp 10.0.0.137:6443: connect: connection refused" node="localhost"
Oct 31 00:47:18.200506 kubelet[2129]: I1031 00:47:18.200457 2129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Oct 31 00:47:18.200560 kubelet[2129]: I1031 00:47:18.200508 2129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Oct 31 00:47:18.200560 kubelet[2129]: I1031 00:47:18.200540 2129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost"
Oct 31 00:47:18.200658 kubelet[2129]: I1031 00:47:18.200565 2129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f6656f684e82c74e79f467fbb335444-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4f6656f684e82c74e79f467fbb335444\") " pod="kube-system/kube-apiserver-localhost"
Oct 31 00:47:18.200658 kubelet[2129]: I1031 00:47:18.200589 2129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f6656f684e82c74e79f467fbb335444-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4f6656f684e82c74e79f467fbb335444\") " pod="kube-system/kube-apiserver-localhost"
Oct 31 00:47:18.200658 kubelet[2129]: I1031 00:47:18.200620 2129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f6656f684e82c74e79f467fbb335444-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4f6656f684e82c74e79f467fbb335444\") " pod="kube-system/kube-apiserver-localhost"
Oct 31 00:47:18.200741 kubelet[2129]: I1031 00:47:18.200646 2129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Oct 31 00:47:18.200741 kubelet[2129]: I1031 00:47:18.200715 2129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Oct 31 00:47:18.200792 kubelet[2129]: I1031 00:47:18.200754 2129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Oct 31 00:47:18.337563 kubelet[2129]: E1031 00:47:18.337504 2129 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.137:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Oct 31 00:47:18.440045 kubelet[2129]: E1031 00:47:18.439893 2129 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:47:18.441143 containerd[1459]: time="2025-10-31T00:47:18.441069170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4f6656f684e82c74e79f467fbb335444,Namespace:kube-system,Attempt:0,}"
Oct 31 00:47:18.453303 kubelet[2129]: E1031 00:47:18.453247 2129 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:47:18.454011 containerd[1459]: time="2025-10-31T00:47:18.453948123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,}"
Oct 31 00:47:18.458284 kubelet[2129]: E1031 00:47:18.458237 2129 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:47:18.458882 containerd[1459]: time="2025-10-31T00:47:18.458840468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,}"
Oct 31 00:47:18.573711 kubelet[2129]: I1031 00:47:18.573654 2129 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Oct 31 00:47:18.574022 kubelet[2129]: E1031 00:47:18.573997 2129 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.137:6443/api/v1/nodes\": dial tcp 10.0.0.137:6443: connect: connection refused" node="localhost"
Oct 31 00:47:18.801292 kubelet[2129]: E1031 00:47:18.801114 2129 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.137:6443: connect: connection refused" interval="1.6s"
Oct 31 00:47:18.881493 kubelet[2129]: E1031 00:47:18.881439 2129 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Oct 31 00:47:18.901768 kubelet[2129]: E1031 00:47:18.901707 2129 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.137:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Oct 31 00:47:18.941886 kubelet[2129]: E1031 00:47:18.941832 2129 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.137:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Oct 31 00:47:19.217149 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3061725964.mount: Deactivated successfully.
Oct 31 00:47:19.227810 containerd[1459]: time="2025-10-31T00:47:19.227761990Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 31 00:47:19.228837 containerd[1459]: time="2025-10-31T00:47:19.228757106Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 31 00:47:19.229885 containerd[1459]: time="2025-10-31T00:47:19.229831581Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Oct 31 00:47:19.230809 containerd[1459]: time="2025-10-31T00:47:19.230758940Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 31 00:47:19.231623 containerd[1459]: time="2025-10-31T00:47:19.231574470Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Oct 31 00:47:19.232442 containerd[1459]: time="2025-10-31T00:47:19.232417150Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Oct 31 00:47:19.233577 containerd[1459]: time="2025-10-31T00:47:19.233543152Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 31 00:47:19.237721 containerd[1459]: time="2025-10-31T00:47:19.237671474Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 31 00:47:19.238672 containerd[1459]: time="2025-10-31T00:47:19.238636474Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 784.556434ms"
Oct 31 00:47:19.240174 containerd[1459]: time="2025-10-31T00:47:19.240150593Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 781.221138ms"
Oct 31 00:47:19.241760 containerd[1459]: time="2025-10-31T00:47:19.241723683Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 800.547032ms"
Oct 31 00:47:19.395275 kubelet[2129]: E1031 00:47:19.394980 2129 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.137:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Oct 31 00:47:19.395275 kubelet[2129]: I1031 00:47:19.395034 2129 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Oct 31 00:47:19.395450 kubelet[2129]: E1031 00:47:19.395325 2129 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.137:6443/api/v1/nodes\": dial tcp 10.0.0.137:6443: connect: connection refused" node="localhost"
Oct 31 00:47:19.615392 containerd[1459]: time="2025-10-31T00:47:19.615080450Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 31 00:47:19.615392 containerd[1459]: time="2025-10-31T00:47:19.615160360Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 31 00:47:19.615917 containerd[1459]: time="2025-10-31T00:47:19.615385773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 31 00:47:19.615917 containerd[1459]: time="2025-10-31T00:47:19.615643406Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 31 00:47:19.618913 containerd[1459]: time="2025-10-31T00:47:19.618820935Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 31 00:47:19.618913 containerd[1459]: time="2025-10-31T00:47:19.618904091Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 31 00:47:19.619009 containerd[1459]: time="2025-10-31T00:47:19.618919710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 31 00:47:19.619376 containerd[1459]: time="2025-10-31T00:47:19.618885867Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 31 00:47:19.619376 containerd[1459]: time="2025-10-31T00:47:19.619198784Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 31 00:47:19.619376 containerd[1459]: time="2025-10-31T00:47:19.619219803Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 31 00:47:19.619376 containerd[1459]: time="2025-10-31T00:47:19.619310784Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 31 00:47:19.620540 containerd[1459]: time="2025-10-31T00:47:19.619141797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 31 00:47:19.648596 systemd[1]: Started cri-containerd-319e6ef44175e715836f2068f21747b0b4033ef045b9e578c30b4ff0d6375d89.scope - libcontainer container 319e6ef44175e715836f2068f21747b0b4033ef045b9e578c30b4ff0d6375d89.
Oct 31 00:47:19.654024 systemd[1]: Started cri-containerd-2c5bc1cd7728b880026c4f372ff9eb8f12345e3a9919a5d8cf9f532b53019196.scope - libcontainer container 2c5bc1cd7728b880026c4f372ff9eb8f12345e3a9919a5d8cf9f532b53019196.
Oct 31 00:47:19.656890 systemd[1]: Started cri-containerd-6615fef7357428d92b1f741159b0df5e5933d82b98473d1f8d0f765354063c5b.scope - libcontainer container 6615fef7357428d92b1f741159b0df5e5933d82b98473d1f8d0f765354063c5b.
Oct 31 00:47:19.695436 kubelet[2129]: E1031 00:47:19.694384 2129 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.137:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.137:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18736ceb5e6195cd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-31 00:47:17.388563917 +0000 UTC m=+0.952098345,LastTimestamp:2025-10-31 00:47:17.388563917 +0000 UTC m=+0.952098345,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Oct 31 00:47:19.756745 containerd[1459]: time="2025-10-31T00:47:19.756572949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c5bc1cd7728b880026c4f372ff9eb8f12345e3a9919a5d8cf9f532b53019196\""
Oct 31 00:47:19.762930 kubelet[2129]: E1031 00:47:19.762863 2129 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:47:19.766788 containerd[1459]: time="2025-10-31T00:47:19.766741679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4f6656f684e82c74e79f467fbb335444,Namespace:kube-system,Attempt:0,} returns sandbox id \"6615fef7357428d92b1f741159b0df5e5933d82b98473d1f8d0f765354063c5b\""
Oct 31 00:47:19.767452 containerd[1459]: time="2025-10-31T00:47:19.767385056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"319e6ef44175e715836f2068f21747b0b4033ef045b9e578c30b4ff0d6375d89\""
Oct 31 00:47:19.767630 kubelet[2129]: E1031 00:47:19.767603 2129 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:47:19.769548 kubelet[2129]: E1031 00:47:19.769523 2129 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:47:19.782939 containerd[1459]: time="2025-10-31T00:47:19.782878201Z" level=info msg="CreateContainer within sandbox \"2c5bc1cd7728b880026c4f372ff9eb8f12345e3a9919a5d8cf9f532b53019196\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Oct 31 00:47:19.823758 containerd[1459]: time="2025-10-31T00:47:19.823715506Z" level=info msg="CreateContainer within sandbox \"6615fef7357428d92b1f741159b0df5e5933d82b98473d1f8d0f765354063c5b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Oct 31 00:47:19.827333 containerd[1459]: time="2025-10-31T00:47:19.827284539Z" level=info msg="CreateContainer within sandbox \"319e6ef44175e715836f2068f21747b0b4033ef045b9e578c30b4ff0d6375d89\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Oct 31 00:47:19.852216 containerd[1459]: time="2025-10-31T00:47:19.852149019Z" level=info msg="CreateContainer within sandbox \"2c5bc1cd7728b880026c4f372ff9eb8f12345e3a9919a5d8cf9f532b53019196\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c4517e43a33bac21555751c42e048d9b720f530d3e76910a33eef0d19596fa00\""
Oct 31 00:47:19.853088 containerd[1459]: time="2025-10-31T00:47:19.853030842Z" level=info msg="StartContainer for \"c4517e43a33bac21555751c42e048d9b720f530d3e76910a33eef0d19596fa00\""
Oct 31 00:47:19.854138 containerd[1459]: time="2025-10-31T00:47:19.854090359Z" level=info msg="CreateContainer within sandbox \"6615fef7357428d92b1f741159b0df5e5933d82b98473d1f8d0f765354063c5b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"696e5a5440dd21020d226beff3b74cc4735566db66d078d5b9716b0f3fa9a6d5\""
Oct 31 00:47:19.854800 containerd[1459]: time="2025-10-31T00:47:19.854747582Z" level=info msg="StartContainer for \"696e5a5440dd21020d226beff3b74cc4735566db66d078d5b9716b0f3fa9a6d5\""
Oct 31 00:47:19.859949 containerd[1459]: time="2025-10-31T00:47:19.859895817Z" level=info msg="CreateContainer within sandbox \"319e6ef44175e715836f2068f21747b0b4033ef045b9e578c30b4ff0d6375d89\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ec9866fac34f01aea517635950758ec107cb16755cd08e4ed5f4cf6afdc57f55\""
Oct 31 00:47:19.861868 containerd[1459]: time="2025-10-31T00:47:19.861837668Z" level=info msg="StartContainer for \"ec9866fac34f01aea517635950758ec107cb16755cd08e4ed5f4cf6afdc57f55\""
Oct 31 00:47:19.885602 systemd[1]: Started cri-containerd-696e5a5440dd21020d226beff3b74cc4735566db66d078d5b9716b0f3fa9a6d5.scope - libcontainer container 696e5a5440dd21020d226beff3b74cc4735566db66d078d5b9716b0f3fa9a6d5.
Oct 31 00:47:19.890147 systemd[1]: Started cri-containerd-c4517e43a33bac21555751c42e048d9b720f530d3e76910a33eef0d19596fa00.scope - libcontainer container c4517e43a33bac21555751c42e048d9b720f530d3e76910a33eef0d19596fa00.
Oct 31 00:47:19.895526 systemd[1]: Started cri-containerd-ec9866fac34f01aea517635950758ec107cb16755cd08e4ed5f4cf6afdc57f55.scope - libcontainer container ec9866fac34f01aea517635950758ec107cb16755cd08e4ed5f4cf6afdc57f55.
Oct 31 00:47:19.955041 kubelet[2129]: E1031 00:47:19.954982 2129 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.137:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Oct 31 00:47:19.997479 containerd[1459]: time="2025-10-31T00:47:19.995842094Z" level=info msg="StartContainer for \"696e5a5440dd21020d226beff3b74cc4735566db66d078d5b9716b0f3fa9a6d5\" returns successfully"
Oct 31 00:47:19.997479 containerd[1459]: time="2025-10-31T00:47:19.996056526Z" level=info msg="StartContainer for \"c4517e43a33bac21555751c42e048d9b720f530d3e76910a33eef0d19596fa00\" returns successfully"
Oct 31 00:47:20.009465 containerd[1459]: time="2025-10-31T00:47:20.007978364Z" level=info msg="StartContainer for \"ec9866fac34f01aea517635950758ec107cb16755cd08e4ed5f4cf6afdc57f55\" returns successfully"
Oct 31 00:47:20.440290 kubelet[2129]: E1031 00:47:20.440222 2129 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Oct 31 00:47:20.440569 kubelet[2129]: E1031 00:47:20.440540 2129 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:47:20.444839 kubelet[2129]: E1031 00:47:20.444801 2129 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Oct 31 00:47:20.444997 kubelet[2129]: E1031 00:47:20.444970 2129 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:47:20.447787 kubelet[2129]: E1031 00:47:20.447756 2129 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Oct 31 00:47:20.447896 kubelet[2129]: E1031 00:47:20.447870 2129 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:47:20.997638 kubelet[2129]: I1031 00:47:20.997588 2129 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Oct 31 00:47:21.242883 kubelet[2129]: E1031 00:47:21.242838 2129 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Oct 31 00:47:21.389067 kubelet[2129]: I1031 00:47:21.389016 2129 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Oct 31 00:47:21.389067 kubelet[2129]: E1031 00:47:21.389056 2129 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Oct 31 00:47:21.426555 kubelet[2129]: E1031 00:47:21.426505 2129 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 31 00:47:21.442014 kubelet[2129]: E1031 00:47:21.441971 2129 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Oct 31 00:47:21.442170 kubelet[2129]: E1031 00:47:21.442085 2129 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Oct 31 00:47:21.442170 kubelet[2129]: E1031 00:47:21.442095 2129 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:47:21.442235 kubelet[2129]: E1031 00:47:21.442188 2129 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:47:21.486905 kubelet[2129]: E1031 00:47:21.486868 2129 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Oct 31 00:47:21.487052 kubelet[2129]: E1031 00:47:21.487026 2129 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:47:21.527187 kubelet[2129]: E1031 00:47:21.527115 2129 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 31 00:47:21.627767 kubelet[2129]: E1031 00:47:21.627672 2129 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 31 00:47:21.728789 kubelet[2129]: E1031 00:47:21.728624 2129 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 31 00:47:21.829478 kubelet[2129]: E1031 00:47:21.829381 2129 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 31 00:47:21.930600 kubelet[2129]: E1031 00:47:21.930539 2129 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 31 00:47:22.031373 kubelet[2129]: E1031 00:47:22.031181 2129 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 31 00:47:22.132078 kubelet[2129]: E1031 00:47:22.132016 2129 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 31 00:47:22.232652 kubelet[2129]: E1031 00:47:22.232601 2129 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 31 00:47:22.333439 kubelet[2129]: E1031 00:47:22.333289 2129 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 31 00:47:22.433615 kubelet[2129]: E1031 00:47:22.433542 2129 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 31 00:47:22.534711 kubelet[2129]: E1031 00:47:22.534664 2129 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 31 00:47:22.635471 kubelet[2129]: E1031 00:47:22.635384 2129 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 31 00:47:22.736332 kubelet[2129]: E1031 00:47:22.736267 2129 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 31 00:47:22.836895 kubelet[2129]: E1031 00:47:22.836843 2129 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 31 00:47:22.938103 kubelet[2129]: E1031 00:47:22.937973 2129 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 31 00:47:23.038615 kubelet[2129]: E1031 00:47:23.038547 2129 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 31 00:47:23.139648 kubelet[2129]: E1031 00:47:23.139570 2129 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 31 00:47:23.240454 kubelet[2129]: E1031 00:47:23.240247 2129 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 31 00:47:23.341311 kubelet[2129]: E1031 00:47:23.341181 2129 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 31 00:47:23.442323 kubelet[2129]: E1031 00:47:23.442255 2129 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 31 00:47:23.543258 kubelet[2129]: E1031 00:47:23.543093 2129 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 31 00:47:23.643738 kubelet[2129]: E1031 00:47:23.643695 2129 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 31 00:47:23.744552 kubelet[2129]: E1031 00:47:23.744485 2129 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 31 00:47:23.845323 kubelet[2129]: E1031 00:47:23.845160 2129 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 31 00:47:23.882711 systemd[1]: Reloading requested from client PID 2418 ('systemctl') (unit session-7.scope)...
Oct 31 00:47:23.882729 systemd[1]: Reloading...
Oct 31 00:47:23.945773 kubelet[2129]: E1031 00:47:23.945573 2129 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 31 00:47:23.958499 zram_generator::config[2458]: No configuration found.
Oct 31 00:47:24.046224 kubelet[2129]: E1031 00:47:24.046182 2129 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 31 00:47:24.070114 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 31 00:47:24.146868 kubelet[2129]: E1031 00:47:24.146783 2129 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 31 00:47:24.171360 systemd[1]: Reloading finished in 288 ms.
Oct 31 00:47:24.223458 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 31 00:47:24.249014 systemd[1]: kubelet.service: Deactivated successfully.
Oct 31 00:47:24.249372 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 31 00:47:24.249472 systemd[1]: kubelet.service: Consumed 2.154s CPU time, 132.6M memory peak, 0B memory swap peak.
Oct 31 00:47:24.262680 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 31 00:47:24.448935 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 31 00:47:24.454847 (kubelet)[2502]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Oct 31 00:47:24.495359 kubelet[2502]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 31 00:47:24.495359 kubelet[2502]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Oct 31 00:47:24.495359 kubelet[2502]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 31 00:47:24.495752 kubelet[2502]: I1031 00:47:24.495417 2502 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 31 00:47:24.502324 kubelet[2502]: I1031 00:47:24.502293 2502 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Oct 31 00:47:24.502324 kubelet[2502]: I1031 00:47:24.502313 2502 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 31 00:47:24.502569 kubelet[2502]: I1031 00:47:24.502547 2502 server.go:956] "Client rotation is on, will bootstrap in background"
Oct 31 00:47:24.503731 kubelet[2502]: I1031 00:47:24.503707 2502 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Oct 31 00:47:24.505621 kubelet[2502]: I1031 00:47:24.505601 2502 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 31 00:47:24.510431 kubelet[2502]: E1031 00:47:24.508395 2502 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Oct 31 00:47:24.510431 kubelet[2502]: I1031 00:47:24.508449 2502 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Oct 31 00:47:24.513656 kubelet[2502]: I1031 00:47:24.513638 2502 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Oct 31 00:47:24.513954 kubelet[2502]: I1031 00:47:24.513920 2502 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 31 00:47:24.514147 kubelet[2502]: I1031 00:47:24.513952 2502 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Oct 31 00:47:24.514266 kubelet[2502]: I1031 00:47:24.514153 2502 topology_manager.go:138] "Creating topology manager with none policy"
Oct 31 00:47:24.514266 kubelet[2502]: I1031 00:47:24.514173 2502 container_manager_linux.go:303] "Creating device plugin manager"
Oct 31 00:47:24.514266 kubelet[2502]: I1031 00:47:24.514237 2502 state_mem.go:36] "Initialized new in-memory state store"
Oct 31 00:47:24.514478 kubelet[2502]: I1031 00:47:24.514457 2502 kubelet.go:480] "Attempting to sync node with API server"
Oct 31 00:47:24.514478 kubelet[2502]: I1031 00:47:24.514474 2502 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 31 00:47:24.514554 kubelet[2502]: I1031 00:47:24.514503 2502 kubelet.go:386] "Adding apiserver pod source"
Oct 31 00:47:24.514554 kubelet[2502]: I1031 00:47:24.514531 2502 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 31 00:47:24.517446 kubelet[2502]: I1031 00:47:24.517396 2502 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Oct 31 00:47:24.518615 kubelet[2502]: I1031 00:47:24.518576 2502 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Oct 31 00:47:24.528168 kubelet[2502]: I1031 00:47:24.527209 2502 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Oct 31 00:47:24.528577 kubelet[2502]: I1031 00:47:24.528208 2502 server.go:1289] "Started kubelet"
Oct 31 00:47:24.528999 kubelet[2502]: I1031 00:47:24.528962 2502 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Oct 31 00:47:24.530697 kubelet[2502]: I1031 00:47:24.530467 2502 server.go:317] "Adding debug handlers to kubelet server"
Oct 31 00:47:24.532449 kubelet[2502]: I1031 00:47:24.531286 2502 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 31 00:47:24.532449 kubelet[2502]: I1031 00:47:24.531655 2502 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 31 00:47:24.532900 kubelet[2502]: E1031 00:47:24.532862 2502 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Oct 31 00:47:24.533163 kubelet[2502]: I1031 00:47:24.533143 2502 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 31 00:47:24.534232 kubelet[2502]: I1031 00:47:24.533604 2502 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Oct 31 00:47:24.535629 kubelet[2502]: I1031 00:47:24.535171 2502 volume_manager.go:297] "Starting Kubelet Volume Manager"
Oct 31 00:47:24.535878 kubelet[2502]: I1031 00:47:24.535855 2502 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Oct 31 00:47:24.536135 kubelet[2502]: I1031 00:47:24.536107 2502 reconciler.go:26] "Reconciler: start to sync state"
Oct 31 00:47:24.536384 kubelet[2502]: I1031 00:47:24.536357 2502 factory.go:223] Registration of the systemd container factory successfully
Oct 31 00:47:24.536672 kubelet[2502]: I1031 00:47:24.536638 2502 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Oct 31 00:47:24.538900 kubelet[2502]: I1031 00:47:24.538866 2502 factory.go:223] Registration of the containerd container factory successfully
Oct 31 00:47:24.551389 kubelet[2502]: I1031 00:47:24.551333 2502 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Oct 31 00:47:24.552894 kubelet[2502]: I1031 00:47:24.552869 2502 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Oct 31 00:47:24.552894 kubelet[2502]: I1031 00:47:24.552890 2502 status_manager.go:230] "Starting to sync pod status with apiserver"
Oct 31 00:47:24.552972 kubelet[2502]: I1031 00:47:24.552912 2502 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Oct 31 00:47:24.552972 kubelet[2502]: I1031 00:47:24.552922 2502 kubelet.go:2436] "Starting kubelet main sync loop"
Oct 31 00:47:24.553020 kubelet[2502]: E1031 00:47:24.552972 2502 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Oct 31 00:47:24.581149 kubelet[2502]: I1031 00:47:24.581104 2502 cpu_manager.go:221] "Starting CPU manager" policy="none"
Oct 31 00:47:24.581149 kubelet[2502]: I1031 00:47:24.581138 2502 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Oct 31 00:47:24.581149 kubelet[2502]: I1031 00:47:24.581163 2502 state_mem.go:36] "Initialized new in-memory state store"
Oct 31 00:47:24.581334 kubelet[2502]: I1031 00:47:24.581295 2502 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Oct 31 00:47:24.581334 kubelet[2502]: I1031 00:47:24.581308 2502 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Oct 31 00:47:24.581334 kubelet[2502]: I1031 00:47:24.581324 2502 policy_none.go:49] "None policy: Start"
Oct 31 00:47:24.581334 kubelet[2502]: I1031 00:47:24.581333 2502 memory_manager.go:186] "Starting memorymanager" policy="None"
Oct 31 00:47:24.581540 kubelet[2502]: I1031 00:47:24.581343 2502 state_mem.go:35] "Initializing new in-memory state store"
Oct 31 00:47:24.581540 kubelet[2502]: I1031 00:47:24.581502 2502 state_mem.go:75] "Updated machine memory state"
Oct 31 00:47:24.587165 kubelet[2502]: E1031 00:47:24.586651 2502 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Oct 31 00:47:24.587165 kubelet[2502]: I1031 00:47:24.586808 2502 eviction_manager.go:189] "Eviction manager: starting control loop"
Oct 31 00:47:24.587165 kubelet[2502]: I1031 00:47:24.586818 2502 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Oct 31 00:47:24.587165 kubelet[2502]: I1031 00:47:24.587105 2502 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Oct 31 00:47:24.588380 kubelet[2502]: E1031 00:47:24.588362 2502 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Oct 31 00:47:24.654335 kubelet[2502]: I1031 00:47:24.654285 2502 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Oct 31 00:47:24.654573 kubelet[2502]: I1031 00:47:24.654554 2502 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Oct 31 00:47:24.654679 kubelet[2502]: I1031 00:47:24.654562 2502 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Oct 31 00:47:24.694387 kubelet[2502]: I1031 00:47:24.694349 2502 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Oct 31 00:47:24.702027 kubelet[2502]: I1031 00:47:24.701886 2502 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Oct 31 00:47:24.702027 kubelet[2502]: I1031 00:47:24.701997 2502 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Oct 31 00:47:24.837885 kubelet[2502]: I1031 00:47:24.837824 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Oct 31 00:47:24.837885 kubelet[2502]: I1031 00:47:24.837863 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f6656f684e82c74e79f467fbb335444-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4f6656f684e82c74e79f467fbb335444\") " pod="kube-system/kube-apiserver-localhost"
Oct 31 00:47:24.837885 kubelet[2502]: I1031 00:47:24.837884 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f6656f684e82c74e79f467fbb335444-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4f6656f684e82c74e79f467fbb335444\") " pod="kube-system/kube-apiserver-localhost"
Oct 31 00:47:24.837885 kubelet[2502]: I1031 00:47:24.837901 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Oct 31 00:47:24.838150 kubelet[2502]: I1031 00:47:24.837918 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Oct 31 00:47:24.838150 kubelet[2502]: I1031 00:47:24.838014 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Oct 31 00:47:24.838150 kubelet[2502]: I1031 00:47:24.838096 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Oct 31 00:47:24.838231 kubelet[2502]: I1031 00:47:24.838170 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost"
Oct 31 00:47:24.838231 kubelet[2502]: I1031 00:47:24.838190 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f6656f684e82c74e79f467fbb335444-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4f6656f684e82c74e79f467fbb335444\") " pod="kube-system/kube-apiserver-localhost"
Oct 31 00:47:24.965767 kubelet[2502]: E1031 00:47:24.965605 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:47:24.965767 kubelet[2502]: E1031 00:47:24.965697 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:47:24.965921 kubelet[2502]: E1031 00:47:24.965629 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:47:25.517025 kubelet[2502]: I1031 00:47:25.516974 2502 apiserver.go:52] "Watching apiserver"
Oct 31 00:47:25.536558 kubelet[2502]: I1031 00:47:25.536450 2502 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Oct 31 00:47:25.565956 kubelet[2502]: I1031 00:47:25.565693 2502 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Oct 31 00:47:25.565956 kubelet[2502]: I1031 00:47:25.565703 2502 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Oct 31 00:47:25.565956 kubelet[2502]: I1031 00:47:25.565944 2502 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Oct 31 00:47:25.940833 kubelet[2502]: E1031 00:47:25.940593 2502 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Oct 31 00:47:25.940833 kubelet[2502]: E1031 00:47:25.940831 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:47:25.941292 kubelet[2502]: E1031 00:47:25.940973 2502 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Oct 31 00:47:25.941292 kubelet[2502]: E1031 00:47:25.941175 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:47:25.941439 kubelet[2502]: E1031 00:47:25.941320 2502 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Oct 31 00:47:25.941521 kubelet[2502]: E1031 00:47:25.941454 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:47:25.982441 kubelet[2502]: I1031 00:47:25.982327 2502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.9822843300000001 podStartE2EDuration="1.98228433s" podCreationTimestamp="2025-10-31 00:47:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 00:47:25.971536605 +0000 UTC m=+1.511084785" watchObservedRunningTime="2025-10-31 00:47:25.98228433 +0000 UTC m=+1.521832490"
Oct 31 00:47:26.014306 kubelet[2502]: I1031 00:47:26.014042 2502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.014010819 podStartE2EDuration="2.014010819s" podCreationTimestamp="2025-10-31 00:47:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 00:47:25.986053779 +0000 UTC m=+1.525601950" watchObservedRunningTime="2025-10-31 00:47:26.014010819 +0000 UTC m=+1.553558979"
Oct 31 00:47:26.026492 kubelet[2502]: I1031 00:47:26.026059 2502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.02603712 podStartE2EDuration="2.02603712s" podCreationTimestamp="2025-10-31 00:47:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 00:47:26.015038139 +0000 UTC m=+1.554586299" watchObservedRunningTime="2025-10-31 00:47:26.02603712 +0000 UTC m=+1.565585280"
Oct 31 00:47:26.566804 kubelet[2502]: E1031 00:47:26.566723 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:47:26.567425 kubelet[2502]: E1031 00:47:26.566913 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:47:26.567425 kubelet[2502]: E1031 00:47:26.566986 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:47:27.568233 kubelet[2502]: E1031 00:47:27.568170 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:47:30.557847 kubelet[2502]: I1031 00:47:30.557811 2502 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Oct 31 00:47:30.558363 containerd[1459]: time="2025-10-31T00:47:30.558280246Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Oct 31 00:47:30.558648 kubelet[2502]: I1031 00:47:30.558497 2502 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 31 00:47:31.627565 systemd[1]: Created slice kubepods-besteffort-pod7d0cbcd5_f409_489b_b531_c22c0338840a.slice - libcontainer container kubepods-besteffort-pod7d0cbcd5_f409_489b_b531_c22c0338840a.slice. Oct 31 00:47:31.667001 kubelet[2502]: E1031 00:47:31.666919 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:47:31.679439 kubelet[2502]: I1031 00:47:31.677716 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7d0cbcd5-f409-489b-b531-c22c0338840a-xtables-lock\") pod \"kube-proxy-8df44\" (UID: \"7d0cbcd5-f409-489b-b531-c22c0338840a\") " pod="kube-system/kube-proxy-8df44" Oct 31 00:47:31.679439 kubelet[2502]: I1031 00:47:31.677762 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7d0cbcd5-f409-489b-b531-c22c0338840a-lib-modules\") pod \"kube-proxy-8df44\" (UID: \"7d0cbcd5-f409-489b-b531-c22c0338840a\") " pod="kube-system/kube-proxy-8df44" Oct 31 00:47:31.679439 kubelet[2502]: I1031 00:47:31.677782 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7d0cbcd5-f409-489b-b531-c22c0338840a-kube-proxy\") pod \"kube-proxy-8df44\" (UID: \"7d0cbcd5-f409-489b-b531-c22c0338840a\") " pod="kube-system/kube-proxy-8df44" Oct 31 00:47:31.679439 kubelet[2502]: I1031 00:47:31.677799 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6g6h6\" (UniqueName: \"kubernetes.io/projected/7d0cbcd5-f409-489b-b531-c22c0338840a-kube-api-access-6g6h6\") pod \"kube-proxy-8df44\" (UID: \"7d0cbcd5-f409-489b-b531-c22c0338840a\") " pod="kube-system/kube-proxy-8df44" Oct 31 00:47:31.792094 systemd[1]: Created slice kubepods-besteffort-podc34f3c9d_cb0e_461c_822a_f8d538c3dddc.slice - libcontainer container kubepods-besteffort-podc34f3c9d_cb0e_461c_822a_f8d538c3dddc.slice. 
Oct 31 00:47:31.879851 kubelet[2502]: I1031 00:47:31.879649 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c34f3c9d-cb0e-461c-822a-f8d538c3dddc-var-lib-calico\") pod \"tigera-operator-7dcd859c48-crxzd\" (UID: \"c34f3c9d-cb0e-461c-822a-f8d538c3dddc\") " pod="tigera-operator/tigera-operator-7dcd859c48-crxzd" Oct 31 00:47:31.879851 kubelet[2502]: I1031 00:47:31.879709 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xxsf\" (UniqueName: \"kubernetes.io/projected/c34f3c9d-cb0e-461c-822a-f8d538c3dddc-kube-api-access-4xxsf\") pod \"tigera-operator-7dcd859c48-crxzd\" (UID: \"c34f3c9d-cb0e-461c-822a-f8d538c3dddc\") " pod="tigera-operator/tigera-operator-7dcd859c48-crxzd" Oct 31 00:47:31.936295 kubelet[2502]: E1031 00:47:31.936246 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:47:31.937010 containerd[1459]: time="2025-10-31T00:47:31.936948641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8df44,Uid:7d0cbcd5-f409-489b-b531-c22c0338840a,Namespace:kube-system,Attempt:0,}" Oct 31 00:47:31.966792 containerd[1459]: time="2025-10-31T00:47:31.966652289Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:47:31.966792 containerd[1459]: time="2025-10-31T00:47:31.966738715Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:47:31.966792 containerd[1459]: time="2025-10-31T00:47:31.966750126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:47:31.966979 containerd[1459]: time="2025-10-31T00:47:31.966836371Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:47:31.995700 systemd[1]: Started cri-containerd-295be73bb15ddea7e78c3c8b59e454bf898fba2fd2b8f2ddbb5b943e1d1697f6.scope - libcontainer container 295be73bb15ddea7e78c3c8b59e454bf898fba2fd2b8f2ddbb5b943e1d1697f6. 
Oct 31 00:47:32.027629 containerd[1459]: time="2025-10-31T00:47:32.027587785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8df44,Uid:7d0cbcd5-f409-489b-b531-c22c0338840a,Namespace:kube-system,Attempt:0,} returns sandbox id \"295be73bb15ddea7e78c3c8b59e454bf898fba2fd2b8f2ddbb5b943e1d1697f6\"" Oct 31 00:47:32.028881 kubelet[2502]: E1031 00:47:32.028842 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:47:32.035425 containerd[1459]: time="2025-10-31T00:47:32.035370446Z" level=info msg="CreateContainer within sandbox \"295be73bb15ddea7e78c3c8b59e454bf898fba2fd2b8f2ddbb5b943e1d1697f6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 31 00:47:32.083132 containerd[1459]: time="2025-10-31T00:47:32.083061114Z" level=info msg="CreateContainer within sandbox \"295be73bb15ddea7e78c3c8b59e454bf898fba2fd2b8f2ddbb5b943e1d1697f6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1787394243b1c7bbf5dfe0a8e11b47b1a7e58cd97e52f9b255ef273e2ef7c888\"" Oct 31 00:47:32.083883 containerd[1459]: time="2025-10-31T00:47:32.083854179Z" level=info msg="StartContainer for \"1787394243b1c7bbf5dfe0a8e11b47b1a7e58cd97e52f9b255ef273e2ef7c888\"" Oct 31 00:47:32.108675 containerd[1459]: time="2025-10-31T00:47:32.107970301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-crxzd,Uid:c34f3c9d-cb0e-461c-822a-f8d538c3dddc,Namespace:tigera-operator,Attempt:0,}" Oct 31 00:47:32.125663 systemd[1]: Started cri-containerd-1787394243b1c7bbf5dfe0a8e11b47b1a7e58cd97e52f9b255ef273e2ef7c888.scope - libcontainer container 1787394243b1c7bbf5dfe0a8e11b47b1a7e58cd97e52f9b255ef273e2ef7c888. Oct 31 00:47:32.136798 containerd[1459]: time="2025-10-31T00:47:32.136597272Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:47:32.136798 containerd[1459]: time="2025-10-31T00:47:32.136670000Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:47:32.136914 containerd[1459]: time="2025-10-31T00:47:32.136761525Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:47:32.137596 containerd[1459]: time="2025-10-31T00:47:32.137516818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:47:32.160624 systemd[1]: Started cri-containerd-17a10b8cc93578f8f8f4ec547dbb0443353b1a1d94cf14df26b6d8fcde92777f.scope - libcontainer container 17a10b8cc93578f8f8f4ec547dbb0443353b1a1d94cf14df26b6d8fcde92777f. 
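The surrounding entries trace the standard CRI call sequence for both pods: RunPodSandbox creates the sandbox (IDs beginning 295be73b for kube-proxy-8df44 and 17a10b8c for tigera-operator-7dcd859c48-crxzd), CreateContainer builds the workload container inside it, and StartContainer launches it, with systemd tracking each shim as a cri-containerd-<id>.scope unit. The interleaved "loading plugin" lines are the io.containerd.runc.v2 shim initializing its ttrpc plugins per sandbox, not an error. The same state can be inspected on the node with crictl, which speaks CRI directly; a sketch, assuming crictl is configured against this containerd socket:

    crictl pods                  # sandboxes, e.g. kube-proxy-8df44, tigera-operator-7dcd859c48-crxzd
    crictl ps                    # containers running inside those sandboxes
    crictl logs 1787394243b1c7   # unique container-ID prefix from the StartContainer entry above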
Oct 31 00:47:32.166298 containerd[1459]: time="2025-10-31T00:47:32.166245504Z" level=info msg="StartContainer for \"1787394243b1c7bbf5dfe0a8e11b47b1a7e58cd97e52f9b255ef273e2ef7c888\" returns successfully" Oct 31 00:47:32.219059 containerd[1459]: time="2025-10-31T00:47:32.218994186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-crxzd,Uid:c34f3c9d-cb0e-461c-822a-f8d538c3dddc,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"17a10b8cc93578f8f8f4ec547dbb0443353b1a1d94cf14df26b6d8fcde92777f\"" Oct 31 00:47:32.221805 containerd[1459]: time="2025-10-31T00:47:32.221103326Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Oct 31 00:47:32.577781 kubelet[2502]: E1031 00:47:32.577642 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:47:32.577781 kubelet[2502]: E1031 00:47:32.577681 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:47:32.598178 kubelet[2502]: I1031 00:47:32.598100 2502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8df44" podStartSLOduration=1.5980797770000001 podStartE2EDuration="1.598079777s" podCreationTimestamp="2025-10-31 00:47:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 00:47:32.597941693 +0000 UTC m=+8.137489853" watchObservedRunningTime="2025-10-31 00:47:32.598079777 +0000 UTC m=+8.137627937" Oct 31 00:47:34.223554 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount371858365.mount: Deactivated successfully. 
Oct 31 00:47:34.474535 kubelet[2502]: E1031 00:47:34.472140 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:47:34.581503 kubelet[2502]: E1031 00:47:34.581443 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:47:35.583500 kubelet[2502]: E1031 00:47:35.583396 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:47:36.332693 containerd[1459]: time="2025-10-31T00:47:36.332602437Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:47:36.334962 containerd[1459]: time="2025-10-31T00:47:36.334882305Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Oct 31 00:47:36.336489 containerd[1459]: time="2025-10-31T00:47:36.336440530Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:47:36.340821 containerd[1459]: time="2025-10-31T00:47:36.340747315Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:47:36.341920 containerd[1459]: time="2025-10-31T00:47:36.341865722Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 4.120706429s" Oct 31 00:47:36.341920 containerd[1459]: time="2025-10-31T00:47:36.341915007Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Oct 31 00:47:36.346870 containerd[1459]: time="2025-10-31T00:47:36.346838555Z" level=info msg="CreateContainer within sandbox \"17a10b8cc93578f8f8f4ec547dbb0443353b1a1d94cf14df26b6d8fcde92777f\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 31 00:47:36.363976 containerd[1459]: time="2025-10-31T00:47:36.363890591Z" level=info msg="CreateContainer within sandbox \"17a10b8cc93578f8f8f4ec547dbb0443353b1a1d94cf14df26b6d8fcde92777f\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"32a1a31a76ec3fa39ecd06f5e90643f0f4b9202859016895a514439217164887\"" Oct 31 00:47:36.364853 containerd[1459]: time="2025-10-31T00:47:36.364785743Z" level=info msg="StartContainer for \"32a1a31a76ec3fa39ecd06f5e90643f0f4b9202859016895a514439217164887\"" Oct 31 00:47:36.494519 systemd[1]: Started cri-containerd-32a1a31a76ec3fa39ecd06f5e90643f0f4b9202859016895a514439217164887.scope - libcontainer container 32a1a31a76ec3fa39ecd06f5e90643f0f4b9202859016895a514439217164887. 
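For scale: the stop-pulling entry above records 25,061,691 bytes read for the operator image, and the Pulled entry reports the transfer took 4.120706429s, i.e. roughly 25.06 MB / 4.12 s, about 6 MB/s from quay.io. The "size 25057686" attached to the digest appears to be the size containerd records for the resolved manifest's content, which is why it differs slightly from the bytes actually read over the wire.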
Oct 31 00:47:36.846932 containerd[1459]: time="2025-10-31T00:47:36.846886358Z" level=info msg="StartContainer for \"32a1a31a76ec3fa39ecd06f5e90643f0f4b9202859016895a514439217164887\" returns successfully" Oct 31 00:47:37.559278 kubelet[2502]: E1031 00:47:37.559113 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:47:38.010085 kubelet[2502]: I1031 00:47:38.010010 2502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-crxzd" podStartSLOduration=2.887357093 podStartE2EDuration="7.009990908s" podCreationTimestamp="2025-10-31 00:47:31 +0000 UTC" firstStartedPulling="2025-10-31 00:47:32.220170564 +0000 UTC m=+7.759718714" lastFinishedPulling="2025-10-31 00:47:36.342804368 +0000 UTC m=+11.882352529" observedRunningTime="2025-10-31 00:47:38.009772453 +0000 UTC m=+13.549320623" watchObservedRunningTime="2025-10-31 00:47:38.009990908 +0000 UTC m=+13.549539068" Oct 31 00:47:38.530218 update_engine[1450]: I20251031 00:47:38.530102 1450 update_attempter.cc:509] Updating boot flags... Oct 31 00:47:38.698443 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2865) Oct 31 00:47:38.769479 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2867) Oct 31 00:47:43.479949 sudo[1640]: pam_unix(sudo:session): session closed for user root Oct 31 00:47:43.485255 sshd[1635]: pam_unix(sshd:session): session closed for user core Oct 31 00:47:43.492173 systemd-logind[1449]: Session 7 logged out. Waiting for processes to exit. Oct 31 00:47:43.492744 systemd[1]: sshd@6-10.0.0.137:22-10.0.0.1:59350.service: Deactivated successfully. Oct 31 00:47:43.495749 systemd[1]: session-7.scope: Deactivated successfully. Oct 31 00:47:43.495946 systemd[1]: session-7.scope: Consumed 4.702s CPU time, 159.8M memory peak, 0B memory swap peak. Oct 31 00:47:43.496821 systemd-logind[1449]: Removed session 7. Oct 31 00:47:47.686753 systemd[1]: Created slice kubepods-besteffort-pod72335cb2_4879_4081_b5b6_9c4be0b43c99.slice - libcontainer container kubepods-besteffort-pod72335cb2_4879_4081_b5b6_9c4be0b43c99.slice. 
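The tigera-operator latency entry above also shows how podStartSLOduration relates to podStartE2EDuration: the SLO figure excludes time spent pulling images, per the Kubernetes pod-startup SLI. Reconstructed from the timestamps in that entry:

    end-to-end:   00:47:38.009990908 - 00:47:31 (creation)  = 7.009990908 s
    image pull:   m=+11.882352529 - m=+7.759718714          = 4.122633815 s
    SLO duration: 7.009990908 - 4.122633815                 = 2.887357093 s

which matches podStartSLOduration=2.887357093 exactly. The earlier entries for kube-apiserver, kube-controller-manager, kube-scheduler, and kube-proxy report SLO equal to E2E because their firstStartedPulling/lastFinishedPulling fields are the zero time: no pull happened, so nothing is deducted.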
Oct 31 00:47:47.803445 kubelet[2502]: I1031 00:47:47.803334 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/72335cb2-4879-4081-b5b6-9c4be0b43c99-tigera-ca-bundle\") pod \"calico-typha-5d9d9fd69c-d9rw4\" (UID: \"72335cb2-4879-4081-b5b6-9c4be0b43c99\") " pod="calico-system/calico-typha-5d9d9fd69c-d9rw4" Oct 31 00:47:47.804120 kubelet[2502]: I1031 00:47:47.803539 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/72335cb2-4879-4081-b5b6-9c4be0b43c99-typha-certs\") pod \"calico-typha-5d9d9fd69c-d9rw4\" (UID: \"72335cb2-4879-4081-b5b6-9c4be0b43c99\") " pod="calico-system/calico-typha-5d9d9fd69c-d9rw4" Oct 31 00:47:47.804120 kubelet[2502]: I1031 00:47:47.803568 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52pz2\" (UniqueName: \"kubernetes.io/projected/72335cb2-4879-4081-b5b6-9c4be0b43c99-kube-api-access-52pz2\") pod \"calico-typha-5d9d9fd69c-d9rw4\" (UID: \"72335cb2-4879-4081-b5b6-9c4be0b43c99\") " pod="calico-system/calico-typha-5d9d9fd69c-d9rw4" Oct 31 00:47:47.876460 systemd[1]: Created slice kubepods-besteffort-pod7b9d72bf_0477_4de4_b16a_638f9f1df922.slice - libcontainer container kubepods-besteffort-pod7b9d72bf_0477_4de4_b16a_638f9f1df922.slice. Oct 31 00:47:47.904778 kubelet[2502]: I1031 00:47:47.904709 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/7b9d72bf-0477-4de4-b16a-638f9f1df922-flexvol-driver-host\") pod \"calico-node-v54km\" (UID: \"7b9d72bf-0477-4de4-b16a-638f9f1df922\") " pod="calico-system/calico-node-v54km" Oct 31 00:47:47.904778 kubelet[2502]: I1031 00:47:47.904779 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7b9d72bf-0477-4de4-b16a-638f9f1df922-var-lib-calico\") pod \"calico-node-v54km\" (UID: \"7b9d72bf-0477-4de4-b16a-638f9f1df922\") " pod="calico-system/calico-node-v54km" Oct 31 00:47:47.905021 kubelet[2502]: I1031 00:47:47.904802 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7b9d72bf-0477-4de4-b16a-638f9f1df922-xtables-lock\") pod \"calico-node-v54km\" (UID: \"7b9d72bf-0477-4de4-b16a-638f9f1df922\") " pod="calico-system/calico-node-v54km" Oct 31 00:47:47.905021 kubelet[2502]: I1031 00:47:47.904823 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/7b9d72bf-0477-4de4-b16a-638f9f1df922-cni-bin-dir\") pod \"calico-node-v54km\" (UID: \"7b9d72bf-0477-4de4-b16a-638f9f1df922\") " pod="calico-system/calico-node-v54km" Oct 31 00:47:47.905021 kubelet[2502]: I1031 00:47:47.904842 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/7b9d72bf-0477-4de4-b16a-638f9f1df922-policysync\") pod \"calico-node-v54km\" (UID: \"7b9d72bf-0477-4de4-b16a-638f9f1df922\") " pod="calico-system/calico-node-v54km" Oct 31 00:47:47.905021 kubelet[2502]: I1031 00:47:47.904863 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" 
(UniqueName: \"kubernetes.io/host-path/7b9d72bf-0477-4de4-b16a-638f9f1df922-var-run-calico\") pod \"calico-node-v54km\" (UID: \"7b9d72bf-0477-4de4-b16a-638f9f1df922\") " pod="calico-system/calico-node-v54km" Oct 31 00:47:47.905021 kubelet[2502]: I1031 00:47:47.904898 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/7b9d72bf-0477-4de4-b16a-638f9f1df922-cni-log-dir\") pod \"calico-node-v54km\" (UID: \"7b9d72bf-0477-4de4-b16a-638f9f1df922\") " pod="calico-system/calico-node-v54km" Oct 31 00:47:47.905203 kubelet[2502]: I1031 00:47:47.904916 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/7b9d72bf-0477-4de4-b16a-638f9f1df922-cni-net-dir\") pod \"calico-node-v54km\" (UID: \"7b9d72bf-0477-4de4-b16a-638f9f1df922\") " pod="calico-system/calico-node-v54km" Oct 31 00:47:47.905203 kubelet[2502]: I1031 00:47:47.904987 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b9d72bf-0477-4de4-b16a-638f9f1df922-tigera-ca-bundle\") pod \"calico-node-v54km\" (UID: \"7b9d72bf-0477-4de4-b16a-638f9f1df922\") " pod="calico-system/calico-node-v54km" Oct 31 00:47:47.905203 kubelet[2502]: I1031 00:47:47.905064 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7b9d72bf-0477-4de4-b16a-638f9f1df922-lib-modules\") pod \"calico-node-v54km\" (UID: \"7b9d72bf-0477-4de4-b16a-638f9f1df922\") " pod="calico-system/calico-node-v54km" Oct 31 00:47:47.905203 kubelet[2502]: I1031 00:47:47.905082 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7b9d72bf-0477-4de4-b16a-638f9f1df922-node-certs\") pod \"calico-node-v54km\" (UID: \"7b9d72bf-0477-4de4-b16a-638f9f1df922\") " pod="calico-system/calico-node-v54km" Oct 31 00:47:47.905203 kubelet[2502]: I1031 00:47:47.905113 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgkbj\" (UniqueName: \"kubernetes.io/projected/7b9d72bf-0477-4de4-b16a-638f9f1df922-kube-api-access-mgkbj\") pod \"calico-node-v54km\" (UID: \"7b9d72bf-0477-4de4-b16a-638f9f1df922\") " pod="calico-system/calico-node-v54km" Oct 31 00:47:47.993482 kubelet[2502]: E1031 00:47:47.993328 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:47:47.994371 containerd[1459]: time="2025-10-31T00:47:47.994325602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5d9d9fd69c-d9rw4,Uid:72335cb2-4879-4081-b5b6-9c4be0b43c99,Namespace:calico-system,Attempt:0,}" Oct 31 00:47:48.012038 kubelet[2502]: E1031 00:47:48.012003 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:47:48.012038 kubelet[2502]: W1031 00:47:48.012029 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:47:48.012226 kubelet[2502]: E1031 00:47:48.012087 2502 plugins.go:703] "Error dynamically 
probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:47:48.021949 kubelet[2502]: E1031 00:47:48.021852 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:47:48.021949 kubelet[2502]: W1031 00:47:48.021876 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:47:48.021949 kubelet[2502]: E1031 00:47:48.021899 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:47:48.026680 containerd[1459]: time="2025-10-31T00:47:48.026317308Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:47:48.026680 containerd[1459]: time="2025-10-31T00:47:48.026424340Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:47:48.026680 containerd[1459]: time="2025-10-31T00:47:48.026444739Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:47:48.026680 containerd[1459]: time="2025-10-31T00:47:48.026576858Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:47:48.053704 systemd[1]: Started cri-containerd-2a0bb5a85ac2d6a9083f3735a284abf7f0cecd29ec235bf5b96dd0ea52903d3d.scope - libcontainer container 2a0bb5a85ac2d6a9083f3735a284abf7f0cecd29ec235bf5b96dd0ea52903d3d. Oct 31 00:47:48.069453 kubelet[2502]: E1031 00:47:48.068782 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cznnv" podUID="d615dcdd-9217-4b99-9985-812be6d75b53" Oct 31 00:47:48.078476 kubelet[2502]: E1031 00:47:48.078114 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:47:48.078476 kubelet[2502]: W1031 00:47:48.078142 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:47:48.078476 kubelet[2502]: E1031 00:47:48.078180 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:47:48.078728 kubelet[2502]: E1031 00:47:48.078505 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:47:48.078728 kubelet[2502]: W1031 00:47:48.078518 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:47:48.078728 kubelet[2502]: E1031 00:47:48.078557 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:47:48.079041 kubelet[2502]: E1031 00:47:48.079012 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:47:48.079041 kubelet[2502]: W1031 00:47:48.079028 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:47:48.079041 kubelet[2502]: E1031 00:47:48.079039 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:47:48.079662 kubelet[2502]: E1031 00:47:48.079398 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:47:48.079662 kubelet[2502]: W1031 00:47:48.079473 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:47:48.079662 kubelet[2502]: E1031 00:47:48.079485 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:47:48.079905 kubelet[2502]: E1031 00:47:48.079826 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:47:48.079905 kubelet[2502]: W1031 00:47:48.079838 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:47:48.079905 kubelet[2502]: E1031 00:47:48.079848 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:47:48.080168 kubelet[2502]: E1031 00:47:48.080098 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:47:48.080168 kubelet[2502]: W1031 00:47:48.080152 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:47:48.080168 kubelet[2502]: E1031 00:47:48.080166 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:47:48.080507 kubelet[2502]: E1031 00:47:48.080459 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:47:48.080507 kubelet[2502]: W1031 00:47:48.080471 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:47:48.080507 kubelet[2502]: E1031 00:47:48.080482 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:47:48.080766 kubelet[2502]: E1031 00:47:48.080751 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:47:48.080766 kubelet[2502]: W1031 00:47:48.080761 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:47:48.080837 kubelet[2502]: E1031 00:47:48.080772 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:47:48.081461 kubelet[2502]: E1031 00:47:48.080987 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:47:48.081461 kubelet[2502]: W1031 00:47:48.080998 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:47:48.081461 kubelet[2502]: E1031 00:47:48.081008 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:47:48.081461 kubelet[2502]: E1031 00:47:48.081303 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:47:48.081461 kubelet[2502]: W1031 00:47:48.081313 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:47:48.081461 kubelet[2502]: E1031 00:47:48.081323 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:47:48.081935 kubelet[2502]: E1031 00:47:48.081915 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:47:48.081935 kubelet[2502]: W1031 00:47:48.081930 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:47:48.082022 kubelet[2502]: E1031 00:47:48.081941 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:47:48.082286 kubelet[2502]: E1031 00:47:48.082268 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:47:48.082286 kubelet[2502]: W1031 00:47:48.082282 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:47:48.082354 kubelet[2502]: E1031 00:47:48.082293 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:47:48.082631 kubelet[2502]: E1031 00:47:48.082614 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:47:48.082631 kubelet[2502]: W1031 00:47:48.082630 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:47:48.082702 kubelet[2502]: E1031 00:47:48.082643 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:47:48.082911 kubelet[2502]: E1031 00:47:48.082895 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:47:48.082911 kubelet[2502]: W1031 00:47:48.082908 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:47:48.082972 kubelet[2502]: E1031 00:47:48.082920 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:47:48.083193 kubelet[2502]: E1031 00:47:48.083176 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:47:48.083244 kubelet[2502]: W1031 00:47:48.083199 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:47:48.083244 kubelet[2502]: E1031 00:47:48.083211 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:47:48.083491 kubelet[2502]: E1031 00:47:48.083474 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:47:48.083491 kubelet[2502]: W1031 00:47:48.083487 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:47:48.083565 kubelet[2502]: E1031 00:47:48.083498 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:47:48.084025 kubelet[2502]: E1031 00:47:48.083953 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:47:48.084025 kubelet[2502]: W1031 00:47:48.083982 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:47:48.084025 kubelet[2502]: E1031 00:47:48.083993 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:47:48.084664 kubelet[2502]: E1031 00:47:48.084646 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:47:48.084706 kubelet[2502]: W1031 00:47:48.084679 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:47:48.084706 kubelet[2502]: E1031 00:47:48.084692 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:47:48.084971 kubelet[2502]: E1031 00:47:48.084952 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:47:48.085010 kubelet[2502]: W1031 00:47:48.084975 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:47:48.085010 kubelet[2502]: E1031 00:47:48.084988 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:47:48.085230 kubelet[2502]: E1031 00:47:48.085213 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:47:48.085230 kubelet[2502]: W1031 00:47:48.085227 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:47:48.085284 kubelet[2502]: E1031 00:47:48.085238 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:47:48.108142 kubelet[2502]: E1031 00:47:48.108096 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:47:48.108142 kubelet[2502]: W1031 00:47:48.108130 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:47:48.108142 kubelet[2502]: E1031 00:47:48.108154 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:47:48.108374 kubelet[2502]: I1031 00:47:48.108203 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d615dcdd-9217-4b99-9985-812be6d75b53-registration-dir\") pod \"csi-node-driver-cznnv\" (UID: \"d615dcdd-9217-4b99-9985-812be6d75b53\") " pod="calico-system/csi-node-driver-cznnv" Oct 31 00:47:48.108619 kubelet[2502]: E1031 00:47:48.108591 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:47:48.108619 kubelet[2502]: W1031 00:47:48.108608 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:47:48.108683 kubelet[2502]: E1031 00:47:48.108620 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:47:48.108725 kubelet[2502]: I1031 00:47:48.108706 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d615dcdd-9217-4b99-9985-812be6d75b53-varrun\") pod \"csi-node-driver-cznnv\" (UID: \"d615dcdd-9217-4b99-9985-812be6d75b53\") " pod="calico-system/csi-node-driver-cznnv" Oct 31 00:47:48.109110 kubelet[2502]: E1031 00:47:48.109082 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:47:48.109150 kubelet[2502]: W1031 00:47:48.109122 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:47:48.109150 kubelet[2502]: E1031 00:47:48.109135 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:47:48.109203 kubelet[2502]: I1031 00:47:48.109158 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzh5d\" (UniqueName: \"kubernetes.io/projected/d615dcdd-9217-4b99-9985-812be6d75b53-kube-api-access-jzh5d\") pod \"csi-node-driver-cznnv\" (UID: \"d615dcdd-9217-4b99-9985-812be6d75b53\") " pod="calico-system/csi-node-driver-cznnv" Oct 31 00:47:48.109452 kubelet[2502]: E1031 00:47:48.109435 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:47:48.109452 kubelet[2502]: W1031 00:47:48.109450 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:47:48.109521 kubelet[2502]: E1031 00:47:48.109463 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:47:48.109691 kubelet[2502]: E1031 00:47:48.109676 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:47:48.109691 kubelet[2502]: W1031 00:47:48.109688 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:47:48.109751 kubelet[2502]: E1031 00:47:48.109698 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:47:48.109929 kubelet[2502]: E1031 00:47:48.109913 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:47:48.109929 kubelet[2502]: W1031 00:47:48.109925 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:47:48.109995 kubelet[2502]: E1031 00:47:48.109935 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:47:48.110159 kubelet[2502]: E1031 00:47:48.110144 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:47:48.110159 kubelet[2502]: W1031 00:47:48.110156 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:47:48.110220 kubelet[2502]: E1031 00:47:48.110166 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:47:48.110220 kubelet[2502]: I1031 00:47:48.110192 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d615dcdd-9217-4b99-9985-812be6d75b53-kubelet-dir\") pod \"csi-node-driver-cznnv\" (UID: \"d615dcdd-9217-4b99-9985-812be6d75b53\") " pod="calico-system/csi-node-driver-cznnv" Oct 31 00:47:48.110671 kubelet[2502]: E1031 00:47:48.110466 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:47:48.110671 kubelet[2502]: W1031 00:47:48.110483 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:47:48.110671 kubelet[2502]: E1031 00:47:48.110495 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:47:48.110808 kubelet[2502]: E1031 00:47:48.110789 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:47:48.110808 kubelet[2502]: W1031 00:47:48.110803 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:47:48.110895 kubelet[2502]: E1031 00:47:48.110833 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:47:48.111224 kubelet[2502]: E1031 00:47:48.111204 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:47:48.111224 kubelet[2502]: W1031 00:47:48.111219 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:47:48.111308 kubelet[2502]: E1031 00:47:48.111241 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:47:48.111544 kubelet[2502]: E1031 00:47:48.111523 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:47:48.111544 kubelet[2502]: W1031 00:47:48.111536 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:47:48.111641 kubelet[2502]: E1031 00:47:48.111548 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:47:48.111842 kubelet[2502]: E1031 00:47:48.111821 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:47:48.111842 kubelet[2502]: W1031 00:47:48.111837 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:47:48.111927 kubelet[2502]: E1031 00:47:48.111848 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:47:48.112141 kubelet[2502]: E1031 00:47:48.112122 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:47:48.112181 kubelet[2502]: W1031 00:47:48.112152 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:47:48.112181 kubelet[2502]: E1031 00:47:48.112163 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:47:48.112247 kubelet[2502]: I1031 00:47:48.112184 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d615dcdd-9217-4b99-9985-812be6d75b53-socket-dir\") pod \"csi-node-driver-cznnv\" (UID: \"d615dcdd-9217-4b99-9985-812be6d75b53\") " pod="calico-system/csi-node-driver-cznnv" Oct 31 00:47:48.112553 kubelet[2502]: E1031 00:47:48.112532 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:47:48.112553 kubelet[2502]: W1031 00:47:48.112548 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:47:48.112642 kubelet[2502]: E1031 00:47:48.112559 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:47:48.112804 kubelet[2502]: E1031 00:47:48.112787 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:47:48.112804 kubelet[2502]: W1031 00:47:48.112799 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:47:48.112876 kubelet[2502]: E1031 00:47:48.112809 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:47:48.117507 containerd[1459]: time="2025-10-31T00:47:48.117461687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5d9d9fd69c-d9rw4,Uid:72335cb2-4879-4081-b5b6-9c4be0b43c99,Namespace:calico-system,Attempt:0,} returns sandbox id \"2a0bb5a85ac2d6a9083f3735a284abf7f0cecd29ec235bf5b96dd0ea52903d3d\"" Oct 31 00:47:48.121998 kubelet[2502]: E1031 00:47:48.121945 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:47:48.123048 containerd[1459]: time="2025-10-31T00:47:48.123016975Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Oct 31 00:47:48.181482 kubelet[2502]: E1031 00:47:48.181430 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:47:48.182203 containerd[1459]: time="2025-10-31T00:47:48.182163753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-v54km,Uid:7b9d72bf-0477-4de4-b16a-638f9f1df922,Namespace:calico-system,Attempt:0,}" Oct 31 00:47:48.213497 kubelet[2502]: E1031 00:47:48.213379 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:47:48.214093 kubelet[2502]: W1031 00:47:48.214062 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:47:48.214168 kubelet[2502]: E1031 00:47:48.214128 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating 
Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:47:48.214970 kubelet[2502]: E1031 00:47:48.214954 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:47:48.214970 kubelet[2502]: W1031 00:47:48.214967 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:47:48.215130 kubelet[2502]: E1031 00:47:48.214978 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:47:48.215326 kubelet[2502]: E1031 00:47:48.215255 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:47:48.215326 kubelet[2502]: W1031 00:47:48.215269 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:47:48.215326 kubelet[2502]: E1031 00:47:48.215281 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:47:48.215582 kubelet[2502]: E1031 00:47:48.215563 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:47:48.215582 kubelet[2502]: W1031 00:47:48.215579 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:47:48.215685 kubelet[2502]: E1031 00:47:48.215591 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:47:48.215825 kubelet[2502]: E1031 00:47:48.215812 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:47:48.215825 kubelet[2502]: W1031 00:47:48.215823 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:47:48.215913 kubelet[2502]: E1031 00:47:48.215832 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:47:48.216055 kubelet[2502]: E1031 00:47:48.216024 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:47:48.216055 kubelet[2502]: W1031 00:47:48.216035 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:47:48.216055 kubelet[2502]: E1031 00:47:48.216046 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Oct 31 00:47:48.217849 kubelet[2502]: E1031 00:47:48.217834 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 00:47:48.217849 kubelet[2502]: W1031 00:47:48.217848 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 00:47:48.217929 kubelet[2502]: E1031 00:47:48.217859 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[last three messages repeated 19 more times, through 00:47:48.236646]
Oct 31 00:47:48.219648 containerd[1459]: time="2025-10-31T00:47:48.219255842Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 31 00:47:48.221458 containerd[1459]: time="2025-10-31T00:47:48.220969698Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 31 00:47:48.221458 containerd[1459]: time="2025-10-31T00:47:48.220997801Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 31 00:47:48.221458 containerd[1459]: time="2025-10-31T00:47:48.221257963Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 31 00:47:48.249825 systemd[1]: Started cri-containerd-23442d4fa5084966ac8b3c8f1705f1d7e5c8d96def7d9efba47b283cd1cd8d6e.scope - libcontainer container 23442d4fa5084966ac8b3c8f1705f1d7e5c8d96def7d9efba47b283cd1cd8d6e.
Oct 31 00:47:48.279919 containerd[1459]: time="2025-10-31T00:47:48.279295506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-v54km,Uid:7b9d72bf-0477-4de4-b16a-638f9f1df922,Namespace:calico-system,Attempt:0,} returns sandbox id \"23442d4fa5084966ac8b3c8f1705f1d7e5c8d96def7d9efba47b283cd1cd8d6e\""
Oct 31 00:47:48.280942 kubelet[2502]: E1031 00:47:48.280910 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:47:49.553831 kubelet[2502]: E1031 00:47:49.553754 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cznnv" podUID="d615dcdd-9217-4b99-9985-812be6d75b53"
Oct 31 00:47:50.062177 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3289054628.mount: Deactivated successfully.
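The FlexVolume probe failures collapsed above share one root cause: kubelet scans the plugin directory, finds nodeagent~uds, and execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init, then decodes stdout as JSON. The executable is missing, stdout is empty, and the decode fails with "unexpected end of JSON input". As a hedged sketch only (this is not the real nodeagent~uds binary, just an illustration of the call contract driver-call.go expects), a minimal driver answering the init call could look like:

    #!/usr/bin/env python3
    # Hypothetical stand-in for <plugin-dir>/nodeagent~uds/uds; kubelet runs
    # "uds init" and parses stdout as JSON, so an absent or silent binary
    # produces exactly the errors logged above.
    import json
    import sys

    def main() -> int:
        op = sys.argv[1] if len(sys.argv) > 1 else ""
        if op == "init":
            # Report a successful init with no attach support, so kubelet
            # will not route attach/detach calls through this driver.
            print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
            return 0
        # Every other FlexVolume operation is unimplemented in this sketch.
        print(json.dumps({"status": "Not supported", "message": "operation %r not implemented" % op}))
        return 1

    if __name__ == "__main__":
        sys.exit(main())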
Oct 31 00:47:51.401495 containerd[1459]: time="2025-10-31T00:47:51.401435090Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 00:47:51.403619 containerd[1459]: time="2025-10-31T00:47:51.403562151Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628"
Oct 31 00:47:51.405522 containerd[1459]: time="2025-10-31T00:47:51.405495657Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 00:47:51.408953 containerd[1459]: time="2025-10-31T00:47:51.408731870Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 00:47:51.409665 containerd[1459]: time="2025-10-31T00:47:51.409603464Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 3.286545332s"
Oct 31 00:47:51.409665 containerd[1459]: time="2025-10-31T00:47:51.409647396Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\""
Oct 31 00:47:51.410793 containerd[1459]: time="2025-10-31T00:47:51.410755196Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Oct 31 00:47:51.435327 containerd[1459]: time="2025-10-31T00:47:51.435262009Z" level=info msg="CreateContainer within sandbox \"2a0bb5a85ac2d6a9083f3735a284abf7f0cecd29ec235bf5b96dd0ea52903d3d\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Oct 31 00:47:51.553520 kubelet[2502]: E1031 00:47:51.553470 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cznnv" podUID="d615dcdd-9217-4b99-9985-812be6d75b53"
Oct 31 00:47:51.583300 containerd[1459]: time="2025-10-31T00:47:51.583216316Z" level=info msg="CreateContainer within sandbox \"2a0bb5a85ac2d6a9083f3735a284abf7f0cecd29ec235bf5b96dd0ea52903d3d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"05fdd4e11552ca401b3b87d6ab9d4c6c22cb2144bea3f5085077086c389d170f\""
Oct 31 00:47:51.585319 containerd[1459]: time="2025-10-31T00:47:51.584956687Z" level=info msg="StartContainer for \"05fdd4e11552ca401b3b87d6ab9d4c6c22cb2144bea3f5085077086c389d170f\""
Oct 31 00:47:51.628887 systemd[1]: Started cri-containerd-05fdd4e11552ca401b3b87d6ab9d4c6c22cb2144bea3f5085077086c389d170f.scope - libcontainer container 05fdd4e11552ca401b3b87d6ab9d4c6c22cb2144bea3f5085077086c389d170f.
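The pull record above carries both a byte count (bytes read=35234628) and a wall-clock duration (3.286545332s), which is enough for a back-of-the-envelope transfer rate; the snippet below only reproduces that arithmetic from the logged figures:

    # Effective pull rate for ghcr.io/flatcar/calico/typha:v3.30.4, computed
    # from the "bytes read" and "in ...s" figures containerd logged above.
    bytes_read = 35_234_628
    seconds = 3.286545332
    print("%.1f MB/s" % (bytes_read / seconds / 1e6))  # ~10.7 MB/s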
Oct 31 00:47:51.824817 containerd[1459]: time="2025-10-31T00:47:51.824643880Z" level=info msg="StartContainer for \"05fdd4e11552ca401b3b87d6ab9d4c6c22cb2144bea3f5085077086c389d170f\" returns successfully"
Oct 31 00:47:51.895475 kubelet[2502]: E1031 00:47:51.895398 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:47:51.933084 kubelet[2502]: E1031 00:47:51.933029 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 00:47:51.933084 kubelet[2502]: W1031 00:47:51.933061 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 00:47:51.933084 kubelet[2502]: E1031 00:47:51.933088 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[last three messages repeated 32 more times, through 00:47:51.957305]
Oct 31 00:47:52.052585 kubelet[2502]: I1031 00:47:52.052495 2502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5d9d9fd69c-d9rw4" podStartSLOduration=1.7645251910000002 podStartE2EDuration="5.052471484s" podCreationTimestamp="2025-10-31 00:47:47 +0000 UTC" firstStartedPulling="2025-10-31 00:47:48.122684818 +0000 UTC m=+23.662232978" lastFinishedPulling="2025-10-31 00:47:51.410631111 +0000 UTC m=+26.950179271" observedRunningTime="2025-10-31 00:47:52.019790507 +0000 UTC m=+27.559338667" watchObservedRunningTime="2025-10-31 00:47:52.052471484 +0000 UTC m=+27.592019644"
Oct 31 00:47:52.925990 kubelet[2502]: I1031 00:47:52.925907 2502 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Oct 31 00:47:52.926603 kubelet[2502]: E1031 00:47:52.926571 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:47:52.942999 kubelet[2502]: E1031 00:47:52.942949 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 00:47:52.942999 kubelet[2502]: W1031 00:47:52.942977 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 00:47:52.942999 kubelet[2502]: E1031 00:47:52.943002 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[last three messages repeated 32 more times, through 00:47:52.964842]
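The pod_startup_latency_tracker entry above packs a small derivation into one line: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). The check below re-derives all three numbers from the logged timestamps, truncated to microseconds since that is what datetime parses:

    from datetime import datetime

    def ts(s: str) -> datetime:
        return datetime.strptime(s, "%Y-%m-%d %H:%M:%S.%f")

    created       = ts("2025-10-31 00:47:47.000000")  # podCreationTimestamp
    first_pull    = ts("2025-10-31 00:47:48.122684")  # firstStartedPulling
    last_pull     = ts("2025-10-31 00:47:51.410631")  # lastFinishedPulling
    watch_running = ts("2025-10-31 00:47:52.052471")  # watchObservedRunningTime

    e2e  = (watch_running - created).total_seconds()  # 5.052471s, matches podStartE2EDuration
    pull = (last_pull - first_pull).total_seconds()   # 3.287947s spent pulling images
    slo  = e2e - pull                                 # 1.764524s, matches podStartSLOduration
    print(e2e, pull, slo)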
Error: unexpected end of JSON input" Oct 31 00:47:53.554126 kubelet[2502]: E1031 00:47:53.554055 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cznnv" podUID="d615dcdd-9217-4b99-9985-812be6d75b53" Oct 31 00:47:53.907828 containerd[1459]: time="2025-10-31T00:47:53.907765192Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:47:53.909394 containerd[1459]: time="2025-10-31T00:47:53.909322357Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Oct 31 00:47:53.910731 containerd[1459]: time="2025-10-31T00:47:53.910640551Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:47:53.913154 containerd[1459]: time="2025-10-31T00:47:53.913105847Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:47:53.914430 containerd[1459]: time="2025-10-31T00:47:53.913863946Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 2.503073994s" Oct 31 00:47:53.914430 containerd[1459]: time="2025-10-31T00:47:53.913903350Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Oct 31 00:47:53.920565 containerd[1459]: time="2025-10-31T00:47:53.920502446Z" level=info msg="CreateContainer within sandbox \"23442d4fa5084966ac8b3c8f1705f1d7e5c8d96def7d9efba47b283cd1cd8d6e\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 31 00:47:53.942195 containerd[1459]: time="2025-10-31T00:47:53.942128941Z" level=info msg="CreateContainer within sandbox \"23442d4fa5084966ac8b3c8f1705f1d7e5c8d96def7d9efba47b283cd1cd8d6e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"2e137fd95c6a2c17de705e38df60e7efd0cad8155491c618a7baa8a7dd008bd6\"" Oct 31 00:47:53.942945 containerd[1459]: time="2025-10-31T00:47:53.942905144Z" level=info msg="StartContainer for \"2e137fd95c6a2c17de705e38df60e7efd0cad8155491c618a7baa8a7dd008bd6\"" Oct 31 00:47:53.984710 systemd[1]: Started cri-containerd-2e137fd95c6a2c17de705e38df60e7efd0cad8155491c618a7baa8a7dd008bd6.scope - libcontainer container 2e137fd95c6a2c17de705e38df60e7efd0cad8155491c618a7baa8a7dd008bd6. Oct 31 00:47:54.028047 containerd[1459]: time="2025-10-31T00:47:54.027962018Z" level=info msg="StartContainer for \"2e137fd95c6a2c17de705e38df60e7efd0cad8155491c618a7baa8a7dd008bd6\" returns successfully" Oct 31 00:47:54.048185 systemd[1]: cri-containerd-2e137fd95c6a2c17de705e38df60e7efd0cad8155491c618a7baa8a7dd008bd6.scope: Deactivated successfully. 
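
The driver-call failures repeated above are a bootstrap-ordering artifact: kubelet probes /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ before Calico's flexvol-driver init container (pulled and started in the entries just above) has installed the nodeagent~uds/uds binary, so every exec of the driver's init command yields empty output. A minimal Go sketch of that failure shape — the names here are illustrative, not kubelet's actual driver-call.go — shows why empty stdout surfaces as exactly "unexpected end of JSON input":

    // Illustrative approximation of the kubelet FlexVolume driver-call path.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // DriverStatus mirrors the JSON a FlexVolume driver must print on stdout.
    type DriverStatus struct {
        Status       string          `json:"status"`
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func callDriver(executable string, args ...string) (*DriverStatus, error) {
        out, err := exec.Command(executable, args...).CombinedOutput()
        if err != nil {
            // The uds binary is missing until the flexvol-driver init
            // container installs it, so the exec itself fails first.
            return nil, fmt.Errorf("driver call failed: %v, output: %q", err, out)
        }
        var st DriverStatus
        if err := json.Unmarshal(out, &st); err != nil {
            // json.Unmarshal of empty output produces the error string
            // repeated throughout the log: "unexpected end of JSON input".
            return nil, fmt.Errorf("failed to unmarshal output for command: %s, output: %q, error: %v", args[0], out, err)
        }
        return &st, nil
    }

    func main() {
        _, err := callDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds", "init")
        fmt.Println(err)
    }

Once the flexvol-driver container has copied the binary into place, the same init call returns a JSON status object and the probe warnings stop.
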
Oct 31 00:47:54.075864 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2e137fd95c6a2c17de705e38df60e7efd0cad8155491c618a7baa8a7dd008bd6-rootfs.mount: Deactivated successfully. Oct 31 00:47:54.388589 containerd[1459]: time="2025-10-31T00:47:54.388458097Z" level=info msg="shim disconnected" id=2e137fd95c6a2c17de705e38df60e7efd0cad8155491c618a7baa8a7dd008bd6 namespace=k8s.io Oct 31 00:47:54.388867 containerd[1459]: time="2025-10-31T00:47:54.388593693Z" level=warning msg="cleaning up after shim disconnected" id=2e137fd95c6a2c17de705e38df60e7efd0cad8155491c618a7baa8a7dd008bd6 namespace=k8s.io Oct 31 00:47:54.388867 containerd[1459]: time="2025-10-31T00:47:54.388625162Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 31 00:47:54.912767 kubelet[2502]: E1031 00:47:54.912724 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:47:54.913752 containerd[1459]: time="2025-10-31T00:47:54.913616948Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Oct 31 00:47:55.553323 kubelet[2502]: E1031 00:47:55.553251 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cznnv" podUID="d615dcdd-9217-4b99-9985-812be6d75b53" Oct 31 00:47:57.554188 kubelet[2502]: E1031 00:47:57.554093 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cznnv" podUID="d615dcdd-9217-4b99-9985-812be6d75b53" Oct 31 00:47:58.851005 kubelet[2502]: I1031 00:47:58.850935 2502 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 31 00:47:58.851455 kubelet[2502]: E1031 00:47:58.851379 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:47:58.919047 kubelet[2502]: E1031 00:47:58.919007 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:47:59.553549 kubelet[2502]: E1031 00:47:59.553503 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cznnv" podUID="d615dcdd-9217-4b99-9985-812be6d75b53" Oct 31 00:48:00.308432 containerd[1459]: time="2025-10-31T00:48:00.308318596Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:48:00.383302 containerd[1459]: time="2025-10-31T00:48:00.383219288Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Oct 31 00:48:00.445001 containerd[1459]: time="2025-10-31T00:48:00.444916350Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 
00:48:00.515130 containerd[1459]: time="2025-10-31T00:48:00.514922344Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:48:00.516258 containerd[1459]: time="2025-10-31T00:48:00.516213262Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 5.602535971s" Oct 31 00:48:00.516258 containerd[1459]: time="2025-10-31T00:48:00.516251835Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Oct 31 00:48:00.719348 containerd[1459]: time="2025-10-31T00:48:00.719277422Z" level=info msg="CreateContainer within sandbox \"23442d4fa5084966ac8b3c8f1705f1d7e5c8d96def7d9efba47b283cd1cd8d6e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 31 00:48:01.189521 containerd[1459]: time="2025-10-31T00:48:01.189466893Z" level=info msg="CreateContainer within sandbox \"23442d4fa5084966ac8b3c8f1705f1d7e5c8d96def7d9efba47b283cd1cd8d6e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4503efa66e65f94ed124751d6cab736a08474615a9bbfa5bf5a240ec0b37aaab\"" Oct 31 00:48:01.189991 containerd[1459]: time="2025-10-31T00:48:01.189952135Z" level=info msg="StartContainer for \"4503efa66e65f94ed124751d6cab736a08474615a9bbfa5bf5a240ec0b37aaab\"" Oct 31 00:48:01.216196 systemd[1]: run-containerd-runc-k8s.io-4503efa66e65f94ed124751d6cab736a08474615a9bbfa5bf5a240ec0b37aaab-runc.QzXN35.mount: Deactivated successfully. Oct 31 00:48:01.222558 systemd[1]: Started cri-containerd-4503efa66e65f94ed124751d6cab736a08474615a9bbfa5bf5a240ec0b37aaab.scope - libcontainer container 4503efa66e65f94ed124751d6cab736a08474615a9bbfa5bf5a240ec0b37aaab. Oct 31 00:48:01.398230 containerd[1459]: time="2025-10-31T00:48:01.398162259Z" level=info msg="StartContainer for \"4503efa66e65f94ed124751d6cab736a08474615a9bbfa5bf5a240ec0b37aaab\" returns successfully" Oct 31 00:48:01.554533 kubelet[2502]: E1031 00:48:01.554316 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cznnv" podUID="d615dcdd-9217-4b99-9985-812be6d75b53" Oct 31 00:48:01.928425 kubelet[2502]: E1031 00:48:01.925910 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:48:02.927461 kubelet[2502]: E1031 00:48:02.927389 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:48:02.934027 systemd[1]: cri-containerd-4503efa66e65f94ed124751d6cab736a08474615a9bbfa5bf5a240ec0b37aaab.scope: Deactivated successfully. Oct 31 00:48:02.959543 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4503efa66e65f94ed124751d6cab736a08474615a9bbfa5bf5a240ec0b37aaab-rootfs.mount: Deactivated successfully. 
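
The recurring dns.go "Nameserver limits exceeded" warnings in the entries above come from the three-nameserver ceiling inherited from glibc's resolver: when the node's resolv.conf lists more than three servers, kubelet applies only the first three to pods and logs the truncated line. A hedged sketch of that truncation, assuming a hypothetical fourth server on the node (this is not kubelet's actual dns.go):

    // Illustrative sketch of kubelet's nameserver truncation.
    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    const maxNameservers = 3 // glibc MAXNS: resolvers honour at most three

    func applyNameservers(resolvConf string) []string {
        var servers []string
        sc := bufio.NewScanner(strings.NewReader(resolvConf))
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxNameservers {
            fmt.Printf("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: %s\n",
                strings.Join(servers[:maxNameservers], " "))
            servers = servers[:maxNameservers]
        }
        return servers
    }

    func main() {
        // The fourth server (9.9.9.9, hypothetical) triggers the truncation
        // wording seen in the log.
        applyNameservers("nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n")
    }

Running this prints the same "applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" wording as the log, with the extra server dropped.
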
Oct 31 00:48:02.962971 kubelet[2502]: I1031 00:48:02.962930 2502 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Oct 31 00:48:03.834127 containerd[1459]: time="2025-10-31T00:48:03.833470016Z" level=info msg="shim disconnected" id=4503efa66e65f94ed124751d6cab736a08474615a9bbfa5bf5a240ec0b37aaab namespace=k8s.io Oct 31 00:48:03.834127 containerd[1459]: time="2025-10-31T00:48:03.833567881Z" level=warning msg="cleaning up after shim disconnected" id=4503efa66e65f94ed124751d6cab736a08474615a9bbfa5bf5a240ec0b37aaab namespace=k8s.io Oct 31 00:48:03.834127 containerd[1459]: time="2025-10-31T00:48:03.833581406Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 31 00:48:03.840819 systemd[1]: Created slice kubepods-besteffort-pod7d011812_0c54_49d2_a84d_25c0746a58a0.slice - libcontainer container kubepods-besteffort-pod7d011812_0c54_49d2_a84d_25c0746a58a0.slice. Oct 31 00:48:03.868480 kubelet[2502]: I1031 00:48:03.842256 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7d011812-0c54-49d2-a84d-25c0746a58a0-tigera-ca-bundle\") pod \"calico-kube-controllers-78f5ccdb8f-sfj2g\" (UID: \"7d011812-0c54-49d2-a84d-25c0746a58a0\") " pod="calico-system/calico-kube-controllers-78f5ccdb8f-sfj2g" Oct 31 00:48:03.868480 kubelet[2502]: I1031 00:48:03.842301 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ds9pg\" (UniqueName: \"kubernetes.io/projected/7d011812-0c54-49d2-a84d-25c0746a58a0-kube-api-access-ds9pg\") pod \"calico-kube-controllers-78f5ccdb8f-sfj2g\" (UID: \"7d011812-0c54-49d2-a84d-25c0746a58a0\") " pod="calico-system/calico-kube-controllers-78f5ccdb8f-sfj2g" Oct 31 00:48:03.996966 systemd[1]: Created slice kubepods-besteffort-podd615dcdd_9217_4b99_9985_812be6d75b53.slice - libcontainer container kubepods-besteffort-podd615dcdd_9217_4b99_9985_812be6d75b53.slice. Oct 31 00:48:04.002878 systemd[1]: Created slice kubepods-besteffort-podf0ebaf56_bc9f_4f20_80ce_c5c77074a573.slice - libcontainer container kubepods-besteffort-podf0ebaf56_bc9f_4f20_80ce_c5c77074a573.slice. 
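
The kubepods-besteffort-pod….slice names in the systemd entries above are the systemd cgroup driver's encoding of pod QoS class plus UID: the UID's dashes are replaced with underscores so the name survives systemd's unit-name escaping. A small illustrative sketch (not kubelet's actual cm package):

    // Illustrative mapping from pod UID to the systemd slice names above.
    package main

    import (
        "fmt"
        "strings"
    )

    func besteffortPodSlice(podUID string) string {
        // Dashes become underscores, as in the log entries above.
        return "kubepods-besteffort-pod" + strings.ReplaceAll(podUID, "-", "_") + ".slice"
    }

    func main() {
        fmt.Println(besteffortPodSlice("7d011812-0c54-49d2-a84d-25c0746a58a0"))
        // kubepods-besteffort-pod7d011812_0c54_49d2_a84d_25c0746a58a0.slice
    }
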
Oct 31 00:48:04.025258 containerd[1459]: time="2025-10-31T00:48:04.025207304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cznnv,Uid:d615dcdd-9217-4b99-9985-812be6d75b53,Namespace:calico-system,Attempt:0,}" Oct 31 00:48:04.043006 kubelet[2502]: I1031 00:48:04.042950 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f0ebaf56-bc9f-4f20-80ce-c5c77074a573-calico-apiserver-certs\") pod \"calico-apiserver-7cf7fddbf6-nr7tc\" (UID: \"f0ebaf56-bc9f-4f20-80ce-c5c77074a573\") " pod="calico-apiserver/calico-apiserver-7cf7fddbf6-nr7tc" Oct 31 00:48:04.043609 kubelet[2502]: I1031 00:48:04.043020 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rr8xm\" (UniqueName: \"kubernetes.io/projected/f0ebaf56-bc9f-4f20-80ce-c5c77074a573-kube-api-access-rr8xm\") pod \"calico-apiserver-7cf7fddbf6-nr7tc\" (UID: \"f0ebaf56-bc9f-4f20-80ce-c5c77074a573\") " pod="calico-apiserver/calico-apiserver-7cf7fddbf6-nr7tc" Oct 31 00:48:04.169643 containerd[1459]: time="2025-10-31T00:48:04.169585895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78f5ccdb8f-sfj2g,Uid:7d011812-0c54-49d2-a84d-25c0746a58a0,Namespace:calico-system,Attempt:0,}" Oct 31 00:48:04.258651 systemd[1]: Created slice kubepods-besteffort-pod4a3f6669_a62a_42ec_9a82_372bbb7049fb.slice - libcontainer container kubepods-besteffort-pod4a3f6669_a62a_42ec_9a82_372bbb7049fb.slice. Oct 31 00:48:04.285599 systemd[1]: Created slice kubepods-besteffort-pod9fc98e4f_8668_4337_afe0_a221fca95b05.slice - libcontainer container kubepods-besteffort-pod9fc98e4f_8668_4337_afe0_a221fca95b05.slice. Oct 31 00:48:04.306680 containerd[1459]: time="2025-10-31T00:48:04.306625839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cf7fddbf6-nr7tc,Uid:f0ebaf56-bc9f-4f20-80ce-c5c77074a573,Namespace:calico-apiserver,Attempt:0,}" Oct 31 00:48:04.310390 kubelet[2502]: E1031 00:48:04.309236 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:48:04.312463 containerd[1459]: time="2025-10-31T00:48:04.312427576Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Oct 31 00:48:04.329332 systemd[1]: Created slice kubepods-burstable-pod4dc3fd13_e452_47cf_9e08_a4a9c785070b.slice - libcontainer container kubepods-burstable-pod4dc3fd13_e452_47cf_9e08_a4a9c785070b.slice. Oct 31 00:48:04.336776 systemd[1]: Created slice kubepods-burstable-podf5a896d8_63fb_485d_b0d7_8486be09050d.slice - libcontainer container kubepods-burstable-podf5a896d8_63fb_485d_b0d7_8486be09050d.slice. Oct 31 00:48:04.343483 systemd[1]: Created slice kubepods-besteffort-poda6a2171a_de8b_4154_86b8_cb6aefca8e5b.slice - libcontainer container kubepods-besteffort-poda6a2171a_de8b_4154_86b8_cb6aefca8e5b.slice. 
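
Each "RunPodSandbox for &PodSandboxMetadata{…}" entry above is containerd logging a CRI gRPC call from kubelet. A hedged Go sketch of that call using the published CRI API (k8s.io/cri-api); the socket path and timeout are assumptions, and the config is trimmed to just the metadata the log actually prints:

    // Sketch of a CRI RunPodSandbox call; requires the google.golang.org/grpc
    // and k8s.io/cri-api modules.
    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()
        client := runtimeapi.NewRuntimeServiceClient(conn)

        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()
        resp, err := client.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
            Config: &runtimeapi.PodSandboxConfig{
                // The same fields containerd echoes in the log lines above.
                Metadata: &runtimeapi.PodSandboxMetadata{
                    Name:      "csi-node-driver-cznnv",
                    Uid:       "d615dcdd-9217-4b99-9985-812be6d75b53",
                    Namespace: "calico-system",
                    Attempt:   0,
                },
            },
        })
        if err != nil {
            // Until calico/node writes /var/lib/calico/nodename, this fails
            // with the plugin type="calico" failed (add) errors that follow.
            fmt.Println("RunPodSandbox:", err)
            return
        }
        fmt.Println("sandbox id:", resp.PodSandboxId)
    }
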
Oct 31 00:48:04.345791 kubelet[2502]: I1031 00:48:04.345740 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9fc98e4f-8668-4337-afe0-a221fca95b05-whisker-ca-bundle\") pod \"whisker-5ddbb6d7b7-2kd7r\" (UID: \"9fc98e4f-8668-4337-afe0-a221fca95b05\") " pod="calico-system/whisker-5ddbb6d7b7-2kd7r" Oct 31 00:48:04.345890 kubelet[2502]: I1031 00:48:04.345813 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6a2171a-de8b-4154-86b8-cb6aefca8e5b-config\") pod \"goldmane-666569f655-47d96\" (UID: \"a6a2171a-de8b-4154-86b8-cb6aefca8e5b\") " pod="calico-system/goldmane-666569f655-47d96" Oct 31 00:48:04.345890 kubelet[2502]: I1031 00:48:04.345848 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjxbg\" (UniqueName: \"kubernetes.io/projected/4dc3fd13-e452-47cf-9e08-a4a9c785070b-kube-api-access-wjxbg\") pod \"coredns-674b8bbfcf-h9lbq\" (UID: \"4dc3fd13-e452-47cf-9e08-a4a9c785070b\") " pod="kube-system/coredns-674b8bbfcf-h9lbq" Oct 31 00:48:04.345890 kubelet[2502]: I1031 00:48:04.345873 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6lkb\" (UniqueName: \"kubernetes.io/projected/a6a2171a-de8b-4154-86b8-cb6aefca8e5b-kube-api-access-n6lkb\") pod \"goldmane-666569f655-47d96\" (UID: \"a6a2171a-de8b-4154-86b8-cb6aefca8e5b\") " pod="calico-system/goldmane-666569f655-47d96" Oct 31 00:48:04.345989 kubelet[2502]: I1031 00:48:04.345932 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4dc3fd13-e452-47cf-9e08-a4a9c785070b-config-volume\") pod \"coredns-674b8bbfcf-h9lbq\" (UID: \"4dc3fd13-e452-47cf-9e08-a4a9c785070b\") " pod="kube-system/coredns-674b8bbfcf-h9lbq" Oct 31 00:48:04.345989 kubelet[2502]: I1031 00:48:04.345965 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/a6a2171a-de8b-4154-86b8-cb6aefca8e5b-goldmane-key-pair\") pod \"goldmane-666569f655-47d96\" (UID: \"a6a2171a-de8b-4154-86b8-cb6aefca8e5b\") " pod="calico-system/goldmane-666569f655-47d96" Oct 31 00:48:04.346059 kubelet[2502]: I1031 00:48:04.346023 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a6a2171a-de8b-4154-86b8-cb6aefca8e5b-goldmane-ca-bundle\") pod \"goldmane-666569f655-47d96\" (UID: \"a6a2171a-de8b-4154-86b8-cb6aefca8e5b\") " pod="calico-system/goldmane-666569f655-47d96" Oct 31 00:48:04.346091 kubelet[2502]: I1031 00:48:04.346055 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9z6jl\" (UniqueName: \"kubernetes.io/projected/f5a896d8-63fb-485d-b0d7-8486be09050d-kube-api-access-9z6jl\") pod \"coredns-674b8bbfcf-8zcwq\" (UID: \"f5a896d8-63fb-485d-b0d7-8486be09050d\") " pod="kube-system/coredns-674b8bbfcf-8zcwq" Oct 31 00:48:04.346091 kubelet[2502]: I1031 00:48:04.346076 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9fc98e4f-8668-4337-afe0-a221fca95b05-whisker-backend-key-pair\") pod 
\"whisker-5ddbb6d7b7-2kd7r\" (UID: \"9fc98e4f-8668-4337-afe0-a221fca95b05\") " pod="calico-system/whisker-5ddbb6d7b7-2kd7r" Oct 31 00:48:04.346160 kubelet[2502]: I1031 00:48:04.346150 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f5a896d8-63fb-485d-b0d7-8486be09050d-config-volume\") pod \"coredns-674b8bbfcf-8zcwq\" (UID: \"f5a896d8-63fb-485d-b0d7-8486be09050d\") " pod="kube-system/coredns-674b8bbfcf-8zcwq" Oct 31 00:48:04.346182 kubelet[2502]: I1031 00:48:04.346169 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ql48\" (UniqueName: \"kubernetes.io/projected/4a3f6669-a62a-42ec-9a82-372bbb7049fb-kube-api-access-4ql48\") pod \"calico-apiserver-7cf7fddbf6-qfkg6\" (UID: \"4a3f6669-a62a-42ec-9a82-372bbb7049fb\") " pod="calico-apiserver/calico-apiserver-7cf7fddbf6-qfkg6" Oct 31 00:48:04.346212 kubelet[2502]: I1031 00:48:04.346182 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shhn4\" (UniqueName: \"kubernetes.io/projected/9fc98e4f-8668-4337-afe0-a221fca95b05-kube-api-access-shhn4\") pod \"whisker-5ddbb6d7b7-2kd7r\" (UID: \"9fc98e4f-8668-4337-afe0-a221fca95b05\") " pod="calico-system/whisker-5ddbb6d7b7-2kd7r" Oct 31 00:48:04.346212 kubelet[2502]: I1031 00:48:04.346198 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4a3f6669-a62a-42ec-9a82-372bbb7049fb-calico-apiserver-certs\") pod \"calico-apiserver-7cf7fddbf6-qfkg6\" (UID: \"4a3f6669-a62a-42ec-9a82-372bbb7049fb\") " pod="calico-apiserver/calico-apiserver-7cf7fddbf6-qfkg6" Oct 31 00:48:04.773312 containerd[1459]: time="2025-10-31T00:48:04.773225026Z" level=error msg="Failed to destroy network for sandbox \"3e0f54bf01b79afe10cb3e31bbbe893d92c60bea7127b57519a0354ffba411ea\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:48:04.778750 containerd[1459]: time="2025-10-31T00:48:04.778698606Z" level=error msg="encountered an error cleaning up failed sandbox \"3e0f54bf01b79afe10cb3e31bbbe893d92c60bea7127b57519a0354ffba411ea\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:48:04.778834 containerd[1459]: time="2025-10-31T00:48:04.778792412Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cznnv,Uid:d615dcdd-9217-4b99-9985-812be6d75b53,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3e0f54bf01b79afe10cb3e31bbbe893d92c60bea7127b57519a0354ffba411ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:48:04.779142 kubelet[2502]: E1031 00:48:04.779093 2502 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e0f54bf01b79afe10cb3e31bbbe893d92c60bea7127b57519a0354ffba411ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:48:04.779218 kubelet[2502]: E1031 00:48:04.779183 2502 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e0f54bf01b79afe10cb3e31bbbe893d92c60bea7127b57519a0354ffba411ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cznnv" Oct 31 00:48:04.779247 kubelet[2502]: E1031 00:48:04.779219 2502 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e0f54bf01b79afe10cb3e31bbbe893d92c60bea7127b57519a0354ffba411ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cznnv" Oct 31 00:48:04.779325 kubelet[2502]: E1031 00:48:04.779291 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-cznnv_calico-system(d615dcdd-9217-4b99-9985-812be6d75b53)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-cznnv_calico-system(d615dcdd-9217-4b99-9985-812be6d75b53)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3e0f54bf01b79afe10cb3e31bbbe893d92c60bea7127b57519a0354ffba411ea\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cznnv" podUID="d615dcdd-9217-4b99-9985-812be6d75b53" Oct 31 00:48:04.863283 containerd[1459]: time="2025-10-31T00:48:04.863224342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cf7fddbf6-qfkg6,Uid:4a3f6669-a62a-42ec-9a82-372bbb7049fb,Namespace:calico-apiserver,Attempt:0,}" Oct 31 00:48:04.894435 containerd[1459]: time="2025-10-31T00:48:04.891854412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5ddbb6d7b7-2kd7r,Uid:9fc98e4f-8668-4337-afe0-a221fca95b05,Namespace:calico-system,Attempt:0,}" Oct 31 00:48:04.923427 containerd[1459]: time="2025-10-31T00:48:04.922127521Z" level=error msg="Failed to destroy network for sandbox \"1df4d0643c65cfaf1ecef80a4e3d324aca891b5555eb3aaf1aa4b55937254989\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:48:04.923427 containerd[1459]: time="2025-10-31T00:48:04.922622742Z" level=error msg="encountered an error cleaning up failed sandbox \"1df4d0643c65cfaf1ecef80a4e3d324aca891b5555eb3aaf1aa4b55937254989\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:48:04.923427 containerd[1459]: time="2025-10-31T00:48:04.922685299Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78f5ccdb8f-sfj2g,Uid:7d011812-0c54-49d2-a84d-25c0746a58a0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"1df4d0643c65cfaf1ecef80a4e3d324aca891b5555eb3aaf1aa4b55937254989\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:48:04.924521 kubelet[2502]: E1031 00:48:04.924457 2502 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1df4d0643c65cfaf1ecef80a4e3d324aca891b5555eb3aaf1aa4b55937254989\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:48:04.924583 kubelet[2502]: E1031 00:48:04.924557 2502 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1df4d0643c65cfaf1ecef80a4e3d324aca891b5555eb3aaf1aa4b55937254989\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78f5ccdb8f-sfj2g" Oct 31 00:48:04.924608 kubelet[2502]: E1031 00:48:04.924586 2502 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1df4d0643c65cfaf1ecef80a4e3d324aca891b5555eb3aaf1aa4b55937254989\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78f5ccdb8f-sfj2g" Oct 31 00:48:04.925003 kubelet[2502]: E1031 00:48:04.924657 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-78f5ccdb8f-sfj2g_calico-system(7d011812-0c54-49d2-a84d-25c0746a58a0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-78f5ccdb8f-sfj2g_calico-system(7d011812-0c54-49d2-a84d-25c0746a58a0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1df4d0643c65cfaf1ecef80a4e3d324aca891b5555eb3aaf1aa4b55937254989\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-78f5ccdb8f-sfj2g" podUID="7d011812-0c54-49d2-a84d-25c0746a58a0" Oct 31 00:48:04.928735 containerd[1459]: time="2025-10-31T00:48:04.928641086Z" level=error msg="Failed to destroy network for sandbox \"4917309cf223abb7cb4a0df29d09aafee31e2dfd8f0ad9af9775d5339b749de3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:48:04.929167 containerd[1459]: time="2025-10-31T00:48:04.929127470Z" level=error msg="encountered an error cleaning up failed sandbox \"4917309cf223abb7cb4a0df29d09aafee31e2dfd8f0ad9af9775d5339b749de3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:48:04.929216 containerd[1459]: time="2025-10-31T00:48:04.929190960Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-7cf7fddbf6-nr7tc,Uid:f0ebaf56-bc9f-4f20-80ce-c5c77074a573,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4917309cf223abb7cb4a0df29d09aafee31e2dfd8f0ad9af9775d5339b749de3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:48:04.929551 kubelet[2502]: E1031 00:48:04.929485 2502 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4917309cf223abb7cb4a0df29d09aafee31e2dfd8f0ad9af9775d5339b749de3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:48:04.929635 kubelet[2502]: E1031 00:48:04.929602 2502 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4917309cf223abb7cb4a0df29d09aafee31e2dfd8f0ad9af9775d5339b749de3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cf7fddbf6-nr7tc" Oct 31 00:48:04.929689 kubelet[2502]: E1031 00:48:04.929637 2502 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4917309cf223abb7cb4a0df29d09aafee31e2dfd8f0ad9af9775d5339b749de3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cf7fddbf6-nr7tc" Oct 31 00:48:04.929836 kubelet[2502]: E1031 00:48:04.929792 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7cf7fddbf6-nr7tc_calico-apiserver(f0ebaf56-bc9f-4f20-80ce-c5c77074a573)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7cf7fddbf6-nr7tc_calico-apiserver(f0ebaf56-bc9f-4f20-80ce-c5c77074a573)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4917309cf223abb7cb4a0df29d09aafee31e2dfd8f0ad9af9775d5339b749de3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7cf7fddbf6-nr7tc" podUID="f0ebaf56-bc9f-4f20-80ce-c5c77074a573" Oct 31 00:48:04.933717 kubelet[2502]: E1031 00:48:04.933673 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:48:04.934266 containerd[1459]: time="2025-10-31T00:48:04.934199925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-h9lbq,Uid:4dc3fd13-e452-47cf-9e08-a4a9c785070b,Namespace:kube-system,Attempt:0,}" Oct 31 00:48:04.940935 kubelet[2502]: E1031 00:48:04.940532 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:48:04.941111 containerd[1459]: time="2025-10-31T00:48:04.940983037Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8zcwq,Uid:f5a896d8-63fb-485d-b0d7-8486be09050d,Namespace:kube-system,Attempt:0,}" Oct 31 00:48:04.947330 containerd[1459]: time="2025-10-31T00:48:04.947289422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-47d96,Uid:a6a2171a-de8b-4154-86b8-cb6aefca8e5b,Namespace:calico-system,Attempt:0,}" Oct 31 00:48:04.953802 containerd[1459]: time="2025-10-31T00:48:04.953714811Z" level=error msg="Failed to destroy network for sandbox \"3e2538531b1147fcaeb5e51daf1260061d97c1c054dec48b928901c1215171f0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:48:04.954360 containerd[1459]: time="2025-10-31T00:48:04.954310310Z" level=error msg="encountered an error cleaning up failed sandbox \"3e2538531b1147fcaeb5e51daf1260061d97c1c054dec48b928901c1215171f0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:48:04.954444 containerd[1459]: time="2025-10-31T00:48:04.954388928Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cf7fddbf6-qfkg6,Uid:4a3f6669-a62a-42ec-9a82-372bbb7049fb,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3e2538531b1147fcaeb5e51daf1260061d97c1c054dec48b928901c1215171f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:48:04.954757 kubelet[2502]: E1031 00:48:04.954709 2502 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e2538531b1147fcaeb5e51daf1260061d97c1c054dec48b928901c1215171f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:48:04.954833 kubelet[2502]: E1031 00:48:04.954807 2502 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e2538531b1147fcaeb5e51daf1260061d97c1c054dec48b928901c1215171f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cf7fddbf6-qfkg6" Oct 31 00:48:04.954873 kubelet[2502]: E1031 00:48:04.954848 2502 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e2538531b1147fcaeb5e51daf1260061d97c1c054dec48b928901c1215171f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cf7fddbf6-qfkg6" Oct 31 00:48:04.954965 kubelet[2502]: E1031 00:48:04.954928 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7cf7fddbf6-qfkg6_calico-apiserver(4a3f6669-a62a-42ec-9a82-372bbb7049fb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-7cf7fddbf6-qfkg6_calico-apiserver(4a3f6669-a62a-42ec-9a82-372bbb7049fb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3e2538531b1147fcaeb5e51daf1260061d97c1c054dec48b928901c1215171f0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7cf7fddbf6-qfkg6" podUID="4a3f6669-a62a-42ec-9a82-372bbb7049fb" Oct 31 00:48:05.117946 kubelet[2502]: I1031 00:48:05.117259 2502 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e0f54bf01b79afe10cb3e31bbbe893d92c60bea7127b57519a0354ffba411ea" Oct 31 00:48:05.117946 kubelet[2502]: I1031 00:48:05.117910 2502 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e2538531b1147fcaeb5e51daf1260061d97c1c054dec48b928901c1215171f0" Oct 31 00:48:05.118593 kubelet[2502]: I1031 00:48:05.118559 2502 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4917309cf223abb7cb4a0df29d09aafee31e2dfd8f0ad9af9775d5339b749de3" Oct 31 00:48:05.119872 kubelet[2502]: I1031 00:48:05.119836 2502 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1df4d0643c65cfaf1ecef80a4e3d324aca891b5555eb3aaf1aa4b55937254989" Oct 31 00:48:05.120384 containerd[1459]: time="2025-10-31T00:48:05.120355748Z" level=info msg="StopPodSandbox for \"3e2538531b1147fcaeb5e51daf1260061d97c1c054dec48b928901c1215171f0\"" Oct 31 00:48:05.120476 containerd[1459]: time="2025-10-31T00:48:05.120448753Z" level=info msg="StopPodSandbox for \"1df4d0643c65cfaf1ecef80a4e3d324aca891b5555eb3aaf1aa4b55937254989\"" Oct 31 00:48:05.120632 containerd[1459]: time="2025-10-31T00:48:05.120590078Z" level=info msg="StopPodSandbox for \"4917309cf223abb7cb4a0df29d09aafee31e2dfd8f0ad9af9775d5339b749de3\"" Oct 31 00:48:05.121442 containerd[1459]: time="2025-10-31T00:48:05.121415510Z" level=info msg="StopPodSandbox for \"3e0f54bf01b79afe10cb3e31bbbe893d92c60bea7127b57519a0354ffba411ea\"" Oct 31 00:48:05.122427 containerd[1459]: time="2025-10-31T00:48:05.122358202Z" level=info msg="Ensure that sandbox 3e2538531b1147fcaeb5e51daf1260061d97c1c054dec48b928901c1215171f0 in task-service has been cleanup successfully" Oct 31 00:48:05.122476 containerd[1459]: time="2025-10-31T00:48:05.122378219Z" level=info msg="Ensure that sandbox 4917309cf223abb7cb4a0df29d09aafee31e2dfd8f0ad9af9775d5339b749de3 in task-service has been cleanup successfully" Oct 31 00:48:05.122499 containerd[1459]: time="2025-10-31T00:48:05.122383860Z" level=info msg="Ensure that sandbox 3e0f54bf01b79afe10cb3e31bbbe893d92c60bea7127b57519a0354ffba411ea in task-service has been cleanup successfully" Oct 31 00:48:05.122639 containerd[1459]: time="2025-10-31T00:48:05.122369263Z" level=info msg="Ensure that sandbox 1df4d0643c65cfaf1ecef80a4e3d324aca891b5555eb3aaf1aa4b55937254989 in task-service has been cleanup successfully" Oct 31 00:48:05.157695 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3e0f54bf01b79afe10cb3e31bbbe893d92c60bea7127b57519a0354ffba411ea-shm.mount: Deactivated successfully. 
Oct 31 00:48:05.163677 containerd[1459]: time="2025-10-31T00:48:05.163618644Z" level=error msg="StopPodSandbox for \"1df4d0643c65cfaf1ecef80a4e3d324aca891b5555eb3aaf1aa4b55937254989\" failed" error="failed to destroy network for sandbox \"1df4d0643c65cfaf1ecef80a4e3d324aca891b5555eb3aaf1aa4b55937254989\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:48:05.163949 kubelet[2502]: E1031 00:48:05.163908 2502 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1df4d0643c65cfaf1ecef80a4e3d324aca891b5555eb3aaf1aa4b55937254989\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1df4d0643c65cfaf1ecef80a4e3d324aca891b5555eb3aaf1aa4b55937254989" Oct 31 00:48:05.164037 kubelet[2502]: E1031 00:48:05.163977 2502 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1df4d0643c65cfaf1ecef80a4e3d324aca891b5555eb3aaf1aa4b55937254989"} Oct 31 00:48:05.164076 kubelet[2502]: E1031 00:48:05.164058 2502 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7d011812-0c54-49d2-a84d-25c0746a58a0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1df4d0643c65cfaf1ecef80a4e3d324aca891b5555eb3aaf1aa4b55937254989\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 00:48:05.164150 kubelet[2502]: E1031 00:48:05.164090 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7d011812-0c54-49d2-a84d-25c0746a58a0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1df4d0643c65cfaf1ecef80a4e3d324aca891b5555eb3aaf1aa4b55937254989\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-78f5ccdb8f-sfj2g" podUID="7d011812-0c54-49d2-a84d-25c0746a58a0" Oct 31 00:48:05.169645 containerd[1459]: time="2025-10-31T00:48:05.169596239Z" level=error msg="StopPodSandbox for \"3e2538531b1147fcaeb5e51daf1260061d97c1c054dec48b928901c1215171f0\" failed" error="failed to destroy network for sandbox \"3e2538531b1147fcaeb5e51daf1260061d97c1c054dec48b928901c1215171f0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:48:05.169914 kubelet[2502]: E1031 00:48:05.169873 2502 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3e2538531b1147fcaeb5e51daf1260061d97c1c054dec48b928901c1215171f0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3e2538531b1147fcaeb5e51daf1260061d97c1c054dec48b928901c1215171f0" Oct 31 00:48:05.169973 kubelet[2502]: E1031 00:48:05.169944 2502 
kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3e2538531b1147fcaeb5e51daf1260061d97c1c054dec48b928901c1215171f0"} Oct 31 00:48:05.170004 kubelet[2502]: E1031 00:48:05.169989 2502 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4a3f6669-a62a-42ec-9a82-372bbb7049fb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3e2538531b1147fcaeb5e51daf1260061d97c1c054dec48b928901c1215171f0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 00:48:05.170069 kubelet[2502]: E1031 00:48:05.170021 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4a3f6669-a62a-42ec-9a82-372bbb7049fb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3e2538531b1147fcaeb5e51daf1260061d97c1c054dec48b928901c1215171f0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7cf7fddbf6-qfkg6" podUID="4a3f6669-a62a-42ec-9a82-372bbb7049fb" Oct 31 00:48:05.174945 containerd[1459]: time="2025-10-31T00:48:05.174872986Z" level=error msg="StopPodSandbox for \"3e0f54bf01b79afe10cb3e31bbbe893d92c60bea7127b57519a0354ffba411ea\" failed" error="failed to destroy network for sandbox \"3e0f54bf01b79afe10cb3e31bbbe893d92c60bea7127b57519a0354ffba411ea\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:48:05.175508 kubelet[2502]: E1031 00:48:05.175346 2502 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3e0f54bf01b79afe10cb3e31bbbe893d92c60bea7127b57519a0354ffba411ea\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3e0f54bf01b79afe10cb3e31bbbe893d92c60bea7127b57519a0354ffba411ea" Oct 31 00:48:05.175508 kubelet[2502]: E1031 00:48:05.175412 2502 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3e0f54bf01b79afe10cb3e31bbbe893d92c60bea7127b57519a0354ffba411ea"} Oct 31 00:48:05.175508 kubelet[2502]: E1031 00:48:05.175452 2502 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d615dcdd-9217-4b99-9985-812be6d75b53\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3e0f54bf01b79afe10cb3e31bbbe893d92c60bea7127b57519a0354ffba411ea\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 00:48:05.175508 kubelet[2502]: E1031 00:48:05.175477 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d615dcdd-9217-4b99-9985-812be6d75b53\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3e0f54bf01b79afe10cb3e31bbbe893d92c60bea7127b57519a0354ffba411ea\\\": 
plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cznnv" podUID="d615dcdd-9217-4b99-9985-812be6d75b53" Oct 31 00:48:05.181697 containerd[1459]: time="2025-10-31T00:48:05.181641809Z" level=error msg="StopPodSandbox for \"4917309cf223abb7cb4a0df29d09aafee31e2dfd8f0ad9af9775d5339b749de3\" failed" error="failed to destroy network for sandbox \"4917309cf223abb7cb4a0df29d09aafee31e2dfd8f0ad9af9775d5339b749de3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:48:05.182000 kubelet[2502]: E1031 00:48:05.181952 2502 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4917309cf223abb7cb4a0df29d09aafee31e2dfd8f0ad9af9775d5339b749de3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4917309cf223abb7cb4a0df29d09aafee31e2dfd8f0ad9af9775d5339b749de3" Oct 31 00:48:05.182057 kubelet[2502]: E1031 00:48:05.182009 2502 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4917309cf223abb7cb4a0df29d09aafee31e2dfd8f0ad9af9775d5339b749de3"} Oct 31 00:48:05.182057 kubelet[2502]: E1031 00:48:05.182045 2502 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f0ebaf56-bc9f-4f20-80ce-c5c77074a573\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4917309cf223abb7cb4a0df29d09aafee31e2dfd8f0ad9af9775d5339b749de3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 00:48:05.182146 kubelet[2502]: E1031 00:48:05.182069 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f0ebaf56-bc9f-4f20-80ce-c5c77074a573\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4917309cf223abb7cb4a0df29d09aafee31e2dfd8f0ad9af9775d5339b749de3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7cf7fddbf6-nr7tc" podUID="f0ebaf56-bc9f-4f20-80ce-c5c77074a573" Oct 31 00:48:05.461313 containerd[1459]: time="2025-10-31T00:48:05.460496857Z" level=error msg="Failed to destroy network for sandbox \"fac503fc3e01ef5dc21369d30d6a7b813e3105bd16721a0d618cbd9ef280f493\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:48:05.463422 containerd[1459]: time="2025-10-31T00:48:05.463356411Z" level=error msg="encountered an error cleaning up failed sandbox \"fac503fc3e01ef5dc21369d30d6a7b813e3105bd16721a0d618cbd9ef280f493\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Oct 31 00:48:05.463555 containerd[1459]: time="2025-10-31T00:48:05.463462111Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5ddbb6d7b7-2kd7r,Uid:9fc98e4f-8668-4337-afe0-a221fca95b05,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fac503fc3e01ef5dc21369d30d6a7b813e3105bd16721a0d618cbd9ef280f493\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:48:05.463831 kubelet[2502]: E1031 00:48:05.463788 2502 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fac503fc3e01ef5dc21369d30d6a7b813e3105bd16721a0d618cbd9ef280f493\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:48:05.463899 kubelet[2502]: E1031 00:48:05.463867 2502 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fac503fc3e01ef5dc21369d30d6a7b813e3105bd16721a0d618cbd9ef280f493\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5ddbb6d7b7-2kd7r" Oct 31 00:48:05.463929 kubelet[2502]: E1031 00:48:05.463900 2502 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fac503fc3e01ef5dc21369d30d6a7b813e3105bd16721a0d618cbd9ef280f493\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5ddbb6d7b7-2kd7r" Oct 31 00:48:05.464028 kubelet[2502]: E1031 00:48:05.463976 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5ddbb6d7b7-2kd7r_calico-system(9fc98e4f-8668-4337-afe0-a221fca95b05)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5ddbb6d7b7-2kd7r_calico-system(9fc98e4f-8668-4337-afe0-a221fca95b05)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fac503fc3e01ef5dc21369d30d6a7b813e3105bd16721a0d618cbd9ef280f493\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5ddbb6d7b7-2kd7r" podUID="9fc98e4f-8668-4337-afe0-a221fca95b05" Oct 31 00:48:05.465261 containerd[1459]: time="2025-10-31T00:48:05.465125708Z" level=error msg="Failed to destroy network for sandbox \"08372b32327ed178fb06ac43eaaaf9ff3ba1e8b63c63cfd0410a9890096b1ea2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:48:05.465920 containerd[1459]: time="2025-10-31T00:48:05.465622862Z" level=error msg="encountered an error cleaning up failed sandbox \"08372b32327ed178fb06ac43eaaaf9ff3ba1e8b63c63cfd0410a9890096b1ea2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:48:05.466321 containerd[1459]: time="2025-10-31T00:48:05.466032572Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-47d96,Uid:a6a2171a-de8b-4154-86b8-cb6aefca8e5b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"08372b32327ed178fb06ac43eaaaf9ff3ba1e8b63c63cfd0410a9890096b1ea2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:48:05.466773 containerd[1459]: time="2025-10-31T00:48:05.466272884Z" level=error msg="Failed to destroy network for sandbox \"1e2381349217450f97fa9b9526a3392c8da38ebba799a9c84d3c5e2334381dfc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:48:05.466843 kubelet[2502]: E1031 00:48:05.466786 2502 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08372b32327ed178fb06ac43eaaaf9ff3ba1e8b63c63cfd0410a9890096b1ea2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:48:05.466912 kubelet[2502]: E1031 00:48:05.466874 2502 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08372b32327ed178fb06ac43eaaaf9ff3ba1e8b63c63cfd0410a9890096b1ea2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-47d96" Oct 31 00:48:05.466949 kubelet[2502]: E1031 00:48:05.466915 2502 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08372b32327ed178fb06ac43eaaaf9ff3ba1e8b63c63cfd0410a9890096b1ea2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-47d96" Oct 31 00:48:05.467075 kubelet[2502]: E1031 00:48:05.466979 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-47d96_calico-system(a6a2171a-de8b-4154-86b8-cb6aefca8e5b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-47d96_calico-system(a6a2171a-de8b-4154-86b8-cb6aefca8e5b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"08372b32327ed178fb06ac43eaaaf9ff3ba1e8b63c63cfd0410a9890096b1ea2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-47d96" podUID="a6a2171a-de8b-4154-86b8-cb6aefca8e5b" Oct 31 00:48:05.467491 containerd[1459]: time="2025-10-31T00:48:05.467456899Z" level=error msg="encountered an error cleaning up failed sandbox \"1e2381349217450f97fa9b9526a3392c8da38ebba799a9c84d3c5e2334381dfc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:48:05.468543 containerd[1459]: time="2025-10-31T00:48:05.467499529Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-h9lbq,Uid:4dc3fd13-e452-47cf-9e08-a4a9c785070b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1e2381349217450f97fa9b9526a3392c8da38ebba799a9c84d3c5e2334381dfc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:48:05.468691 kubelet[2502]: E1031 00:48:05.467645 2502 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e2381349217450f97fa9b9526a3392c8da38ebba799a9c84d3c5e2334381dfc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:48:05.468691 kubelet[2502]: E1031 00:48:05.467681 2502 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e2381349217450f97fa9b9526a3392c8da38ebba799a9c84d3c5e2334381dfc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-h9lbq" Oct 31 00:48:05.468691 kubelet[2502]: E1031 00:48:05.467703 2502 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e2381349217450f97fa9b9526a3392c8da38ebba799a9c84d3c5e2334381dfc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-h9lbq" Oct 31 00:48:05.469010 kubelet[2502]: E1031 00:48:05.467772 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-h9lbq_kube-system(4dc3fd13-e452-47cf-9e08-a4a9c785070b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-h9lbq_kube-system(4dc3fd13-e452-47cf-9e08-a4a9c785070b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1e2381349217450f97fa9b9526a3392c8da38ebba799a9c84d3c5e2334381dfc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-h9lbq" podUID="4dc3fd13-e452-47cf-9e08-a4a9c785070b" Oct 31 00:48:05.472873 containerd[1459]: time="2025-10-31T00:48:05.472829847Z" level=error msg="Failed to destroy network for sandbox \"b876a09b7cea07fc5b5f0fb2dd5212854240663bab0c4ee0ca39a509b8f0cd1d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:48:05.473210 containerd[1459]: time="2025-10-31T00:48:05.473175959Z" level=error msg="encountered an error cleaning up failed sandbox \"b876a09b7cea07fc5b5f0fb2dd5212854240663bab0c4ee0ca39a509b8f0cd1d\", marking sandbox state as SANDBOX_UNKNOWN" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:48:05.473327 containerd[1459]: time="2025-10-31T00:48:05.473224750Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8zcwq,Uid:f5a896d8-63fb-485d-b0d7-8486be09050d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b876a09b7cea07fc5b5f0fb2dd5212854240663bab0c4ee0ca39a509b8f0cd1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:48:05.473465 kubelet[2502]: E1031 00:48:05.473430 2502 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b876a09b7cea07fc5b5f0fb2dd5212854240663bab0c4ee0ca39a509b8f0cd1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:48:05.473537 kubelet[2502]: E1031 00:48:05.473486 2502 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b876a09b7cea07fc5b5f0fb2dd5212854240663bab0c4ee0ca39a509b8f0cd1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-8zcwq" Oct 31 00:48:05.473537 kubelet[2502]: E1031 00:48:05.473511 2502 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b876a09b7cea07fc5b5f0fb2dd5212854240663bab0c4ee0ca39a509b8f0cd1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-8zcwq" Oct 31 00:48:05.473604 kubelet[2502]: E1031 00:48:05.473562 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-8zcwq_kube-system(f5a896d8-63fb-485d-b0d7-8486be09050d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-8zcwq_kube-system(f5a896d8-63fb-485d-b0d7-8486be09050d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b876a09b7cea07fc5b5f0fb2dd5212854240663bab0c4ee0ca39a509b8f0cd1d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-8zcwq" podUID="f5a896d8-63fb-485d-b0d7-8486be09050d" Oct 31 00:48:06.123253 kubelet[2502]: I1031 00:48:06.123193 2502 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08372b32327ed178fb06ac43eaaaf9ff3ba1e8b63c63cfd0410a9890096b1ea2" Oct 31 00:48:06.124166 containerd[1459]: time="2025-10-31T00:48:06.124054609Z" level=info msg="StopPodSandbox for \"08372b32327ed178fb06ac43eaaaf9ff3ba1e8b63c63cfd0410a9890096b1ea2\"" Oct 31 00:48:06.124727 containerd[1459]: time="2025-10-31T00:48:06.124235680Z" level=info msg="Ensure that sandbox 08372b32327ed178fb06ac43eaaaf9ff3ba1e8b63c63cfd0410a9890096b1ea2 in task-service has 
been cleanup successfully" Oct 31 00:48:06.124804 kubelet[2502]: I1031 00:48:06.124286 2502 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fac503fc3e01ef5dc21369d30d6a7b813e3105bd16721a0d618cbd9ef280f493" Oct 31 00:48:06.125282 containerd[1459]: time="2025-10-31T00:48:06.125240829Z" level=info msg="StopPodSandbox for \"fac503fc3e01ef5dc21369d30d6a7b813e3105bd16721a0d618cbd9ef280f493\"" Oct 31 00:48:06.125873 containerd[1459]: time="2025-10-31T00:48:06.125544429Z" level=info msg="Ensure that sandbox fac503fc3e01ef5dc21369d30d6a7b813e3105bd16721a0d618cbd9ef280f493 in task-service has been cleanup successfully" Oct 31 00:48:06.126274 kubelet[2502]: I1031 00:48:06.126245 2502 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e2381349217450f97fa9b9526a3392c8da38ebba799a9c84d3c5e2334381dfc" Oct 31 00:48:06.126807 containerd[1459]: time="2025-10-31T00:48:06.126772417Z" level=info msg="StopPodSandbox for \"1e2381349217450f97fa9b9526a3392c8da38ebba799a9c84d3c5e2334381dfc\"" Oct 31 00:48:06.126937 containerd[1459]: time="2025-10-31T00:48:06.126923562Z" level=info msg="Ensure that sandbox 1e2381349217450f97fa9b9526a3392c8da38ebba799a9c84d3c5e2334381dfc in task-service has been cleanup successfully" Oct 31 00:48:06.128753 kubelet[2502]: I1031 00:48:06.128578 2502 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b876a09b7cea07fc5b5f0fb2dd5212854240663bab0c4ee0ca39a509b8f0cd1d" Oct 31 00:48:06.129233 containerd[1459]: time="2025-10-31T00:48:06.129201623Z" level=info msg="StopPodSandbox for \"b876a09b7cea07fc5b5f0fb2dd5212854240663bab0c4ee0ca39a509b8f0cd1d\"" Oct 31 00:48:06.129960 containerd[1459]: time="2025-10-31T00:48:06.129714767Z" level=info msg="Ensure that sandbox b876a09b7cea07fc5b5f0fb2dd5212854240663bab0c4ee0ca39a509b8f0cd1d in task-service has been cleanup successfully" Oct 31 00:48:06.152182 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-08372b32327ed178fb06ac43eaaaf9ff3ba1e8b63c63cfd0410a9890096b1ea2-shm.mount: Deactivated successfully. Oct 31 00:48:06.152308 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fac503fc3e01ef5dc21369d30d6a7b813e3105bd16721a0d618cbd9ef280f493-shm.mount: Deactivated successfully. 
Oct 31 00:48:06.159590 containerd[1459]: time="2025-10-31T00:48:06.159515372Z" level=error msg="StopPodSandbox for \"08372b32327ed178fb06ac43eaaaf9ff3ba1e8b63c63cfd0410a9890096b1ea2\" failed" error="failed to destroy network for sandbox \"08372b32327ed178fb06ac43eaaaf9ff3ba1e8b63c63cfd0410a9890096b1ea2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:48:06.159929 kubelet[2502]: E1031 00:48:06.159858 2502 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"08372b32327ed178fb06ac43eaaaf9ff3ba1e8b63c63cfd0410a9890096b1ea2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="08372b32327ed178fb06ac43eaaaf9ff3ba1e8b63c63cfd0410a9890096b1ea2" Oct 31 00:48:06.159993 kubelet[2502]: E1031 00:48:06.159943 2502 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"08372b32327ed178fb06ac43eaaaf9ff3ba1e8b63c63cfd0410a9890096b1ea2"} Oct 31 00:48:06.160024 kubelet[2502]: E1031 00:48:06.159995 2502 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a6a2171a-de8b-4154-86b8-cb6aefca8e5b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"08372b32327ed178fb06ac43eaaaf9ff3ba1e8b63c63cfd0410a9890096b1ea2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 00:48:06.160149 kubelet[2502]: E1031 00:48:06.160031 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a6a2171a-de8b-4154-86b8-cb6aefca8e5b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"08372b32327ed178fb06ac43eaaaf9ff3ba1e8b63c63cfd0410a9890096b1ea2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-47d96" podUID="a6a2171a-de8b-4154-86b8-cb6aefca8e5b" Oct 31 00:48:06.173578 containerd[1459]: time="2025-10-31T00:48:06.173478122Z" level=error msg="StopPodSandbox for \"b876a09b7cea07fc5b5f0fb2dd5212854240663bab0c4ee0ca39a509b8f0cd1d\" failed" error="failed to destroy network for sandbox \"b876a09b7cea07fc5b5f0fb2dd5212854240663bab0c4ee0ca39a509b8f0cd1d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:48:06.173875 kubelet[2502]: E1031 00:48:06.173821 2502 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b876a09b7cea07fc5b5f0fb2dd5212854240663bab0c4ee0ca39a509b8f0cd1d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b876a09b7cea07fc5b5f0fb2dd5212854240663bab0c4ee0ca39a509b8f0cd1d" Oct 31 00:48:06.173966 kubelet[2502]: E1031 00:48:06.173881 2502 
kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b876a09b7cea07fc5b5f0fb2dd5212854240663bab0c4ee0ca39a509b8f0cd1d"} Oct 31 00:48:06.173966 kubelet[2502]: E1031 00:48:06.173918 2502 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f5a896d8-63fb-485d-b0d7-8486be09050d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b876a09b7cea07fc5b5f0fb2dd5212854240663bab0c4ee0ca39a509b8f0cd1d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 00:48:06.173966 kubelet[2502]: E1031 00:48:06.173944 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f5a896d8-63fb-485d-b0d7-8486be09050d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b876a09b7cea07fc5b5f0fb2dd5212854240663bab0c4ee0ca39a509b8f0cd1d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-8zcwq" podUID="f5a896d8-63fb-485d-b0d7-8486be09050d" Oct 31 00:48:06.175680 containerd[1459]: time="2025-10-31T00:48:06.175629795Z" level=error msg="StopPodSandbox for \"1e2381349217450f97fa9b9526a3392c8da38ebba799a9c84d3c5e2334381dfc\" failed" error="failed to destroy network for sandbox \"1e2381349217450f97fa9b9526a3392c8da38ebba799a9c84d3c5e2334381dfc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:48:06.175880 kubelet[2502]: E1031 00:48:06.175845 2502 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1e2381349217450f97fa9b9526a3392c8da38ebba799a9c84d3c5e2334381dfc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1e2381349217450f97fa9b9526a3392c8da38ebba799a9c84d3c5e2334381dfc" Oct 31 00:48:06.175934 kubelet[2502]: E1031 00:48:06.175890 2502 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1e2381349217450f97fa9b9526a3392c8da38ebba799a9c84d3c5e2334381dfc"} Oct 31 00:48:06.175934 kubelet[2502]: E1031 00:48:06.175923 2502 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4dc3fd13-e452-47cf-9e08-a4a9c785070b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1e2381349217450f97fa9b9526a3392c8da38ebba799a9c84d3c5e2334381dfc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 00:48:06.176038 kubelet[2502]: E1031 00:48:06.175950 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4dc3fd13-e452-47cf-9e08-a4a9c785070b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1e2381349217450f97fa9b9526a3392c8da38ebba799a9c84d3c5e2334381dfc\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-h9lbq" podUID="4dc3fd13-e452-47cf-9e08-a4a9c785070b" Oct 31 00:48:06.181662 containerd[1459]: time="2025-10-31T00:48:06.181596689Z" level=error msg="StopPodSandbox for \"fac503fc3e01ef5dc21369d30d6a7b813e3105bd16721a0d618cbd9ef280f493\" failed" error="failed to destroy network for sandbox \"fac503fc3e01ef5dc21369d30d6a7b813e3105bd16721a0d618cbd9ef280f493\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:48:06.182013 kubelet[2502]: E1031 00:48:06.181939 2502 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fac503fc3e01ef5dc21369d30d6a7b813e3105bd16721a0d618cbd9ef280f493\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fac503fc3e01ef5dc21369d30d6a7b813e3105bd16721a0d618cbd9ef280f493" Oct 31 00:48:06.182067 kubelet[2502]: E1031 00:48:06.182025 2502 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fac503fc3e01ef5dc21369d30d6a7b813e3105bd16721a0d618cbd9ef280f493"} Oct 31 00:48:06.182127 kubelet[2502]: E1031 00:48:06.182067 2502 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9fc98e4f-8668-4337-afe0-a221fca95b05\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fac503fc3e01ef5dc21369d30d6a7b813e3105bd16721a0d618cbd9ef280f493\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 00:48:06.182127 kubelet[2502]: E1031 00:48:06.182096 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9fc98e4f-8668-4337-afe0-a221fca95b05\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fac503fc3e01ef5dc21369d30d6a7b813e3105bd16721a0d618cbd9ef280f493\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5ddbb6d7b7-2kd7r" podUID="9fc98e4f-8668-4337-afe0-a221fca95b05" Oct 31 00:48:08.741727 systemd[1]: Started sshd@7-10.0.0.137:22-10.0.0.1:53536.service - OpenSSH per-connection server daemon (10.0.0.1:53536). Oct 31 00:48:08.791305 sshd[3770]: Accepted publickey for core from 10.0.0.1 port 53536 ssh2: RSA SHA256:cVXqL/AcZ9wouFvGoeGKDlBlR+czTkkJFN8I4b76Y5g Oct 31 00:48:08.793248 sshd[3770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:48:08.798818 systemd-logind[1449]: New session 8 of user core. Oct 31 00:48:08.804726 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 31 00:48:08.957006 sshd[3770]: pam_unix(sshd:session): session closed for user core Oct 31 00:48:08.962204 systemd[1]: sshd@7-10.0.0.137:22-10.0.0.1:53536.service: Deactivated successfully. Oct 31 00:48:08.964542 systemd[1]: session-8.scope: Deactivated successfully. 
Oct 31 00:48:08.965355 systemd-logind[1449]: Session 8 logged out. Waiting for processes to exit. Oct 31 00:48:08.966587 systemd-logind[1449]: Removed session 8. Oct 31 00:48:12.695261 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount719675240.mount: Deactivated successfully. Oct 31 00:48:13.614267 containerd[1459]: time="2025-10-31T00:48:13.614205942Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:48:13.615294 containerd[1459]: time="2025-10-31T00:48:13.615247419Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Oct 31 00:48:13.616799 containerd[1459]: time="2025-10-31T00:48:13.616730825Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:48:13.618853 containerd[1459]: time="2025-10-31T00:48:13.618812323Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:48:13.619383 containerd[1459]: time="2025-10-31T00:48:13.619338591Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 9.305882722s" Oct 31 00:48:13.619453 containerd[1459]: time="2025-10-31T00:48:13.619378917Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Oct 31 00:48:13.634495 containerd[1459]: time="2025-10-31T00:48:13.634446772Z" level=info msg="CreateContainer within sandbox \"23442d4fa5084966ac8b3c8f1705f1d7e5c8d96def7d9efba47b283cd1cd8d6e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 31 00:48:13.671339 containerd[1459]: time="2025-10-31T00:48:13.671243742Z" level=info msg="CreateContainer within sandbox \"23442d4fa5084966ac8b3c8f1705f1d7e5c8d96def7d9efba47b283cd1cd8d6e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b50136bd7992e91fbfe35ff87b9d89bf746c2d9ed1062f791a70dbac3db88e40\"" Oct 31 00:48:13.673262 containerd[1459]: time="2025-10-31T00:48:13.672068832Z" level=info msg="StartContainer for \"b50136bd7992e91fbfe35ff87b9d89bf746c2d9ed1062f791a70dbac3db88e40\"" Oct 31 00:48:13.728580 systemd[1]: Started cri-containerd-b50136bd7992e91fbfe35ff87b9d89bf746c2d9ed1062f791a70dbac3db88e40.scope - libcontainer container b50136bd7992e91fbfe35ff87b9d89bf746c2d9ed1062f791a70dbac3db88e40. Oct 31 00:48:13.986094 systemd[1]: Started sshd@8-10.0.0.137:22-10.0.0.1:59574.service - OpenSSH per-connection server daemon (10.0.0.1:59574). 
Oct 31 00:48:14.147871 containerd[1459]: time="2025-10-31T00:48:14.147821662Z" level=info msg="StartContainer for \"b50136bd7992e91fbfe35ff87b9d89bf746c2d9ed1062f791a70dbac3db88e40\" returns successfully" Oct 31 00:48:14.153152 kubelet[2502]: E1031 00:48:14.151829 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:48:14.174795 kubelet[2502]: I1031 00:48:14.173973 2502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-v54km" podStartSLOduration=1.835611756 podStartE2EDuration="27.173952859s" podCreationTimestamp="2025-10-31 00:47:47 +0000 UTC" firstStartedPulling="2025-10-31 00:47:48.281846051 +0000 UTC m=+23.821394211" lastFinishedPulling="2025-10-31 00:48:13.620187154 +0000 UTC m=+49.159735314" observedRunningTime="2025-10-31 00:48:14.173658486 +0000 UTC m=+49.713206666" watchObservedRunningTime="2025-10-31 00:48:14.173952859 +0000 UTC m=+49.713501029" Oct 31 00:48:14.193123 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 31 00:48:14.193259 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Oct 31 00:48:14.266556 sshd[3830]: Accepted publickey for core from 10.0.0.1 port 59574 ssh2: RSA SHA256:cVXqL/AcZ9wouFvGoeGKDlBlR+czTkkJFN8I4b76Y5g Oct 31 00:48:14.273883 sshd[3830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:48:14.283887 systemd-logind[1449]: New session 9 of user core. Oct 31 00:48:14.293142 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 31 00:48:14.312576 containerd[1459]: time="2025-10-31T00:48:14.312495262Z" level=info msg="StopPodSandbox for \"fac503fc3e01ef5dc21369d30d6a7b813e3105bd16721a0d618cbd9ef280f493\"" Oct 31 00:48:14.509672 sshd[3830]: pam_unix(sshd:session): session closed for user core Oct 31 00:48:14.519069 systemd[1]: sshd@8-10.0.0.137:22-10.0.0.1:59574.service: Deactivated successfully. Oct 31 00:48:14.524230 systemd[1]: session-9.scope: Deactivated successfully. Oct 31 00:48:14.527388 systemd-logind[1449]: Session 9 logged out. Waiting for processes to exit. Oct 31 00:48:14.529370 systemd-logind[1449]: Removed session 9. Oct 31 00:48:14.552935 containerd[1459]: 2025-10-31 00:48:14.417 [INFO][3880] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fac503fc3e01ef5dc21369d30d6a7b813e3105bd16721a0d618cbd9ef280f493" Oct 31 00:48:14.552935 containerd[1459]: 2025-10-31 00:48:14.418 [INFO][3880] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fac503fc3e01ef5dc21369d30d6a7b813e3105bd16721a0d618cbd9ef280f493" iface="eth0" netns="/var/run/netns/cni-e0a89f09-dc6e-5fbb-2fc7-090380662cde" Oct 31 00:48:14.552935 containerd[1459]: 2025-10-31 00:48:14.418 [INFO][3880] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fac503fc3e01ef5dc21369d30d6a7b813e3105bd16721a0d618cbd9ef280f493" iface="eth0" netns="/var/run/netns/cni-e0a89f09-dc6e-5fbb-2fc7-090380662cde" Oct 31 00:48:14.552935 containerd[1459]: 2025-10-31 00:48:14.421 [INFO][3880] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="fac503fc3e01ef5dc21369d30d6a7b813e3105bd16721a0d618cbd9ef280f493" iface="eth0" netns="/var/run/netns/cni-e0a89f09-dc6e-5fbb-2fc7-090380662cde" Oct 31 00:48:14.552935 containerd[1459]: 2025-10-31 00:48:14.421 [INFO][3880] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fac503fc3e01ef5dc21369d30d6a7b813e3105bd16721a0d618cbd9ef280f493" Oct 31 00:48:14.552935 containerd[1459]: 2025-10-31 00:48:14.421 [INFO][3880] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fac503fc3e01ef5dc21369d30d6a7b813e3105bd16721a0d618cbd9ef280f493" Oct 31 00:48:14.552935 containerd[1459]: 2025-10-31 00:48:14.528 [INFO][3898] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fac503fc3e01ef5dc21369d30d6a7b813e3105bd16721a0d618cbd9ef280f493" HandleID="k8s-pod-network.fac503fc3e01ef5dc21369d30d6a7b813e3105bd16721a0d618cbd9ef280f493" Workload="localhost-k8s-whisker--5ddbb6d7b7--2kd7r-eth0" Oct 31 00:48:14.552935 containerd[1459]: 2025-10-31 00:48:14.529 [INFO][3898] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:48:14.552935 containerd[1459]: 2025-10-31 00:48:14.529 [INFO][3898] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:48:14.552935 containerd[1459]: 2025-10-31 00:48:14.538 [WARNING][3898] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="fac503fc3e01ef5dc21369d30d6a7b813e3105bd16721a0d618cbd9ef280f493" HandleID="k8s-pod-network.fac503fc3e01ef5dc21369d30d6a7b813e3105bd16721a0d618cbd9ef280f493" Workload="localhost-k8s-whisker--5ddbb6d7b7--2kd7r-eth0" Oct 31 00:48:14.552935 containerd[1459]: 2025-10-31 00:48:14.538 [INFO][3898] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fac503fc3e01ef5dc21369d30d6a7b813e3105bd16721a0d618cbd9ef280f493" HandleID="k8s-pod-network.fac503fc3e01ef5dc21369d30d6a7b813e3105bd16721a0d618cbd9ef280f493" Workload="localhost-k8s-whisker--5ddbb6d7b7--2kd7r-eth0" Oct 31 00:48:14.552935 containerd[1459]: 2025-10-31 00:48:14.541 [INFO][3898] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:48:14.552935 containerd[1459]: 2025-10-31 00:48:14.549 [INFO][3880] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fac503fc3e01ef5dc21369d30d6a7b813e3105bd16721a0d618cbd9ef280f493" Oct 31 00:48:14.558265 containerd[1459]: time="2025-10-31T00:48:14.558222086Z" level=info msg="TearDown network for sandbox \"fac503fc3e01ef5dc21369d30d6a7b813e3105bd16721a0d618cbd9ef280f493\" successfully" Oct 31 00:48:14.558265 containerd[1459]: time="2025-10-31T00:48:14.558259146Z" level=info msg="StopPodSandbox for \"fac503fc3e01ef5dc21369d30d6a7b813e3105bd16721a0d618cbd9ef280f493\" returns successfully" Oct 31 00:48:14.558674 systemd[1]: run-netns-cni\x2de0a89f09\x2ddc6e\x2d5fbb\x2d2fc7\x2d090380662cde.mount: Deactivated successfully. 
Oct 31 00:48:14.624243 kubelet[2502]: I1031 00:48:14.624177 2502 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9fc98e4f-8668-4337-afe0-a221fca95b05-whisker-ca-bundle\") pod \"9fc98e4f-8668-4337-afe0-a221fca95b05\" (UID: \"9fc98e4f-8668-4337-afe0-a221fca95b05\") " Oct 31 00:48:14.624243 kubelet[2502]: I1031 00:48:14.624235 2502 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shhn4\" (UniqueName: \"kubernetes.io/projected/9fc98e4f-8668-4337-afe0-a221fca95b05-kube-api-access-shhn4\") pod \"9fc98e4f-8668-4337-afe0-a221fca95b05\" (UID: \"9fc98e4f-8668-4337-afe0-a221fca95b05\") " Oct 31 00:48:14.624478 kubelet[2502]: I1031 00:48:14.624275 2502 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9fc98e4f-8668-4337-afe0-a221fca95b05-whisker-backend-key-pair\") pod \"9fc98e4f-8668-4337-afe0-a221fca95b05\" (UID: \"9fc98e4f-8668-4337-afe0-a221fca95b05\") " Oct 31 00:48:14.624859 kubelet[2502]: I1031 00:48:14.624815 2502 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fc98e4f-8668-4337-afe0-a221fca95b05-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "9fc98e4f-8668-4337-afe0-a221fca95b05" (UID: "9fc98e4f-8668-4337-afe0-a221fca95b05"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 31 00:48:14.633211 kubelet[2502]: I1031 00:48:14.632230 2502 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fc98e4f-8668-4337-afe0-a221fca95b05-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "9fc98e4f-8668-4337-afe0-a221fca95b05" (UID: "9fc98e4f-8668-4337-afe0-a221fca95b05"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 31 00:48:14.633211 kubelet[2502]: I1031 00:48:14.632459 2502 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fc98e4f-8668-4337-afe0-a221fca95b05-kube-api-access-shhn4" (OuterVolumeSpecName: "kube-api-access-shhn4") pod "9fc98e4f-8668-4337-afe0-a221fca95b05" (UID: "9fc98e4f-8668-4337-afe0-a221fca95b05"). InnerVolumeSpecName "kube-api-access-shhn4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 31 00:48:14.632739 systemd[1]: var-lib-kubelet-pods-9fc98e4f\x2d8668\x2d4337\x2dafe0\x2da221fca95b05-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dshhn4.mount: Deactivated successfully. Oct 31 00:48:14.632859 systemd[1]: var-lib-kubelet-pods-9fc98e4f\x2d8668\x2d4337\x2dafe0\x2da221fca95b05-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Oct 31 00:48:14.724665 kubelet[2502]: I1031 00:48:14.724616 2502 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9fc98e4f-8668-4337-afe0-a221fca95b05-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Oct 31 00:48:14.724665 kubelet[2502]: I1031 00:48:14.724649 2502 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9fc98e4f-8668-4337-afe0-a221fca95b05-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Oct 31 00:48:14.724665 kubelet[2502]: I1031 00:48:14.724658 2502 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-shhn4\" (UniqueName: \"kubernetes.io/projected/9fc98e4f-8668-4337-afe0-a221fca95b05-kube-api-access-shhn4\") on node \"localhost\" DevicePath \"\"" Oct 31 00:48:15.154825 kubelet[2502]: E1031 00:48:15.154254 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:48:15.160603 systemd[1]: Removed slice kubepods-besteffort-pod9fc98e4f_8668_4337_afe0_a221fca95b05.slice - libcontainer container kubepods-besteffort-pod9fc98e4f_8668_4337_afe0_a221fca95b05.slice. Oct 31 00:48:15.554905 containerd[1459]: time="2025-10-31T00:48:15.554762023Z" level=info msg="StopPodSandbox for \"1df4d0643c65cfaf1ecef80a4e3d324aca891b5555eb3aaf1aa4b55937254989\"" Oct 31 00:48:16.554981 containerd[1459]: time="2025-10-31T00:48:16.554920241Z" level=info msg="StopPodSandbox for \"08372b32327ed178fb06ac43eaaaf9ff3ba1e8b63c63cfd0410a9890096b1ea2\"" Oct 31 00:48:17.609470 containerd[1459]: 2025-10-31 00:48:16.598 [INFO][3963] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1df4d0643c65cfaf1ecef80a4e3d324aca891b5555eb3aaf1aa4b55937254989" Oct 31 00:48:17.609470 containerd[1459]: 2025-10-31 00:48:16.599 [INFO][3963] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1df4d0643c65cfaf1ecef80a4e3d324aca891b5555eb3aaf1aa4b55937254989" iface="eth0" netns="/var/run/netns/cni-d6b3be2b-e1b6-deab-ed5f-134ba1ef2905" Oct 31 00:48:17.609470 containerd[1459]: 2025-10-31 00:48:16.603 [INFO][3963] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1df4d0643c65cfaf1ecef80a4e3d324aca891b5555eb3aaf1aa4b55937254989" iface="eth0" netns="/var/run/netns/cni-d6b3be2b-e1b6-deab-ed5f-134ba1ef2905" Oct 31 00:48:17.609470 containerd[1459]: 2025-10-31 00:48:16.605 [INFO][3963] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="1df4d0643c65cfaf1ecef80a4e3d324aca891b5555eb3aaf1aa4b55937254989" iface="eth0" netns="/var/run/netns/cni-d6b3be2b-e1b6-deab-ed5f-134ba1ef2905" Oct 31 00:48:17.609470 containerd[1459]: 2025-10-31 00:48:16.605 [INFO][3963] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1df4d0643c65cfaf1ecef80a4e3d324aca891b5555eb3aaf1aa4b55937254989" Oct 31 00:48:17.609470 containerd[1459]: 2025-10-31 00:48:16.605 [INFO][3963] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1df4d0643c65cfaf1ecef80a4e3d324aca891b5555eb3aaf1aa4b55937254989" Oct 31 00:48:17.609470 containerd[1459]: 2025-10-31 00:48:16.628 [INFO][4092] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1df4d0643c65cfaf1ecef80a4e3d324aca891b5555eb3aaf1aa4b55937254989" HandleID="k8s-pod-network.1df4d0643c65cfaf1ecef80a4e3d324aca891b5555eb3aaf1aa4b55937254989" Workload="localhost-k8s-calico--kube--controllers--78f5ccdb8f--sfj2g-eth0" Oct 31 00:48:17.609470 containerd[1459]: 2025-10-31 00:48:16.629 [INFO][4092] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:48:17.609470 containerd[1459]: 2025-10-31 00:48:16.629 [INFO][4092] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:48:17.609470 containerd[1459]: 2025-10-31 00:48:17.262 [WARNING][4092] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="1df4d0643c65cfaf1ecef80a4e3d324aca891b5555eb3aaf1aa4b55937254989" HandleID="k8s-pod-network.1df4d0643c65cfaf1ecef80a4e3d324aca891b5555eb3aaf1aa4b55937254989" Workload="localhost-k8s-calico--kube--controllers--78f5ccdb8f--sfj2g-eth0" Oct 31 00:48:17.609470 containerd[1459]: 2025-10-31 00:48:17.262 [INFO][4092] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1df4d0643c65cfaf1ecef80a4e3d324aca891b5555eb3aaf1aa4b55937254989" HandleID="k8s-pod-network.1df4d0643c65cfaf1ecef80a4e3d324aca891b5555eb3aaf1aa4b55937254989" Workload="localhost-k8s-calico--kube--controllers--78f5ccdb8f--sfj2g-eth0" Oct 31 00:48:17.609470 containerd[1459]: 2025-10-31 00:48:17.601 [INFO][4092] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:48:17.609470 containerd[1459]: 2025-10-31 00:48:17.606 [INFO][3963] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1df4d0643c65cfaf1ecef80a4e3d324aca891b5555eb3aaf1aa4b55937254989" Oct 31 00:48:17.611863 containerd[1459]: time="2025-10-31T00:48:17.611017250Z" level=info msg="TearDown network for sandbox \"1df4d0643c65cfaf1ecef80a4e3d324aca891b5555eb3aaf1aa4b55937254989\" successfully" Oct 31 00:48:17.611863 containerd[1459]: time="2025-10-31T00:48:17.611068195Z" level=info msg="StopPodSandbox for \"1df4d0643c65cfaf1ecef80a4e3d324aca891b5555eb3aaf1aa4b55937254989\" returns successfully" Oct 31 00:48:17.613783 containerd[1459]: time="2025-10-31T00:48:17.613746563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78f5ccdb8f-sfj2g,Uid:7d011812-0c54-49d2-a84d-25c0746a58a0,Namespace:calico-system,Attempt:1,}" Oct 31 00:48:17.617294 systemd[1]: run-netns-cni\x2dd6b3be2b\x2de1b6\x2ddeab\x2ded5f\x2d134ba1ef2905.mount: Deactivated successfully. Oct 31 00:48:17.653579 containerd[1459]: 2025-10-31 00:48:17.604 [INFO][4083] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="08372b32327ed178fb06ac43eaaaf9ff3ba1e8b63c63cfd0410a9890096b1ea2" Oct 31 00:48:17.653579 containerd[1459]: 2025-10-31 00:48:17.605 [INFO][4083] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="08372b32327ed178fb06ac43eaaaf9ff3ba1e8b63c63cfd0410a9890096b1ea2" iface="eth0" netns="/var/run/netns/cni-4927f254-5c7c-8697-4447-544371d85810" Oct 31 00:48:17.653579 containerd[1459]: 2025-10-31 00:48:17.605 [INFO][4083] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="08372b32327ed178fb06ac43eaaaf9ff3ba1e8b63c63cfd0410a9890096b1ea2" iface="eth0" netns="/var/run/netns/cni-4927f254-5c7c-8697-4447-544371d85810" Oct 31 00:48:17.653579 containerd[1459]: 2025-10-31 00:48:17.605 [INFO][4083] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="08372b32327ed178fb06ac43eaaaf9ff3ba1e8b63c63cfd0410a9890096b1ea2" iface="eth0" netns="/var/run/netns/cni-4927f254-5c7c-8697-4447-544371d85810" Oct 31 00:48:17.653579 containerd[1459]: 2025-10-31 00:48:17.605 [INFO][4083] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="08372b32327ed178fb06ac43eaaaf9ff3ba1e8b63c63cfd0410a9890096b1ea2" Oct 31 00:48:17.653579 containerd[1459]: 2025-10-31 00:48:17.605 [INFO][4083] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="08372b32327ed178fb06ac43eaaaf9ff3ba1e8b63c63cfd0410a9890096b1ea2" Oct 31 00:48:17.653579 containerd[1459]: 2025-10-31 00:48:17.628 [INFO][4122] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="08372b32327ed178fb06ac43eaaaf9ff3ba1e8b63c63cfd0410a9890096b1ea2" HandleID="k8s-pod-network.08372b32327ed178fb06ac43eaaaf9ff3ba1e8b63c63cfd0410a9890096b1ea2" Workload="localhost-k8s-goldmane--666569f655--47d96-eth0" Oct 31 00:48:17.653579 containerd[1459]: 2025-10-31 00:48:17.628 [INFO][4122] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:48:17.653579 containerd[1459]: 2025-10-31 00:48:17.628 [INFO][4122] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:48:17.653579 containerd[1459]: 2025-10-31 00:48:17.641 [WARNING][4122] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="08372b32327ed178fb06ac43eaaaf9ff3ba1e8b63c63cfd0410a9890096b1ea2" HandleID="k8s-pod-network.08372b32327ed178fb06ac43eaaaf9ff3ba1e8b63c63cfd0410a9890096b1ea2" Workload="localhost-k8s-goldmane--666569f655--47d96-eth0" Oct 31 00:48:17.653579 containerd[1459]: 2025-10-31 00:48:17.641 [INFO][4122] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="08372b32327ed178fb06ac43eaaaf9ff3ba1e8b63c63cfd0410a9890096b1ea2" HandleID="k8s-pod-network.08372b32327ed178fb06ac43eaaaf9ff3ba1e8b63c63cfd0410a9890096b1ea2" Workload="localhost-k8s-goldmane--666569f655--47d96-eth0" Oct 31 00:48:17.653579 containerd[1459]: 2025-10-31 00:48:17.644 [INFO][4122] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:48:17.653579 containerd[1459]: 2025-10-31 00:48:17.648 [INFO][4083] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="08372b32327ed178fb06ac43eaaaf9ff3ba1e8b63c63cfd0410a9890096b1ea2" Oct 31 00:48:17.654145 containerd[1459]: time="2025-10-31T00:48:17.653760411Z" level=info msg="TearDown network for sandbox \"08372b32327ed178fb06ac43eaaaf9ff3ba1e8b63c63cfd0410a9890096b1ea2\" successfully" Oct 31 00:48:17.654145 containerd[1459]: time="2025-10-31T00:48:17.653787502Z" level=info msg="StopPodSandbox for \"08372b32327ed178fb06ac43eaaaf9ff3ba1e8b63c63cfd0410a9890096b1ea2\" returns successfully" Oct 31 00:48:17.654853 containerd[1459]: time="2025-10-31T00:48:17.654801245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-47d96,Uid:a6a2171a-de8b-4154-86b8-cb6aefca8e5b,Namespace:calico-system,Attempt:1,}" Oct 31 00:48:17.656535 systemd[1]: run-netns-cni\x2d4927f254\x2d5c7c\x2d8697\x2d4447\x2d544371d85810.mount: Deactivated successfully. Oct 31 00:48:18.205475 kernel: bpftool[4140]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Oct 31 00:48:18.458156 systemd[1]: Created slice kubepods-besteffort-podbc7de0b5_fad9_4849_950f_64958f0873ad.slice - libcontainer container kubepods-besteffort-podbc7de0b5_fad9_4849_950f_64958f0873ad.slice. Oct 31 00:48:18.515476 systemd-networkd[1388]: vxlan.calico: Link UP Oct 31 00:48:18.515598 systemd-networkd[1388]: vxlan.calico: Gained carrier Oct 31 00:48:18.550955 kubelet[2502]: I1031 00:48:18.549597 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsjqf\" (UniqueName: \"kubernetes.io/projected/bc7de0b5-fad9-4849-950f-64958f0873ad-kube-api-access-jsjqf\") pod \"whisker-6d455ff89f-sljxb\" (UID: \"bc7de0b5-fad9-4849-950f-64958f0873ad\") " pod="calico-system/whisker-6d455ff89f-sljxb" Oct 31 00:48:18.550955 kubelet[2502]: I1031 00:48:18.549655 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc7de0b5-fad9-4849-950f-64958f0873ad-whisker-ca-bundle\") pod \"whisker-6d455ff89f-sljxb\" (UID: \"bc7de0b5-fad9-4849-950f-64958f0873ad\") " pod="calico-system/whisker-6d455ff89f-sljxb" Oct 31 00:48:18.550955 kubelet[2502]: I1031 00:48:18.549683 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/bc7de0b5-fad9-4849-950f-64958f0873ad-whisker-backend-key-pair\") pod \"whisker-6d455ff89f-sljxb\" (UID: \"bc7de0b5-fad9-4849-950f-64958f0873ad\") " pod="calico-system/whisker-6d455ff89f-sljxb" Oct 31 00:48:18.556471 containerd[1459]: time="2025-10-31T00:48:18.556360444Z" level=info msg="StopPodSandbox for \"4917309cf223abb7cb4a0df29d09aafee31e2dfd8f0ad9af9775d5339b749de3\"" Oct 31 00:48:18.565732 kubelet[2502]: I1031 00:48:18.565558 2502 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fc98e4f-8668-4337-afe0-a221fca95b05" path="/var/lib/kubelet/pods/9fc98e4f-8668-4337-afe0-a221fca95b05/volumes" Oct 31 00:48:18.762641 containerd[1459]: time="2025-10-31T00:48:18.762482041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d455ff89f-sljxb,Uid:bc7de0b5-fad9-4849-950f-64958f0873ad,Namespace:calico-system,Attempt:0,}" Oct 31 00:48:18.953827 containerd[1459]: 2025-10-31 00:48:18.878 [INFO][4194] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4917309cf223abb7cb4a0df29d09aafee31e2dfd8f0ad9af9775d5339b749de3" Oct 31 00:48:18.953827 containerd[1459]: 2025-10-31 00:48:18.878 [INFO][4194] cni-plugin/dataplane_linux.go 559: 
Deleting workload's device in netns. ContainerID="4917309cf223abb7cb4a0df29d09aafee31e2dfd8f0ad9af9775d5339b749de3" iface="eth0" netns="/var/run/netns/cni-be65cf5d-8766-ea0c-d41b-ec65ac706396" Oct 31 00:48:18.953827 containerd[1459]: 2025-10-31 00:48:18.879 [INFO][4194] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4917309cf223abb7cb4a0df29d09aafee31e2dfd8f0ad9af9775d5339b749de3" iface="eth0" netns="/var/run/netns/cni-be65cf5d-8766-ea0c-d41b-ec65ac706396" Oct 31 00:48:18.953827 containerd[1459]: 2025-10-31 00:48:18.879 [INFO][4194] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4917309cf223abb7cb4a0df29d09aafee31e2dfd8f0ad9af9775d5339b749de3" iface="eth0" netns="/var/run/netns/cni-be65cf5d-8766-ea0c-d41b-ec65ac706396" Oct 31 00:48:18.953827 containerd[1459]: 2025-10-31 00:48:18.879 [INFO][4194] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4917309cf223abb7cb4a0df29d09aafee31e2dfd8f0ad9af9775d5339b749de3" Oct 31 00:48:18.953827 containerd[1459]: 2025-10-31 00:48:18.879 [INFO][4194] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4917309cf223abb7cb4a0df29d09aafee31e2dfd8f0ad9af9775d5339b749de3" Oct 31 00:48:18.953827 containerd[1459]: 2025-10-31 00:48:18.904 [INFO][4263] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4917309cf223abb7cb4a0df29d09aafee31e2dfd8f0ad9af9775d5339b749de3" HandleID="k8s-pod-network.4917309cf223abb7cb4a0df29d09aafee31e2dfd8f0ad9af9775d5339b749de3" Workload="localhost-k8s-calico--apiserver--7cf7fddbf6--nr7tc-eth0" Oct 31 00:48:18.953827 containerd[1459]: 2025-10-31 00:48:18.904 [INFO][4263] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:48:18.953827 containerd[1459]: 2025-10-31 00:48:18.904 [INFO][4263] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:48:18.953827 containerd[1459]: 2025-10-31 00:48:18.931 [WARNING][4263] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="4917309cf223abb7cb4a0df29d09aafee31e2dfd8f0ad9af9775d5339b749de3" HandleID="k8s-pod-network.4917309cf223abb7cb4a0df29d09aafee31e2dfd8f0ad9af9775d5339b749de3" Workload="localhost-k8s-calico--apiserver--7cf7fddbf6--nr7tc-eth0" Oct 31 00:48:18.953827 containerd[1459]: 2025-10-31 00:48:18.931 [INFO][4263] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4917309cf223abb7cb4a0df29d09aafee31e2dfd8f0ad9af9775d5339b749de3" HandleID="k8s-pod-network.4917309cf223abb7cb4a0df29d09aafee31e2dfd8f0ad9af9775d5339b749de3" Workload="localhost-k8s-calico--apiserver--7cf7fddbf6--nr7tc-eth0" Oct 31 00:48:18.953827 containerd[1459]: 2025-10-31 00:48:18.939 [INFO][4263] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:48:18.953827 containerd[1459]: 2025-10-31 00:48:18.945 [INFO][4194] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4917309cf223abb7cb4a0df29d09aafee31e2dfd8f0ad9af9775d5339b749de3" Oct 31 00:48:18.961299 containerd[1459]: time="2025-10-31T00:48:18.958787141Z" level=info msg="TearDown network for sandbox \"4917309cf223abb7cb4a0df29d09aafee31e2dfd8f0ad9af9775d5339b749de3\" successfully" Oct 31 00:48:18.961299 containerd[1459]: time="2025-10-31T00:48:18.958930711Z" level=info msg="StopPodSandbox for \"4917309cf223abb7cb4a0df29d09aafee31e2dfd8f0ad9af9775d5339b749de3\" returns successfully" Oct 31 00:48:18.960690 systemd[1]: run-netns-cni\x2dbe65cf5d\x2d8766\x2dea0c\x2dd41b\x2dec65ac706396.mount: Deactivated successfully. 
Oct 31 00:48:18.962900 containerd[1459]: time="2025-10-31T00:48:18.962862341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cf7fddbf6-nr7tc,Uid:f0ebaf56-bc9f-4f20-80ce-c5c77074a573,Namespace:calico-apiserver,Attempt:1,}" Oct 31 00:48:19.174203 systemd-networkd[1388]: cali2e77ee5b79b: Link UP Oct 31 00:48:19.176142 systemd-networkd[1388]: cali2e77ee5b79b: Gained carrier Oct 31 00:48:19.217982 containerd[1459]: 2025-10-31 00:48:18.734 [INFO][4202] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--78f5ccdb8f--sfj2g-eth0 calico-kube-controllers-78f5ccdb8f- calico-system 7d011812-0c54-49d2-a84d-25c0746a58a0 991 0 2025-10-31 00:47:48 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:78f5ccdb8f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-78f5ccdb8f-sfj2g eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali2e77ee5b79b [] [] <nil>}} ContainerID="d453cde58e5408e252811279b1f10e87eedfa5a560d1143e3a365d31bd2f6954" Namespace="calico-system" Pod="calico-kube-controllers-78f5ccdb8f-sfj2g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78f5ccdb8f--sfj2g-" Oct 31 00:48:19.217982 containerd[1459]: 2025-10-31 00:48:18.735 [INFO][4202] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d453cde58e5408e252811279b1f10e87eedfa5a560d1143e3a365d31bd2f6954" Namespace="calico-system" Pod="calico-kube-controllers-78f5ccdb8f-sfj2g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78f5ccdb8f--sfj2g-eth0" Oct 31 00:48:19.217982 containerd[1459]: 2025-10-31 00:48:18.960 [INFO][4271] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d453cde58e5408e252811279b1f10e87eedfa5a560d1143e3a365d31bd2f6954" HandleID="k8s-pod-network.d453cde58e5408e252811279b1f10e87eedfa5a560d1143e3a365d31bd2f6954" Workload="localhost-k8s-calico--kube--controllers--78f5ccdb8f--sfj2g-eth0" Oct 31 00:48:19.217982 containerd[1459]: 2025-10-31 00:48:18.960 [INFO][4271] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d453cde58e5408e252811279b1f10e87eedfa5a560d1143e3a365d31bd2f6954" HandleID="k8s-pod-network.d453cde58e5408e252811279b1f10e87eedfa5a560d1143e3a365d31bd2f6954" Workload="localhost-k8s-calico--kube--controllers--78f5ccdb8f--sfj2g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000586a90), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-78f5ccdb8f-sfj2g", "timestamp":"2025-10-31 00:48:18.960194623 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 00:48:19.217982 containerd[1459]: 2025-10-31 00:48:18.960 [INFO][4271] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:48:19.217982 containerd[1459]: 2025-10-31 00:48:18.960 [INFO][4271] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 00:48:19.217982 containerd[1459]: 2025-10-31 00:48:18.961 [INFO][4271] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 00:48:19.217982 containerd[1459]: 2025-10-31 00:48:18.970 [INFO][4271] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d453cde58e5408e252811279b1f10e87eedfa5a560d1143e3a365d31bd2f6954" host="localhost" Oct 31 00:48:19.217982 containerd[1459]: 2025-10-31 00:48:19.035 [INFO][4271] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 00:48:19.217982 containerd[1459]: 2025-10-31 00:48:19.043 [INFO][4271] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 00:48:19.217982 containerd[1459]: 2025-10-31 00:48:19.048 [INFO][4271] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 00:48:19.217982 containerd[1459]: 2025-10-31 00:48:19.052 [INFO][4271] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 00:48:19.217982 containerd[1459]: 2025-10-31 00:48:19.052 [INFO][4271] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d453cde58e5408e252811279b1f10e87eedfa5a560d1143e3a365d31bd2f6954" host="localhost" Oct 31 00:48:19.217982 containerd[1459]: 2025-10-31 00:48:19.054 [INFO][4271] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d453cde58e5408e252811279b1f10e87eedfa5a560d1143e3a365d31bd2f6954 Oct 31 00:48:19.217982 containerd[1459]: 2025-10-31 00:48:19.103 [INFO][4271] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d453cde58e5408e252811279b1f10e87eedfa5a560d1143e3a365d31bd2f6954" host="localhost" Oct 31 00:48:19.217982 containerd[1459]: 2025-10-31 00:48:19.162 [INFO][4271] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.d453cde58e5408e252811279b1f10e87eedfa5a560d1143e3a365d31bd2f6954" host="localhost" Oct 31 00:48:19.217982 containerd[1459]: 2025-10-31 00:48:19.162 [INFO][4271] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.d453cde58e5408e252811279b1f10e87eedfa5a560d1143e3a365d31bd2f6954" host="localhost" Oct 31 00:48:19.217982 containerd[1459]: 2025-10-31 00:48:19.162 [INFO][4271] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 00:48:19.217982 containerd[1459]: 2025-10-31 00:48:19.162 [INFO][4271] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="d453cde58e5408e252811279b1f10e87eedfa5a560d1143e3a365d31bd2f6954" HandleID="k8s-pod-network.d453cde58e5408e252811279b1f10e87eedfa5a560d1143e3a365d31bd2f6954" Workload="localhost-k8s-calico--kube--controllers--78f5ccdb8f--sfj2g-eth0" Oct 31 00:48:19.219912 containerd[1459]: 2025-10-31 00:48:19.167 [INFO][4202] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d453cde58e5408e252811279b1f10e87eedfa5a560d1143e3a365d31bd2f6954" Namespace="calico-system" Pod="calico-kube-controllers-78f5ccdb8f-sfj2g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78f5ccdb8f--sfj2g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--78f5ccdb8f--sfj2g-eth0", GenerateName:"calico-kube-controllers-78f5ccdb8f-", Namespace:"calico-system", SelfLink:"", UID:"7d011812-0c54-49d2-a84d-25c0746a58a0", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 47, 48, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78f5ccdb8f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-78f5ccdb8f-sfj2g", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2e77ee5b79b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:48:19.219912 containerd[1459]: 2025-10-31 00:48:19.168 [INFO][4202] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="d453cde58e5408e252811279b1f10e87eedfa5a560d1143e3a365d31bd2f6954" Namespace="calico-system" Pod="calico-kube-controllers-78f5ccdb8f-sfj2g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78f5ccdb8f--sfj2g-eth0" Oct 31 00:48:19.219912 containerd[1459]: 2025-10-31 00:48:19.168 [INFO][4202] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2e77ee5b79b ContainerID="d453cde58e5408e252811279b1f10e87eedfa5a560d1143e3a365d31bd2f6954" Namespace="calico-system" Pod="calico-kube-controllers-78f5ccdb8f-sfj2g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78f5ccdb8f--sfj2g-eth0" Oct 31 00:48:19.219912 containerd[1459]: 2025-10-31 00:48:19.175 [INFO][4202] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d453cde58e5408e252811279b1f10e87eedfa5a560d1143e3a365d31bd2f6954" Namespace="calico-system" Pod="calico-kube-controllers-78f5ccdb8f-sfj2g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78f5ccdb8f--sfj2g-eth0" Oct 31 00:48:19.219912 containerd[1459]: 2025-10-31 00:48:19.175 [INFO][4202] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="d453cde58e5408e252811279b1f10e87eedfa5a560d1143e3a365d31bd2f6954" Namespace="calico-system" Pod="calico-kube-controllers-78f5ccdb8f-sfj2g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78f5ccdb8f--sfj2g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--78f5ccdb8f--sfj2g-eth0", GenerateName:"calico-kube-controllers-78f5ccdb8f-", Namespace:"calico-system", SelfLink:"", UID:"7d011812-0c54-49d2-a84d-25c0746a58a0", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 47, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78f5ccdb8f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d453cde58e5408e252811279b1f10e87eedfa5a560d1143e3a365d31bd2f6954", Pod:"calico-kube-controllers-78f5ccdb8f-sfj2g", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2e77ee5b79b", MAC:"46:10:d8:06:55:f7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:48:19.219912 containerd[1459]: 2025-10-31 00:48:19.210 [INFO][4202] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d453cde58e5408e252811279b1f10e87eedfa5a560d1143e3a365d31bd2f6954" Namespace="calico-system" Pod="calico-kube-controllers-78f5ccdb8f-sfj2g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78f5ccdb8f--sfj2g-eth0" Oct 31 00:48:19.315771 systemd-networkd[1388]: cali4ac12189205: Link UP Oct 31 00:48:19.317803 systemd-networkd[1388]: cali4ac12189205: Gained carrier Oct 31 00:48:19.325566 containerd[1459]: time="2025-10-31T00:48:19.325342986Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:48:19.327013 containerd[1459]: time="2025-10-31T00:48:19.326646642Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:48:19.327013 containerd[1459]: time="2025-10-31T00:48:19.326715131Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:48:19.327013 containerd[1459]: time="2025-10-31T00:48:19.326853430Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:48:19.354598 systemd[1]: Started cri-containerd-d453cde58e5408e252811279b1f10e87eedfa5a560d1143e3a365d31bd2f6954.scope - libcontainer container d453cde58e5408e252811279b1f10e87eedfa5a560d1143e3a365d31bd2f6954. 
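Both endpoint dumps above carry InterfaceName fields of the form cali2e77ee5b79b: the "cali" prefix plus eleven hex characters, which Calico derives by hashing the workload identity so the host-side veth name is deterministic and fits the kernel's 15-character interface-name limit. A sketch of that derivation follows, under the assumption that the hash is SHA-1 over "namespace.podname" as in Calico's CNI utilities; the logs only show the resulting names, so treat the hash input as an assumption.

// Sketch: derive a Calico-style host-side veth name ("cali" + 11 hex
// chars of a SHA-1 digest). The exact hash input is an assumption based
// on Calico's CNI plugin, not confirmed by these logs.
package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

func vethNameForWorkload(namespace, pod string) string {
	h := sha1.Sum([]byte(namespace + "." + pod))
	// 4-char prefix + 11 hex chars = 15 chars, the IFNAMSIZ limit.
	return "cali" + hex.EncodeToString(h[:])[:11]
}

func main() {
	fmt.Println(vethNameForWorkload("calico-system", "calico-kube-controllers-78f5ccdb8f-sfj2g"))
}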
Oct 31 00:48:19.370704 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 00:48:19.411927 containerd[1459]: time="2025-10-31T00:48:19.411853291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78f5ccdb8f-sfj2g,Uid:7d011812-0c54-49d2-a84d-25c0746a58a0,Namespace:calico-system,Attempt:1,} returns sandbox id \"d453cde58e5408e252811279b1f10e87eedfa5a560d1143e3a365d31bd2f6954\"" Oct 31 00:48:19.414565 containerd[1459]: time="2025-10-31T00:48:19.414539012Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 31 00:48:19.484517 containerd[1459]: 2025-10-31 00:48:18.916 [INFO][4218] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--47d96-eth0 goldmane-666569f655- calico-system a6a2171a-de8b-4154-86b8-cb6aefca8e5b 1002 0 2025-10-31 00:47:46 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-47d96 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali4ac12189205 [] [] }} ContainerID="3a3a3521f4f921e9f773d836ef01034ff392c4e4693af30d95823e2ae7c2aae5" Namespace="calico-system" Pod="goldmane-666569f655-47d96" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--47d96-" Oct 31 00:48:19.484517 containerd[1459]: 2025-10-31 00:48:18.916 [INFO][4218] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3a3a3521f4f921e9f773d836ef01034ff392c4e4693af30d95823e2ae7c2aae5" Namespace="calico-system" Pod="goldmane-666569f655-47d96" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--47d96-eth0" Oct 31 00:48:19.484517 containerd[1459]: 2025-10-31 00:48:18.979 [INFO][4277] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3a3a3521f4f921e9f773d836ef01034ff392c4e4693af30d95823e2ae7c2aae5" HandleID="k8s-pod-network.3a3a3521f4f921e9f773d836ef01034ff392c4e4693af30d95823e2ae7c2aae5" Workload="localhost-k8s-goldmane--666569f655--47d96-eth0" Oct 31 00:48:19.484517 containerd[1459]: 2025-10-31 00:48:18.979 [INFO][4277] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3a3a3521f4f921e9f773d836ef01034ff392c4e4693af30d95823e2ae7c2aae5" HandleID="k8s-pod-network.3a3a3521f4f921e9f773d836ef01034ff392c4e4693af30d95823e2ae7c2aae5" Workload="localhost-k8s-goldmane--666569f655--47d96-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f630), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-47d96", "timestamp":"2025-10-31 00:48:18.979267958 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 00:48:19.484517 containerd[1459]: 2025-10-31 00:48:18.979 [INFO][4277] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:48:19.484517 containerd[1459]: 2025-10-31 00:48:19.163 [INFO][4277] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 00:48:19.484517 containerd[1459]: 2025-10-31 00:48:19.163 [INFO][4277] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 00:48:19.484517 containerd[1459]: 2025-10-31 00:48:19.171 [INFO][4277] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3a3a3521f4f921e9f773d836ef01034ff392c4e4693af30d95823e2ae7c2aae5" host="localhost" Oct 31 00:48:19.484517 containerd[1459]: 2025-10-31 00:48:19.178 [INFO][4277] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 00:48:19.484517 containerd[1459]: 2025-10-31 00:48:19.183 [INFO][4277] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 00:48:19.484517 containerd[1459]: 2025-10-31 00:48:19.210 [INFO][4277] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 00:48:19.484517 containerd[1459]: 2025-10-31 00:48:19.214 [INFO][4277] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 00:48:19.484517 containerd[1459]: 2025-10-31 00:48:19.214 [INFO][4277] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3a3a3521f4f921e9f773d836ef01034ff392c4e4693af30d95823e2ae7c2aae5" host="localhost" Oct 31 00:48:19.484517 containerd[1459]: 2025-10-31 00:48:19.216 [INFO][4277] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3a3a3521f4f921e9f773d836ef01034ff392c4e4693af30d95823e2ae7c2aae5 Oct 31 00:48:19.484517 containerd[1459]: 2025-10-31 00:48:19.275 [INFO][4277] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3a3a3521f4f921e9f773d836ef01034ff392c4e4693af30d95823e2ae7c2aae5" host="localhost" Oct 31 00:48:19.484517 containerd[1459]: 2025-10-31 00:48:19.296 [INFO][4277] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.3a3a3521f4f921e9f773d836ef01034ff392c4e4693af30d95823e2ae7c2aae5" host="localhost" Oct 31 00:48:19.484517 containerd[1459]: 2025-10-31 00:48:19.297 [INFO][4277] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.3a3a3521f4f921e9f773d836ef01034ff392c4e4693af30d95823e2ae7c2aae5" host="localhost" Oct 31 00:48:19.484517 containerd[1459]: 2025-10-31 00:48:19.297 [INFO][4277] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
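Goldmane's pass shows how the host-wide lock serializes concurrent CNI ADDs: it asked for the lock at 00:48:18.979 and acquired it only at 00:48:19.163, once the kube-controllers assignment released it, and then received the next free ordinal in the same affine block, 192.168.88.130 after 192.168.88.129. Below is a toy sketch of that first-free-in-block selection; real Calico reads allocation state from the block document in the datastore, so the used map is purely illustrative (claims here start at .129, which suggests .128 is already taken on the node, typically by a tunnel device).

// Toy sketch: pick the first unallocated address in a /26 block, the
// way the claims above come out sequentially (.129, .130, .131, .132).
// Real Calico consults the block document in its datastore; this map
// stands in for that state and is purely illustrative.
package main

import (
	"fmt"
	"net/netip"
)

func firstFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !used[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26")
	used := map[netip.Addr]bool{
		netip.MustParseAddr("192.168.88.128"): true, // assumed already in use on the node
		netip.MustParseAddr("192.168.88.129"): true, // calico-kube-controllers pod, claimed above
	}
	a, _ := firstFree(block, used)
	fmt.Println(a) // 192.168.88.130, the address goldmane receives above
}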
Oct 31 00:48:19.484517 containerd[1459]: 2025-10-31 00:48:19.297 [INFO][4277] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="3a3a3521f4f921e9f773d836ef01034ff392c4e4693af30d95823e2ae7c2aae5" HandleID="k8s-pod-network.3a3a3521f4f921e9f773d836ef01034ff392c4e4693af30d95823e2ae7c2aae5" Workload="localhost-k8s-goldmane--666569f655--47d96-eth0" Oct 31 00:48:19.486070 containerd[1459]: 2025-10-31 00:48:19.306 [INFO][4218] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3a3a3521f4f921e9f773d836ef01034ff392c4e4693af30d95823e2ae7c2aae5" Namespace="calico-system" Pod="goldmane-666569f655-47d96" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--47d96-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--47d96-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"a6a2171a-de8b-4154-86b8-cb6aefca8e5b", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 47, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-47d96", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4ac12189205", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:48:19.486070 containerd[1459]: 2025-10-31 00:48:19.307 [INFO][4218] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="3a3a3521f4f921e9f773d836ef01034ff392c4e4693af30d95823e2ae7c2aae5" Namespace="calico-system" Pod="goldmane-666569f655-47d96" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--47d96-eth0" Oct 31 00:48:19.486070 containerd[1459]: 2025-10-31 00:48:19.307 [INFO][4218] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4ac12189205 ContainerID="3a3a3521f4f921e9f773d836ef01034ff392c4e4693af30d95823e2ae7c2aae5" Namespace="calico-system" Pod="goldmane-666569f655-47d96" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--47d96-eth0" Oct 31 00:48:19.486070 containerd[1459]: 2025-10-31 00:48:19.318 [INFO][4218] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3a3a3521f4f921e9f773d836ef01034ff392c4e4693af30d95823e2ae7c2aae5" Namespace="calico-system" Pod="goldmane-666569f655-47d96" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--47d96-eth0" Oct 31 00:48:19.486070 containerd[1459]: 2025-10-31 00:48:19.318 [INFO][4218] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3a3a3521f4f921e9f773d836ef01034ff392c4e4693af30d95823e2ae7c2aae5" Namespace="calico-system" Pod="goldmane-666569f655-47d96" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--47d96-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--47d96-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"a6a2171a-de8b-4154-86b8-cb6aefca8e5b", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 47, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3a3a3521f4f921e9f773d836ef01034ff392c4e4693af30d95823e2ae7c2aae5", Pod:"goldmane-666569f655-47d96", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4ac12189205", MAC:"32:1e:12:dc:b7:10", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:48:19.486070 containerd[1459]: 2025-10-31 00:48:19.480 [INFO][4218] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3a3a3521f4f921e9f773d836ef01034ff392c4e4693af30d95823e2ae7c2aae5" Namespace="calico-system" Pod="goldmane-666569f655-47d96" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--47d96-eth0" Oct 31 00:48:19.530834 systemd[1]: Started sshd@9-10.0.0.137:22-10.0.0.1:59588.service - OpenSSH per-connection server daemon (10.0.0.1:59588). Oct 31 00:48:19.560940 containerd[1459]: time="2025-10-31T00:48:19.559201122Z" level=info msg="StopPodSandbox for \"b876a09b7cea07fc5b5f0fb2dd5212854240663bab0c4ee0ca39a509b8f0cd1d\"" Oct 31 00:48:19.561222 containerd[1459]: time="2025-10-31T00:48:19.561193141Z" level=info msg="StopPodSandbox for \"3e2538531b1147fcaeb5e51daf1260061d97c1c054dec48b928901c1215171f0\"" Oct 31 00:48:19.561749 containerd[1459]: time="2025-10-31T00:48:19.561620273Z" level=info msg="StopPodSandbox for \"1e2381349217450f97fa9b9526a3392c8da38ebba799a9c84d3c5e2334381dfc\"" Oct 31 00:48:19.614466 containerd[1459]: time="2025-10-31T00:48:19.614181334Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:48:19.614466 containerd[1459]: time="2025-10-31T00:48:19.614310907Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:48:19.614466 containerd[1459]: time="2025-10-31T00:48:19.614328410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:48:19.614700 containerd[1459]: time="2025-10-31T00:48:19.614476157Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:48:19.641137 sshd[4391]: Accepted publickey for core from 10.0.0.1 port 59588 ssh2: RSA SHA256:cVXqL/AcZ9wouFvGoeGKDlBlR+czTkkJFN8I4b76Y5g Oct 31 00:48:19.645797 sshd[4391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:48:19.660312 systemd-networkd[1388]: calif72c24a3ed4: Link UP Oct 31 00:48:19.670986 systemd-networkd[1388]: calif72c24a3ed4: Gained carrier Oct 31 00:48:19.694637 systemd[1]: Started cri-containerd-3a3a3521f4f921e9f773d836ef01034ff392c4e4693af30d95823e2ae7c2aae5.scope - libcontainer container 3a3a3521f4f921e9f773d836ef01034ff392c4e4693af30d95823e2ae7c2aae5. Oct 31 00:48:19.699554 systemd-logind[1449]: New session 10 of user core. Oct 31 00:48:19.699685 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 31 00:48:19.718495 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 00:48:19.775535 containerd[1459]: time="2025-10-31T00:48:19.773965649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-47d96,Uid:a6a2171a-de8b-4154-86b8-cb6aefca8e5b,Namespace:calico-system,Attempt:1,} returns sandbox id \"3a3a3521f4f921e9f773d836ef01034ff392c4e4693af30d95823e2ae7c2aae5\"" Oct 31 00:48:19.961720 containerd[1459]: time="2025-10-31T00:48:19.961651177Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:48:19.987656 sshd[4391]: pam_unix(sshd:session): session closed for user core Oct 31 00:48:19.992334 systemd[1]: sshd@9-10.0.0.137:22-10.0.0.1:59588.service: Deactivated successfully. Oct 31 00:48:19.997283 systemd[1]: session-10.scope: Deactivated successfully. Oct 31 00:48:19.998767 containerd[1459]: 2025-10-31 00:48:19.298 [INFO][4290] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--6d455ff89f--sljxb-eth0 whisker-6d455ff89f- calico-system bc7de0b5-fad9-4849-950f-64958f0873ad 1018 0 2025-10-31 00:48:17 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6d455ff89f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-6d455ff89f-sljxb eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calif72c24a3ed4 [] [] }} ContainerID="65564874ee285cfc301d2eaff9459d6b679e84a83a98745c19e99440b23bd31f" Namespace="calico-system" Pod="whisker-6d455ff89f-sljxb" WorkloadEndpoint="localhost-k8s-whisker--6d455ff89f--sljxb-" Oct 31 00:48:19.998767 containerd[1459]: 2025-10-31 00:48:19.299 [INFO][4290] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="65564874ee285cfc301d2eaff9459d6b679e84a83a98745c19e99440b23bd31f" Namespace="calico-system" Pod="whisker-6d455ff89f-sljxb" WorkloadEndpoint="localhost-k8s-whisker--6d455ff89f--sljxb-eth0" Oct 31 00:48:19.998767 containerd[1459]: 2025-10-31 00:48:19.339 [INFO][4329] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="65564874ee285cfc301d2eaff9459d6b679e84a83a98745c19e99440b23bd31f" HandleID="k8s-pod-network.65564874ee285cfc301d2eaff9459d6b679e84a83a98745c19e99440b23bd31f" Workload="localhost-k8s-whisker--6d455ff89f--sljxb-eth0" Oct 31 00:48:19.998767 containerd[1459]: 2025-10-31 00:48:19.340 [INFO][4329] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="65564874ee285cfc301d2eaff9459d6b679e84a83a98745c19e99440b23bd31f" 
HandleID="k8s-pod-network.65564874ee285cfc301d2eaff9459d6b679e84a83a98745c19e99440b23bd31f" Workload="localhost-k8s-whisker--6d455ff89f--sljxb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ac820), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-6d455ff89f-sljxb", "timestamp":"2025-10-31 00:48:19.339672332 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 00:48:19.998767 containerd[1459]: 2025-10-31 00:48:19.340 [INFO][4329] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:48:19.998767 containerd[1459]: 2025-10-31 00:48:19.340 [INFO][4329] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:48:19.998767 containerd[1459]: 2025-10-31 00:48:19.340 [INFO][4329] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 00:48:19.998767 containerd[1459]: 2025-10-31 00:48:19.477 [INFO][4329] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.65564874ee285cfc301d2eaff9459d6b679e84a83a98745c19e99440b23bd31f" host="localhost" Oct 31 00:48:19.998767 containerd[1459]: 2025-10-31 00:48:19.540 [INFO][4329] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 00:48:19.998767 containerd[1459]: 2025-10-31 00:48:19.547 [INFO][4329] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 00:48:19.998767 containerd[1459]: 2025-10-31 00:48:19.549 [INFO][4329] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 00:48:19.998767 containerd[1459]: 2025-10-31 00:48:19.551 [INFO][4329] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 00:48:19.998767 containerd[1459]: 2025-10-31 00:48:19.552 [INFO][4329] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.65564874ee285cfc301d2eaff9459d6b679e84a83a98745c19e99440b23bd31f" host="localhost" Oct 31 00:48:19.998767 containerd[1459]: 2025-10-31 00:48:19.556 [INFO][4329] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.65564874ee285cfc301d2eaff9459d6b679e84a83a98745c19e99440b23bd31f Oct 31 00:48:19.998767 containerd[1459]: 2025-10-31 00:48:19.579 [INFO][4329] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.65564874ee285cfc301d2eaff9459d6b679e84a83a98745c19e99440b23bd31f" host="localhost" Oct 31 00:48:19.998767 containerd[1459]: 2025-10-31 00:48:19.599 [INFO][4329] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.65564874ee285cfc301d2eaff9459d6b679e84a83a98745c19e99440b23bd31f" host="localhost" Oct 31 00:48:19.998767 containerd[1459]: 2025-10-31 00:48:19.599 [INFO][4329] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.65564874ee285cfc301d2eaff9459d6b679e84a83a98745c19e99440b23bd31f" host="localhost" Oct 31 00:48:19.998767 containerd[1459]: 2025-10-31 00:48:19.599 [INFO][4329] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 00:48:19.998767 containerd[1459]: 2025-10-31 00:48:19.599 [INFO][4329] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="65564874ee285cfc301d2eaff9459d6b679e84a83a98745c19e99440b23bd31f" HandleID="k8s-pod-network.65564874ee285cfc301d2eaff9459d6b679e84a83a98745c19e99440b23bd31f" Workload="localhost-k8s-whisker--6d455ff89f--sljxb-eth0" Oct 31 00:48:19.999533 containerd[1459]: 2025-10-31 00:48:19.627 [INFO][4290] cni-plugin/k8s.go 418: Populated endpoint ContainerID="65564874ee285cfc301d2eaff9459d6b679e84a83a98745c19e99440b23bd31f" Namespace="calico-system" Pod="whisker-6d455ff89f-sljxb" WorkloadEndpoint="localhost-k8s-whisker--6d455ff89f--sljxb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6d455ff89f--sljxb-eth0", GenerateName:"whisker-6d455ff89f-", Namespace:"calico-system", SelfLink:"", UID:"bc7de0b5-fad9-4849-950f-64958f0873ad", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 48, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6d455ff89f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-6d455ff89f-sljxb", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif72c24a3ed4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:48:19.999533 containerd[1459]: 2025-10-31 00:48:19.627 [INFO][4290] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="65564874ee285cfc301d2eaff9459d6b679e84a83a98745c19e99440b23bd31f" Namespace="calico-system" Pod="whisker-6d455ff89f-sljxb" WorkloadEndpoint="localhost-k8s-whisker--6d455ff89f--sljxb-eth0" Oct 31 00:48:19.999533 containerd[1459]: 2025-10-31 00:48:19.627 [INFO][4290] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif72c24a3ed4 ContainerID="65564874ee285cfc301d2eaff9459d6b679e84a83a98745c19e99440b23bd31f" Namespace="calico-system" Pod="whisker-6d455ff89f-sljxb" WorkloadEndpoint="localhost-k8s-whisker--6d455ff89f--sljxb-eth0" Oct 31 00:48:19.999533 containerd[1459]: 2025-10-31 00:48:19.674 [INFO][4290] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="65564874ee285cfc301d2eaff9459d6b679e84a83a98745c19e99440b23bd31f" Namespace="calico-system" Pod="whisker-6d455ff89f-sljxb" WorkloadEndpoint="localhost-k8s-whisker--6d455ff89f--sljxb-eth0" Oct 31 00:48:19.999533 containerd[1459]: 2025-10-31 00:48:19.676 [INFO][4290] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="65564874ee285cfc301d2eaff9459d6b679e84a83a98745c19e99440b23bd31f" Namespace="calico-system" Pod="whisker-6d455ff89f-sljxb" WorkloadEndpoint="localhost-k8s-whisker--6d455ff89f--sljxb-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6d455ff89f--sljxb-eth0", GenerateName:"whisker-6d455ff89f-", Namespace:"calico-system", SelfLink:"", UID:"bc7de0b5-fad9-4849-950f-64958f0873ad", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 48, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6d455ff89f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"65564874ee285cfc301d2eaff9459d6b679e84a83a98745c19e99440b23bd31f", Pod:"whisker-6d455ff89f-sljxb", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif72c24a3ed4", MAC:"36:92:e1:25:99:6b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:48:19.999533 containerd[1459]: 2025-10-31 00:48:19.976 [INFO][4290] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="65564874ee285cfc301d2eaff9459d6b679e84a83a98745c19e99440b23bd31f" Namespace="calico-system" Pod="whisker-6d455ff89f-sljxb" WorkloadEndpoint="localhost-k8s-whisker--6d455ff89f--sljxb-eth0" Oct 31 00:48:20.000623 systemd-logind[1449]: Session 10 logged out. Waiting for processes to exit. Oct 31 00:48:20.004416 systemd-logind[1449]: Removed session 10. 
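Each pod above gets its WorkloadEndpoint written twice: the "Populated endpoint" record carries an empty ContainerID and MAC, and only after the veth pair exists does the plugin write the endpoint again with the container ID and the interface MAC filled in (46:10:d8:06:55:f7, 32:1e:12:dc:b7:10, 36:92:e1:25:99:6b). All three MACs have the locally-administered bit set and the multicast bit clear, the standard shape for software-generated addresses. The sketch below is the generic recipe for producing such an address; it is not lifted from Calico's source, which simply records whatever MAC the kernel assigned to the container-side interface.

// Sketch: generate a random unicast, locally-administered MAC address
// (second hex digit of the first octet is 2, 6, A, or E). The MACs in
// the endpoint dumps above all have this shape; Calico itself records
// the kernel-assigned MAC rather than generating one this way.
package main

import (
	"crypto/rand"
	"fmt"
	"net"
)

func randomMAC() (net.HardwareAddr, error) {
	buf := make([]byte, 6)
	if _, err := rand.Read(buf); err != nil {
		return nil, err
	}
	buf[0] = (buf[0] | 0x02) &^ 0x01 // set locally-administered bit, clear multicast bit
	return net.HardwareAddr(buf), nil
}

func main() {
	mac, err := randomMAC()
	if err != nil {
		panic(err)
	}
	fmt.Println(mac)
}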
Oct 31 00:48:20.019220 containerd[1459]: time="2025-10-31T00:48:19.995207083Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 31 00:48:20.019716 containerd[1459]: time="2025-10-31T00:48:19.995221430Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 31 00:48:20.019798 kubelet[2502]: E1031 00:48:20.019560 2502 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 00:48:20.019798 kubelet[2502]: E1031 00:48:20.019633 2502 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 00:48:20.020954 containerd[1459]: time="2025-10-31T00:48:20.020339346Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 31 00:48:20.020979 kubelet[2502]: E1031 00:48:20.019935 2502 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ds9pg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-78f5ccdb8f-sfj2g_calico-system(7d011812-0c54-49d2-a84d-25c0746a58a0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 31 00:48:20.022828 kubelet[2502]: E1031 00:48:20.021716 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78f5ccdb8f-sfj2g" podUID="7d011812-0c54-49d2-a84d-25c0746a58a0" Oct 31 00:48:20.074725 containerd[1459]: time="2025-10-31T00:48:20.074435913Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:48:20.074725 containerd[1459]: time="2025-10-31T00:48:20.074562420Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:48:20.074725 containerd[1459]: time="2025-10-31T00:48:20.074592567Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:48:20.074969 containerd[1459]: time="2025-10-31T00:48:20.074746436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:48:20.084698 systemd-networkd[1388]: cali59f19d12fea: Link UP Oct 31 00:48:20.086069 systemd-networkd[1388]: cali59f19d12fea: Gained carrier Oct 31 00:48:20.115859 containerd[1459]: 2025-10-31 00:48:19.669 [INFO][4429] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3e2538531b1147fcaeb5e51daf1260061d97c1c054dec48b928901c1215171f0" Oct 31 00:48:20.115859 containerd[1459]: 2025-10-31 00:48:19.671 [INFO][4429] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3e2538531b1147fcaeb5e51daf1260061d97c1c054dec48b928901c1215171f0" iface="eth0" netns="/var/run/netns/cni-f7677e27-d8b7-01e5-e89b-c6b4e6bc4437" Oct 31 00:48:20.115859 containerd[1459]: 2025-10-31 00:48:19.671 [INFO][4429] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="3e2538531b1147fcaeb5e51daf1260061d97c1c054dec48b928901c1215171f0" iface="eth0" netns="/var/run/netns/cni-f7677e27-d8b7-01e5-e89b-c6b4e6bc4437" Oct 31 00:48:20.115859 containerd[1459]: 2025-10-31 00:48:19.673 [INFO][4429] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3e2538531b1147fcaeb5e51daf1260061d97c1c054dec48b928901c1215171f0" iface="eth0" netns="/var/run/netns/cni-f7677e27-d8b7-01e5-e89b-c6b4e6bc4437" Oct 31 00:48:20.115859 containerd[1459]: 2025-10-31 00:48:19.673 [INFO][4429] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3e2538531b1147fcaeb5e51daf1260061d97c1c054dec48b928901c1215171f0" Oct 31 00:48:20.115859 containerd[1459]: 2025-10-31 00:48:19.673 [INFO][4429] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3e2538531b1147fcaeb5e51daf1260061d97c1c054dec48b928901c1215171f0" Oct 31 00:48:20.115859 containerd[1459]: 2025-10-31 00:48:19.716 [INFO][4485] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3e2538531b1147fcaeb5e51daf1260061d97c1c054dec48b928901c1215171f0" HandleID="k8s-pod-network.3e2538531b1147fcaeb5e51daf1260061d97c1c054dec48b928901c1215171f0" Workload="localhost-k8s-calico--apiserver--7cf7fddbf6--qfkg6-eth0" Oct 31 00:48:20.115859 containerd[1459]: 2025-10-31 00:48:19.716 [INFO][4485] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:48:20.115859 containerd[1459]: 2025-10-31 00:48:20.080 [INFO][4485] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:48:20.115859 containerd[1459]: 2025-10-31 00:48:20.096 [WARNING][4485] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="3e2538531b1147fcaeb5e51daf1260061d97c1c054dec48b928901c1215171f0" HandleID="k8s-pod-network.3e2538531b1147fcaeb5e51daf1260061d97c1c054dec48b928901c1215171f0" Workload="localhost-k8s-calico--apiserver--7cf7fddbf6--qfkg6-eth0" Oct 31 00:48:20.115859 containerd[1459]: 2025-10-31 00:48:20.096 [INFO][4485] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3e2538531b1147fcaeb5e51daf1260061d97c1c054dec48b928901c1215171f0" HandleID="k8s-pod-network.3e2538531b1147fcaeb5e51daf1260061d97c1c054dec48b928901c1215171f0" Workload="localhost-k8s-calico--apiserver--7cf7fddbf6--qfkg6-eth0" Oct 31 00:48:20.115859 containerd[1459]: 2025-10-31 00:48:20.104 [INFO][4485] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:48:20.115859 containerd[1459]: 2025-10-31 00:48:20.111 [INFO][4429] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="3e2538531b1147fcaeb5e51daf1260061d97c1c054dec48b928901c1215171f0" Oct 31 00:48:20.122259 containerd[1459]: time="2025-10-31T00:48:20.122212458Z" level=info msg="TearDown network for sandbox \"3e2538531b1147fcaeb5e51daf1260061d97c1c054dec48b928901c1215171f0\" successfully" Oct 31 00:48:20.122481 containerd[1459]: time="2025-10-31T00:48:20.122374071Z" level=info msg="StopPodSandbox for \"3e2538531b1147fcaeb5e51daf1260061d97c1c054dec48b928901c1215171f0\" returns successfully" Oct 31 00:48:20.123354 containerd[1459]: time="2025-10-31T00:48:20.123334152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cf7fddbf6-qfkg6,Uid:4a3f6669-a62a-42ec-9a82-372bbb7049fb,Namespace:calico-apiserver,Attempt:1,}" Oct 31 00:48:20.127988 containerd[1459]: 2025-10-31 00:48:19.529 [INFO][4370] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7cf7fddbf6--nr7tc-eth0 calico-apiserver-7cf7fddbf6- calico-apiserver f0ebaf56-bc9f-4f20-80ce-c5c77074a573 1021 0 2025-10-31 00:47:44 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7cf7fddbf6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7cf7fddbf6-nr7tc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali59f19d12fea [] [] }} ContainerID="b2dd691cffe744167b4e5d7d916f861f1497c1d247387ab91b81ff4b892c8356" Namespace="calico-apiserver" Pod="calico-apiserver-7cf7fddbf6-nr7tc" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cf7fddbf6--nr7tc-" Oct 31 00:48:20.127988 containerd[1459]: 2025-10-31 00:48:19.530 [INFO][4370] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b2dd691cffe744167b4e5d7d916f861f1497c1d247387ab91b81ff4b892c8356" Namespace="calico-apiserver" Pod="calico-apiserver-7cf7fddbf6-nr7tc" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cf7fddbf6--nr7tc-eth0" Oct 31 00:48:20.127988 containerd[1459]: 2025-10-31 00:48:19.689 [INFO][4394] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b2dd691cffe744167b4e5d7d916f861f1497c1d247387ab91b81ff4b892c8356" HandleID="k8s-pod-network.b2dd691cffe744167b4e5d7d916f861f1497c1d247387ab91b81ff4b892c8356" Workload="localhost-k8s-calico--apiserver--7cf7fddbf6--nr7tc-eth0" Oct 31 00:48:20.127988 containerd[1459]: 2025-10-31 00:48:19.690 [INFO][4394] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b2dd691cffe744167b4e5d7d916f861f1497c1d247387ab91b81ff4b892c8356" HandleID="k8s-pod-network.b2dd691cffe744167b4e5d7d916f861f1497c1d247387ab91b81ff4b892c8356" Workload="localhost-k8s-calico--apiserver--7cf7fddbf6--nr7tc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005169d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7cf7fddbf6-nr7tc", "timestamp":"2025-10-31 00:48:19.689269549 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 00:48:20.127988 containerd[1459]: 2025-10-31 00:48:19.690 [INFO][4394] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Oct 31 00:48:20.127988 containerd[1459]: 2025-10-31 00:48:19.690 [INFO][4394] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:48:20.127988 containerd[1459]: 2025-10-31 00:48:19.690 [INFO][4394] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 00:48:20.127988 containerd[1459]: 2025-10-31 00:48:19.986 [INFO][4394] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b2dd691cffe744167b4e5d7d916f861f1497c1d247387ab91b81ff4b892c8356" host="localhost" Oct 31 00:48:20.127988 containerd[1459]: 2025-10-31 00:48:20.011 [INFO][4394] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 00:48:20.127988 containerd[1459]: 2025-10-31 00:48:20.018 [INFO][4394] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 00:48:20.127988 containerd[1459]: 2025-10-31 00:48:20.023 [INFO][4394] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 00:48:20.127988 containerd[1459]: 2025-10-31 00:48:20.058 [INFO][4394] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 00:48:20.127988 containerd[1459]: 2025-10-31 00:48:20.058 [INFO][4394] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b2dd691cffe744167b4e5d7d916f861f1497c1d247387ab91b81ff4b892c8356" host="localhost" Oct 31 00:48:20.127988 containerd[1459]: 2025-10-31 00:48:20.060 [INFO][4394] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b2dd691cffe744167b4e5d7d916f861f1497c1d247387ab91b81ff4b892c8356 Oct 31 00:48:20.127988 containerd[1459]: 2025-10-31 00:48:20.068 [INFO][4394] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b2dd691cffe744167b4e5d7d916f861f1497c1d247387ab91b81ff4b892c8356" host="localhost" Oct 31 00:48:20.127988 containerd[1459]: 2025-10-31 00:48:20.078 [INFO][4394] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.b2dd691cffe744167b4e5d7d916f861f1497c1d247387ab91b81ff4b892c8356" host="localhost" Oct 31 00:48:20.127988 containerd[1459]: 2025-10-31 00:48:20.079 [INFO][4394] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.b2dd691cffe744167b4e5d7d916f861f1497c1d247387ab91b81ff4b892c8356" host="localhost" Oct 31 00:48:20.127988 containerd[1459]: 2025-10-31 00:48:20.079 [INFO][4394] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 00:48:20.127988 containerd[1459]: 2025-10-31 00:48:20.079 [INFO][4394] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="b2dd691cffe744167b4e5d7d916f861f1497c1d247387ab91b81ff4b892c8356" HandleID="k8s-pod-network.b2dd691cffe744167b4e5d7d916f861f1497c1d247387ab91b81ff4b892c8356" Workload="localhost-k8s-calico--apiserver--7cf7fddbf6--nr7tc-eth0" Oct 31 00:48:20.128681 containerd[1459]: 2025-10-31 00:48:20.082 [INFO][4370] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b2dd691cffe744167b4e5d7d916f861f1497c1d247387ab91b81ff4b892c8356" Namespace="calico-apiserver" Pod="calico-apiserver-7cf7fddbf6-nr7tc" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cf7fddbf6--nr7tc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cf7fddbf6--nr7tc-eth0", GenerateName:"calico-apiserver-7cf7fddbf6-", Namespace:"calico-apiserver", SelfLink:"", UID:"f0ebaf56-bc9f-4f20-80ce-c5c77074a573", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 47, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cf7fddbf6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7cf7fddbf6-nr7tc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali59f19d12fea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:48:20.128681 containerd[1459]: 2025-10-31 00:48:20.082 [INFO][4370] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="b2dd691cffe744167b4e5d7d916f861f1497c1d247387ab91b81ff4b892c8356" Namespace="calico-apiserver" Pod="calico-apiserver-7cf7fddbf6-nr7tc" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cf7fddbf6--nr7tc-eth0" Oct 31 00:48:20.128681 containerd[1459]: 2025-10-31 00:48:20.082 [INFO][4370] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali59f19d12fea ContainerID="b2dd691cffe744167b4e5d7d916f861f1497c1d247387ab91b81ff4b892c8356" Namespace="calico-apiserver" Pod="calico-apiserver-7cf7fddbf6-nr7tc" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cf7fddbf6--nr7tc-eth0" Oct 31 00:48:20.128681 containerd[1459]: 2025-10-31 00:48:20.085 [INFO][4370] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b2dd691cffe744167b4e5d7d916f861f1497c1d247387ab91b81ff4b892c8356" Namespace="calico-apiserver" Pod="calico-apiserver-7cf7fddbf6-nr7tc" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cf7fddbf6--nr7tc-eth0" Oct 31 00:48:20.128681 containerd[1459]: 2025-10-31 00:48:20.085 [INFO][4370] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="b2dd691cffe744167b4e5d7d916f861f1497c1d247387ab91b81ff4b892c8356" Namespace="calico-apiserver" Pod="calico-apiserver-7cf7fddbf6-nr7tc" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cf7fddbf6--nr7tc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cf7fddbf6--nr7tc-eth0", GenerateName:"calico-apiserver-7cf7fddbf6-", Namespace:"calico-apiserver", SelfLink:"", UID:"f0ebaf56-bc9f-4f20-80ce-c5c77074a573", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 47, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cf7fddbf6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b2dd691cffe744167b4e5d7d916f861f1497c1d247387ab91b81ff4b892c8356", Pod:"calico-apiserver-7cf7fddbf6-nr7tc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali59f19d12fea", MAC:"fe:f2:7d:da:f7:12", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:48:20.128681 containerd[1459]: 2025-10-31 00:48:20.118 [INFO][4370] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b2dd691cffe744167b4e5d7d916f861f1497c1d247387ab91b81ff4b892c8356" Namespace="calico-apiserver" Pod="calico-apiserver-7cf7fddbf6-nr7tc" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cf7fddbf6--nr7tc-eth0" Oct 31 00:48:20.131668 containerd[1459]: 2025-10-31 00:48:19.948 [INFO][4443] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b876a09b7cea07fc5b5f0fb2dd5212854240663bab0c4ee0ca39a509b8f0cd1d" Oct 31 00:48:20.131668 containerd[1459]: 2025-10-31 00:48:19.949 [INFO][4443] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b876a09b7cea07fc5b5f0fb2dd5212854240663bab0c4ee0ca39a509b8f0cd1d" iface="eth0" netns="/var/run/netns/cni-045a9151-fe60-ecb5-96a1-202b4ea51a61" Oct 31 00:48:20.131668 containerd[1459]: 2025-10-31 00:48:19.949 [INFO][4443] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b876a09b7cea07fc5b5f0fb2dd5212854240663bab0c4ee0ca39a509b8f0cd1d" iface="eth0" netns="/var/run/netns/cni-045a9151-fe60-ecb5-96a1-202b4ea51a61" Oct 31 00:48:20.131668 containerd[1459]: 2025-10-31 00:48:19.949 [INFO][4443] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b876a09b7cea07fc5b5f0fb2dd5212854240663bab0c4ee0ca39a509b8f0cd1d" iface="eth0" netns="/var/run/netns/cni-045a9151-fe60-ecb5-96a1-202b4ea51a61" Oct 31 00:48:20.131668 containerd[1459]: 2025-10-31 00:48:19.949 [INFO][4443] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b876a09b7cea07fc5b5f0fb2dd5212854240663bab0c4ee0ca39a509b8f0cd1d" Oct 31 00:48:20.131668 containerd[1459]: 2025-10-31 00:48:19.949 [INFO][4443] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b876a09b7cea07fc5b5f0fb2dd5212854240663bab0c4ee0ca39a509b8f0cd1d" Oct 31 00:48:20.131668 containerd[1459]: 2025-10-31 00:48:20.002 [INFO][4531] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b876a09b7cea07fc5b5f0fb2dd5212854240663bab0c4ee0ca39a509b8f0cd1d" HandleID="k8s-pod-network.b876a09b7cea07fc5b5f0fb2dd5212854240663bab0c4ee0ca39a509b8f0cd1d" Workload="localhost-k8s-coredns--674b8bbfcf--8zcwq-eth0" Oct 31 00:48:20.131668 containerd[1459]: 2025-10-31 00:48:20.003 [INFO][4531] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:48:20.131668 containerd[1459]: 2025-10-31 00:48:20.104 [INFO][4531] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:48:20.131668 containerd[1459]: 2025-10-31 00:48:20.111 [WARNING][4531] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="b876a09b7cea07fc5b5f0fb2dd5212854240663bab0c4ee0ca39a509b8f0cd1d" HandleID="k8s-pod-network.b876a09b7cea07fc5b5f0fb2dd5212854240663bab0c4ee0ca39a509b8f0cd1d" Workload="localhost-k8s-coredns--674b8bbfcf--8zcwq-eth0" Oct 31 00:48:20.131668 containerd[1459]: 2025-10-31 00:48:20.111 [INFO][4531] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b876a09b7cea07fc5b5f0fb2dd5212854240663bab0c4ee0ca39a509b8f0cd1d" HandleID="k8s-pod-network.b876a09b7cea07fc5b5f0fb2dd5212854240663bab0c4ee0ca39a509b8f0cd1d" Workload="localhost-k8s-coredns--674b8bbfcf--8zcwq-eth0" Oct 31 00:48:20.131668 containerd[1459]: 2025-10-31 00:48:20.120 [INFO][4531] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:48:20.131668 containerd[1459]: 2025-10-31 00:48:20.126 [INFO][4443] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b876a09b7cea07fc5b5f0fb2dd5212854240663bab0c4ee0ca39a509b8f0cd1d" Oct 31 00:48:20.132186 containerd[1459]: time="2025-10-31T00:48:20.132106128Z" level=info msg="TearDown network for sandbox \"b876a09b7cea07fc5b5f0fb2dd5212854240663bab0c4ee0ca39a509b8f0cd1d\" successfully" Oct 31 00:48:20.132186 containerd[1459]: time="2025-10-31T00:48:20.132140432Z" level=info msg="StopPodSandbox for \"b876a09b7cea07fc5b5f0fb2dd5212854240663bab0c4ee0ca39a509b8f0cd1d\" returns successfully" Oct 31 00:48:20.132393 systemd[1]: run-netns-cni\x2df7677e27\x2dd8b7\x2d01e5\x2de89b\x2dc6b4e6bc4437.mount: Deactivated successfully. Oct 31 00:48:20.135518 containerd[1459]: time="2025-10-31T00:48:20.133644655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8zcwq,Uid:f5a896d8-63fb-485d-b0d7-8486be09050d,Namespace:kube-system,Attempt:1,}" Oct 31 00:48:20.135566 kubelet[2502]: E1031 00:48:20.132701 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:48:20.138125 systemd[1]: run-netns-cni\x2d045a9151\x2dfe60\x2decb5\x2d96a1\x2d202b4ea51a61.mount: Deactivated successfully. 
Oct 31 00:48:20.148693 systemd[1]: Started cri-containerd-65564874ee285cfc301d2eaff9459d6b679e84a83a98745c19e99440b23bd31f.scope - libcontainer container 65564874ee285cfc301d2eaff9459d6b679e84a83a98745c19e99440b23bd31f. Oct 31 00:48:20.164202 containerd[1459]: 2025-10-31 00:48:20.011 [INFO][4444] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1e2381349217450f97fa9b9526a3392c8da38ebba799a9c84d3c5e2334381dfc" Oct 31 00:48:20.164202 containerd[1459]: 2025-10-31 00:48:20.012 [INFO][4444] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1e2381349217450f97fa9b9526a3392c8da38ebba799a9c84d3c5e2334381dfc" iface="eth0" netns="/var/run/netns/cni-7dfced2a-28e7-f900-a51d-0e090005690f" Oct 31 00:48:20.164202 containerd[1459]: 2025-10-31 00:48:20.013 [INFO][4444] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1e2381349217450f97fa9b9526a3392c8da38ebba799a9c84d3c5e2334381dfc" iface="eth0" netns="/var/run/netns/cni-7dfced2a-28e7-f900-a51d-0e090005690f" Oct 31 00:48:20.164202 containerd[1459]: 2025-10-31 00:48:20.014 [INFO][4444] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1e2381349217450f97fa9b9526a3392c8da38ebba799a9c84d3c5e2334381dfc" iface="eth0" netns="/var/run/netns/cni-7dfced2a-28e7-f900-a51d-0e090005690f" Oct 31 00:48:20.164202 containerd[1459]: 2025-10-31 00:48:20.014 [INFO][4444] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1e2381349217450f97fa9b9526a3392c8da38ebba799a9c84d3c5e2334381dfc" Oct 31 00:48:20.164202 containerd[1459]: 2025-10-31 00:48:20.015 [INFO][4444] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1e2381349217450f97fa9b9526a3392c8da38ebba799a9c84d3c5e2334381dfc" Oct 31 00:48:20.164202 containerd[1459]: 2025-10-31 00:48:20.057 [INFO][4549] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1e2381349217450f97fa9b9526a3392c8da38ebba799a9c84d3c5e2334381dfc" HandleID="k8s-pod-network.1e2381349217450f97fa9b9526a3392c8da38ebba799a9c84d3c5e2334381dfc" Workload="localhost-k8s-coredns--674b8bbfcf--h9lbq-eth0" Oct 31 00:48:20.164202 containerd[1459]: 2025-10-31 00:48:20.057 [INFO][4549] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:48:20.164202 containerd[1459]: 2025-10-31 00:48:20.120 [INFO][4549] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:48:20.164202 containerd[1459]: 2025-10-31 00:48:20.144 [WARNING][4549] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="1e2381349217450f97fa9b9526a3392c8da38ebba799a9c84d3c5e2334381dfc" HandleID="k8s-pod-network.1e2381349217450f97fa9b9526a3392c8da38ebba799a9c84d3c5e2334381dfc" Workload="localhost-k8s-coredns--674b8bbfcf--h9lbq-eth0" Oct 31 00:48:20.164202 containerd[1459]: 2025-10-31 00:48:20.144 [INFO][4549] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1e2381349217450f97fa9b9526a3392c8da38ebba799a9c84d3c5e2334381dfc" HandleID="k8s-pod-network.1e2381349217450f97fa9b9526a3392c8da38ebba799a9c84d3c5e2334381dfc" Workload="localhost-k8s-coredns--674b8bbfcf--h9lbq-eth0" Oct 31 00:48:20.164202 containerd[1459]: 2025-10-31 00:48:20.147 [INFO][4549] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:48:20.164202 containerd[1459]: 2025-10-31 00:48:20.155 [INFO][4444] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="1e2381349217450f97fa9b9526a3392c8da38ebba799a9c84d3c5e2334381dfc" Oct 31 00:48:20.165270 containerd[1459]: time="2025-10-31T00:48:20.165186809Z" level=info msg="TearDown network for sandbox \"1e2381349217450f97fa9b9526a3392c8da38ebba799a9c84d3c5e2334381dfc\" successfully" Oct 31 00:48:20.165664 containerd[1459]: time="2025-10-31T00:48:20.165474048Z" level=info msg="StopPodSandbox for \"1e2381349217450f97fa9b9526a3392c8da38ebba799a9c84d3c5e2334381dfc\" returns successfully" Oct 31 00:48:20.167172 kubelet[2502]: E1031 00:48:20.166841 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:48:20.168059 containerd[1459]: time="2025-10-31T00:48:20.167991843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-h9lbq,Uid:4dc3fd13-e452-47cf-9e08-a4a9c785070b,Namespace:kube-system,Attempt:1,}" Oct 31 00:48:20.189174 kubelet[2502]: E1031 00:48:20.189116 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78f5ccdb8f-sfj2g" podUID="7d011812-0c54-49d2-a84d-25c0746a58a0" Oct 31 00:48:20.198989 containerd[1459]: time="2025-10-31T00:48:20.198337692Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:48:20.198989 containerd[1459]: time="2025-10-31T00:48:20.198441095Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:48:20.198989 containerd[1459]: time="2025-10-31T00:48:20.198457376Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:48:20.207730 containerd[1459]: time="2025-10-31T00:48:20.199303604Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:48:20.213796 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 00:48:20.227622 systemd[1]: Started cri-containerd-b2dd691cffe744167b4e5d7d916f861f1497c1d247387ab91b81ff4b892c8356.scope - libcontainer container b2dd691cffe744167b4e5d7d916f861f1497c1d247387ab91b81ff4b892c8356. 
Oct 31 00:48:20.244230 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 00:48:20.290678 containerd[1459]: time="2025-10-31T00:48:20.290441345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cf7fddbf6-nr7tc,Uid:f0ebaf56-bc9f-4f20-80ce-c5c77074a573,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"b2dd691cffe744167b4e5d7d916f861f1497c1d247387ab91b81ff4b892c8356\"" Oct 31 00:48:20.302187 containerd[1459]: time="2025-10-31T00:48:20.302148720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d455ff89f-sljxb,Uid:bc7de0b5-fad9-4849-950f-64958f0873ad,Namespace:calico-system,Attempt:0,} returns sandbox id \"65564874ee285cfc301d2eaff9459d6b679e84a83a98745c19e99440b23bd31f\"" Oct 31 00:48:20.421919 containerd[1459]: time="2025-10-31T00:48:20.421861666Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:48:20.472867 containerd[1459]: time="2025-10-31T00:48:20.472776752Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 31 00:48:20.472867 containerd[1459]: time="2025-10-31T00:48:20.472811437Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 31 00:48:20.473147 kubelet[2502]: E1031 00:48:20.473092 2502 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 00:48:20.473209 kubelet[2502]: E1031 00:48:20.473154 2502 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 00:48:20.473547 kubelet[2502]: E1031 00:48:20.473469 2502 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n6lkb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-47d96_calico-system(a6a2171a-de8b-4154-86b8-cb6aefca8e5b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 31 00:48:20.473673 containerd[1459]: time="2025-10-31T00:48:20.473536528Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 00:48:20.474992 kubelet[2502]: E1031 00:48:20.474955 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not 
found\"" pod="calico-system/goldmane-666569f655-47d96" podUID="a6a2171a-de8b-4154-86b8-cb6aefca8e5b" Oct 31 00:48:20.551190 systemd-networkd[1388]: vxlan.calico: Gained IPv6LL Oct 31 00:48:20.560356 containerd[1459]: time="2025-10-31T00:48:20.559991070Z" level=info msg="StopPodSandbox for \"3e0f54bf01b79afe10cb3e31bbbe893d92c60bea7127b57519a0354ffba411ea\"" Oct 31 00:48:20.612669 systemd-networkd[1388]: cali4ac12189205: Gained IPv6LL Oct 31 00:48:20.665454 systemd[1]: run-netns-cni\x2d7dfced2a\x2d28e7\x2df900\x2da51d\x2d0e090005690f.mount: Deactivated successfully. Oct 31 00:48:20.804603 systemd-networkd[1388]: cali2e77ee5b79b: Gained IPv6LL Oct 31 00:48:20.852709 containerd[1459]: time="2025-10-31T00:48:20.852637765Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:48:20.900097 containerd[1459]: time="2025-10-31T00:48:20.900015731Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 31 00:48:20.900097 containerd[1459]: time="2025-10-31T00:48:20.900057830Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 00:48:20.900454 kubelet[2502]: E1031 00:48:20.900384 2502 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 00:48:20.900504 kubelet[2502]: E1031 00:48:20.900459 2502 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 00:48:20.900763 kubelet[2502]: E1031 00:48:20.900695 2502 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rr8xm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7cf7fddbf6-nr7tc_calico-apiserver(f0ebaf56-bc9f-4f20-80ce-c5c77074a573): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 00:48:20.900989 containerd[1459]: time="2025-10-31T00:48:20.900819069Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 31 00:48:20.902023 kubelet[2502]: E1031 00:48:20.901960 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cf7fddbf6-nr7tc" podUID="f0ebaf56-bc9f-4f20-80ce-c5c77074a573" Oct 31 00:48:21.188672 systemd-networkd[1388]: cali59f19d12fea: Gained IPv6LL Oct 31 00:48:21.191880 kubelet[2502]: E1031 00:48:21.191747 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cf7fddbf6-nr7tc" podUID="f0ebaf56-bc9f-4f20-80ce-c5c77074a573" Oct 31 00:48:21.193641 kubelet[2502]: E1031 00:48:21.193609 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78f5ccdb8f-sfj2g" podUID="7d011812-0c54-49d2-a84d-25c0746a58a0" Oct 31 00:48:21.193739 kubelet[2502]: E1031 00:48:21.193714 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-47d96" podUID="a6a2171a-de8b-4154-86b8-cb6aefca8e5b" Oct 31 00:48:21.321369 containerd[1459]: time="2025-10-31T00:48:21.321290604Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:48:21.444647 systemd-networkd[1388]: calif72c24a3ed4: Gained IPv6LL Oct 31 00:48:21.583681 containerd[1459]: time="2025-10-31T00:48:21.583484255Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 31 00:48:21.583681 containerd[1459]: time="2025-10-31T00:48:21.583621753Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 31 00:48:21.587238 kubelet[2502]: E1031 00:48:21.583830 2502 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 00:48:21.587238 kubelet[2502]: E1031 00:48:21.583906 2502 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 00:48:21.587238 kubelet[2502]: E1031 00:48:21.584025 2502 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:12242dfb77014928886896da969d1ea0,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jsjqf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6d455ff89f-sljxb_calico-system(bc7de0b5-fad9-4849-950f-64958f0873ad): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 31 00:48:21.589748 containerd[1459]: time="2025-10-31T00:48:21.589703497Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 31 00:48:21.592110 systemd-networkd[1388]: caliaef1cf7cd63: Link UP Oct 31 00:48:21.593396 systemd-networkd[1388]: caliaef1cf7cd63: Gained carrier Oct 31 00:48:21.737992 containerd[1459]: 2025-10-31 00:48:20.250 [INFO][4619] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--8zcwq-eth0 coredns-674b8bbfcf- kube-system f5a896d8-63fb-485d-b0d7-8486be09050d 1037 0 2025-10-31 00:47:31 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-8zcwq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliaef1cf7cd63 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c96ee2823793d5350d5aa02ea4fa22660b9cbdf15c1fb6ed3b3c6ac35e267060" Namespace="kube-system" Pod="coredns-674b8bbfcf-8zcwq" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--8zcwq-" Oct 31 00:48:21.737992 containerd[1459]: 2025-10-31 00:48:20.250 [INFO][4619] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c96ee2823793d5350d5aa02ea4fa22660b9cbdf15c1fb6ed3b3c6ac35e267060" Namespace="kube-system" Pod="coredns-674b8bbfcf-8zcwq" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--8zcwq-eth0" Oct 31 00:48:21.737992 containerd[1459]: 2025-10-31 00:48:20.294 [INFO][4666] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="c96ee2823793d5350d5aa02ea4fa22660b9cbdf15c1fb6ed3b3c6ac35e267060" HandleID="k8s-pod-network.c96ee2823793d5350d5aa02ea4fa22660b9cbdf15c1fb6ed3b3c6ac35e267060" Workload="localhost-k8s-coredns--674b8bbfcf--8zcwq-eth0" Oct 31 00:48:21.737992 containerd[1459]: 2025-10-31 00:48:20.295 [INFO][4666] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c96ee2823793d5350d5aa02ea4fa22660b9cbdf15c1fb6ed3b3c6ac35e267060" HandleID="k8s-pod-network.c96ee2823793d5350d5aa02ea4fa22660b9cbdf15c1fb6ed3b3c6ac35e267060" Workload="localhost-k8s-coredns--674b8bbfcf--8zcwq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00033b8c0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-8zcwq", "timestamp":"2025-10-31 00:48:20.294344673 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 00:48:21.737992 containerd[1459]: 2025-10-31 00:48:20.295 [INFO][4666] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:48:21.737992 containerd[1459]: 2025-10-31 00:48:20.295 [INFO][4666] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:48:21.737992 containerd[1459]: 2025-10-31 00:48:20.295 [INFO][4666] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 00:48:21.737992 containerd[1459]: 2025-10-31 00:48:20.494 [INFO][4666] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c96ee2823793d5350d5aa02ea4fa22660b9cbdf15c1fb6ed3b3c6ac35e267060" host="localhost" Oct 31 00:48:21.737992 containerd[1459]: 2025-10-31 00:48:20.551 [INFO][4666] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 00:48:21.737992 containerd[1459]: 2025-10-31 00:48:20.712 [INFO][4666] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 00:48:21.737992 containerd[1459]: 2025-10-31 00:48:20.864 [INFO][4666] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 00:48:21.737992 containerd[1459]: 2025-10-31 00:48:21.349 [INFO][4666] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 00:48:21.737992 containerd[1459]: 2025-10-31 00:48:21.349 [INFO][4666] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c96ee2823793d5350d5aa02ea4fa22660b9cbdf15c1fb6ed3b3c6ac35e267060" host="localhost" Oct 31 00:48:21.737992 containerd[1459]: 2025-10-31 00:48:21.403 [INFO][4666] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c96ee2823793d5350d5aa02ea4fa22660b9cbdf15c1fb6ed3b3c6ac35e267060 Oct 31 00:48:21.737992 containerd[1459]: 2025-10-31 00:48:21.501 [INFO][4666] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c96ee2823793d5350d5aa02ea4fa22660b9cbdf15c1fb6ed3b3c6ac35e267060" host="localhost" Oct 31 00:48:21.737992 containerd[1459]: 2025-10-31 00:48:21.573 [INFO][4666] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.c96ee2823793d5350d5aa02ea4fa22660b9cbdf15c1fb6ed3b3c6ac35e267060" host="localhost" Oct 31 00:48:21.737992 containerd[1459]: 2025-10-31 00:48:21.573 [INFO][4666] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] 
handle="k8s-pod-network.c96ee2823793d5350d5aa02ea4fa22660b9cbdf15c1fb6ed3b3c6ac35e267060" host="localhost" Oct 31 00:48:21.737992 containerd[1459]: 2025-10-31 00:48:21.573 [INFO][4666] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:48:21.737992 containerd[1459]: 2025-10-31 00:48:21.574 [INFO][4666] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="c96ee2823793d5350d5aa02ea4fa22660b9cbdf15c1fb6ed3b3c6ac35e267060" HandleID="k8s-pod-network.c96ee2823793d5350d5aa02ea4fa22660b9cbdf15c1fb6ed3b3c6ac35e267060" Workload="localhost-k8s-coredns--674b8bbfcf--8zcwq-eth0" Oct 31 00:48:21.739225 containerd[1459]: 2025-10-31 00:48:21.585 [INFO][4619] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c96ee2823793d5350d5aa02ea4fa22660b9cbdf15c1fb6ed3b3c6ac35e267060" Namespace="kube-system" Pod="coredns-674b8bbfcf-8zcwq" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--8zcwq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--8zcwq-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"f5a896d8-63fb-485d-b0d7-8486be09050d", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 47, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-8zcwq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaef1cf7cd63", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:48:21.739225 containerd[1459]: 2025-10-31 00:48:21.588 [INFO][4619] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="c96ee2823793d5350d5aa02ea4fa22660b9cbdf15c1fb6ed3b3c6ac35e267060" Namespace="kube-system" Pod="coredns-674b8bbfcf-8zcwq" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--8zcwq-eth0" Oct 31 00:48:21.739225 containerd[1459]: 2025-10-31 00:48:21.588 [INFO][4619] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaef1cf7cd63 ContainerID="c96ee2823793d5350d5aa02ea4fa22660b9cbdf15c1fb6ed3b3c6ac35e267060" Namespace="kube-system" Pod="coredns-674b8bbfcf-8zcwq" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--8zcwq-eth0" Oct 31 00:48:21.739225 containerd[1459]: 2025-10-31 00:48:21.593 [INFO][4619] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="c96ee2823793d5350d5aa02ea4fa22660b9cbdf15c1fb6ed3b3c6ac35e267060" Namespace="kube-system" Pod="coredns-674b8bbfcf-8zcwq" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--8zcwq-eth0" Oct 31 00:48:21.739225 containerd[1459]: 2025-10-31 00:48:21.595 [INFO][4619] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c96ee2823793d5350d5aa02ea4fa22660b9cbdf15c1fb6ed3b3c6ac35e267060" Namespace="kube-system" Pod="coredns-674b8bbfcf-8zcwq" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--8zcwq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--8zcwq-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"f5a896d8-63fb-485d-b0d7-8486be09050d", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 47, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c96ee2823793d5350d5aa02ea4fa22660b9cbdf15c1fb6ed3b3c6ac35e267060", Pod:"coredns-674b8bbfcf-8zcwq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaef1cf7cd63", MAC:"c6:cf:01:e9:e5:4b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:48:21.739225 containerd[1459]: 2025-10-31 00:48:21.733 [INFO][4619] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c96ee2823793d5350d5aa02ea4fa22660b9cbdf15c1fb6ed3b3c6ac35e267060" Namespace="kube-system" Pod="coredns-674b8bbfcf-8zcwq" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--8zcwq-eth0" Oct 31 00:48:21.789529 containerd[1459]: time="2025-10-31T00:48:21.789391202Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:48:21.789529 containerd[1459]: time="2025-10-31T00:48:21.789476061Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:48:21.789529 containerd[1459]: time="2025-10-31T00:48:21.789489336Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:48:21.789784 containerd[1459]: time="2025-10-31T00:48:21.789592159Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:48:21.833024 systemd[1]: Started cri-containerd-c96ee2823793d5350d5aa02ea4fa22660b9cbdf15c1fb6ed3b3c6ac35e267060.scope - libcontainer container c96ee2823793d5350d5aa02ea4fa22660b9cbdf15c1fb6ed3b3c6ac35e267060. Oct 31 00:48:21.849525 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 00:48:21.878890 containerd[1459]: time="2025-10-31T00:48:21.878838623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8zcwq,Uid:f5a896d8-63fb-485d-b0d7-8486be09050d,Namespace:kube-system,Attempt:1,} returns sandbox id \"c96ee2823793d5350d5aa02ea4fa22660b9cbdf15c1fb6ed3b3c6ac35e267060\"" Oct 31 00:48:21.880393 kubelet[2502]: E1031 00:48:21.880328 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:48:21.891154 containerd[1459]: time="2025-10-31T00:48:21.890695075Z" level=info msg="CreateContainer within sandbox \"c96ee2823793d5350d5aa02ea4fa22660b9cbdf15c1fb6ed3b3c6ac35e267060\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 31 00:48:21.985282 containerd[1459]: time="2025-10-31T00:48:21.985213543Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:48:22.019723 containerd[1459]: time="2025-10-31T00:48:22.019513117Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 31 00:48:22.019723 containerd[1459]: time="2025-10-31T00:48:22.019562229Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 31 00:48:22.019923 kubelet[2502]: E1031 00:48:22.019786 2502 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 00:48:22.019923 kubelet[2502]: E1031 00:48:22.019857 2502 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 00:48:22.020063 kubelet[2502]: E1031 00:48:22.020014 2502 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jsjqf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6d455ff89f-sljxb_calico-system(bc7de0b5-fad9-4849-950f-64958f0873ad): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 31 00:48:22.021176 kubelet[2502]: E1031 00:48:22.021122 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6d455ff89f-sljxb" podUID="bc7de0b5-fad9-4849-950f-64958f0873ad" Oct 31 00:48:22.024745 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3218912963.mount: Deactivated successfully. 
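Every failing pull in this section dies the same way: containerd resolves the reference at ghcr.io, receives http.StatusNotFound for the manifest ("trying next host - response was http.StatusNotFound"), and surfaces that as ErrImagePull, because no v3.30.4 tag exists for these flatcar/calico images. The failure can be reproduced outside the kubelet with two OCI distribution API calls. The anonymous token endpoint and repository path below follow ghcr.io's usual layout and are assumptions, not taken from this log:

```go
// Sketch: ask the registry directly whether the tag exists. A 404 on
// the manifest HEAD is what containerd reports as "failed to resolve
// reference ...: not found". Endpoint layout is an assumption about
// ghcr.io's OCI distribution implementation.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"os"
)

func main() {
	repo, tag := "flatcar/calico/whisker-backend", "v3.30.4"

	// 1. Anonymous pull token for the repository.
	resp, err := http.Get("https://ghcr.io/token?service=ghcr.io&scope=repository:" + repo + ":pull")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	// 2. HEAD the manifest; expect 404 for a tag that was never pushed.
	req, _ := http.NewRequest(http.MethodHead,
		"https://ghcr.io/v2/"+repo+"/manifests/"+tag, nil)
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json")
	res, err := http.DefaultClient.Do(req)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	res.Body.Close()
	fmt.Println(repo+":"+tag, "->", res.Status) // e.g. "404 Not Found"
}
```

The same result shows up on the node itself with a manual pull through the CRI, which fails with the identical NotFound error the kubelet is logging.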
Oct 31 00:48:22.034055 containerd[1459]: time="2025-10-31T00:48:22.033988083Z" level=info msg="CreateContainer within sandbox \"c96ee2823793d5350d5aa02ea4fa22660b9cbdf15c1fb6ed3b3c6ac35e267060\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4f71a2ecd656f39d43f24175d7e6697f4e014549f8a4e7ba5e3ad124cc462022\"" Oct 31 00:48:22.034858 containerd[1459]: time="2025-10-31T00:48:22.034822549Z" level=info msg="StartContainer for \"4f71a2ecd656f39d43f24175d7e6697f4e014549f8a4e7ba5e3ad124cc462022\"" Oct 31 00:48:22.053605 systemd-networkd[1388]: cali16ab57abb90: Link UP Oct 31 00:48:22.057518 systemd-networkd[1388]: cali16ab57abb90: Gained carrier Oct 31 00:48:22.086379 systemd[1]: Started cri-containerd-4f71a2ecd656f39d43f24175d7e6697f4e014549f8a4e7ba5e3ad124cc462022.scope - libcontainer container 4f71a2ecd656f39d43f24175d7e6697f4e014549f8a4e7ba5e3ad124cc462022. Oct 31 00:48:22.091654 containerd[1459]: 2025-10-31 00:48:20.527 [INFO][4618] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7cf7fddbf6--qfkg6-eth0 calico-apiserver-7cf7fddbf6- calico-apiserver 4a3f6669-a62a-42ec-9a82-372bbb7049fb 1036 0 2025-10-31 00:47:44 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7cf7fddbf6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7cf7fddbf6-qfkg6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali16ab57abb90 [] [] }} ContainerID="ca685625f6a0c50ef96f5074a3318f4920c745ade6d20b950fe49e2636499adc" Namespace="calico-apiserver" Pod="calico-apiserver-7cf7fddbf6-qfkg6" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cf7fddbf6--qfkg6-" Oct 31 00:48:22.091654 containerd[1459]: 2025-10-31 00:48:20.527 [INFO][4618] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ca685625f6a0c50ef96f5074a3318f4920c745ade6d20b950fe49e2636499adc" Namespace="calico-apiserver" Pod="calico-apiserver-7cf7fddbf6-qfkg6" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cf7fddbf6--qfkg6-eth0" Oct 31 00:48:22.091654 containerd[1459]: 2025-10-31 00:48:20.575 [INFO][4704] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ca685625f6a0c50ef96f5074a3318f4920c745ade6d20b950fe49e2636499adc" HandleID="k8s-pod-network.ca685625f6a0c50ef96f5074a3318f4920c745ade6d20b950fe49e2636499adc" Workload="localhost-k8s-calico--apiserver--7cf7fddbf6--qfkg6-eth0" Oct 31 00:48:22.091654 containerd[1459]: 2025-10-31 00:48:20.576 [INFO][4704] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ca685625f6a0c50ef96f5074a3318f4920c745ade6d20b950fe49e2636499adc" HandleID="k8s-pod-network.ca685625f6a0c50ef96f5074a3318f4920c745ade6d20b950fe49e2636499adc" Workload="localhost-k8s-calico--apiserver--7cf7fddbf6--qfkg6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad390), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7cf7fddbf6-qfkg6", "timestamp":"2025-10-31 00:48:20.57594847 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 00:48:22.091654 containerd[1459]: 2025-10-31 00:48:20.576 [INFO][4704] 
ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:48:22.091654 containerd[1459]: 2025-10-31 00:48:21.573 [INFO][4704] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:48:22.091654 containerd[1459]: 2025-10-31 00:48:21.573 [INFO][4704] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 00:48:22.091654 containerd[1459]: 2025-10-31 00:48:21.719 [INFO][4704] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ca685625f6a0c50ef96f5074a3318f4920c745ade6d20b950fe49e2636499adc" host="localhost" Oct 31 00:48:22.091654 containerd[1459]: 2025-10-31 00:48:21.772 [INFO][4704] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 00:48:22.091654 containerd[1459]: 2025-10-31 00:48:21.894 [INFO][4704] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 00:48:22.091654 containerd[1459]: 2025-10-31 00:48:21.904 [INFO][4704] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 00:48:22.091654 containerd[1459]: 2025-10-31 00:48:21.934 [INFO][4704] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 00:48:22.091654 containerd[1459]: 2025-10-31 00:48:21.934 [INFO][4704] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ca685625f6a0c50ef96f5074a3318f4920c745ade6d20b950fe49e2636499adc" host="localhost" Oct 31 00:48:22.091654 containerd[1459]: 2025-10-31 00:48:21.937 [INFO][4704] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ca685625f6a0c50ef96f5074a3318f4920c745ade6d20b950fe49e2636499adc Oct 31 00:48:22.091654 containerd[1459]: 2025-10-31 00:48:22.026 [INFO][4704] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ca685625f6a0c50ef96f5074a3318f4920c745ade6d20b950fe49e2636499adc" host="localhost" Oct 31 00:48:22.091654 containerd[1459]: 2025-10-31 00:48:22.038 [INFO][4704] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.ca685625f6a0c50ef96f5074a3318f4920c745ade6d20b950fe49e2636499adc" host="localhost" Oct 31 00:48:22.091654 containerd[1459]: 2025-10-31 00:48:22.038 [INFO][4704] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.ca685625f6a0c50ef96f5074a3318f4920c745ade6d20b950fe49e2636499adc" host="localhost" Oct 31 00:48:22.091654 containerd[1459]: 2025-10-31 00:48:22.038 [INFO][4704] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
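The IPAM entries above all draw from one block: this host holds an affinity for 192.168.88.128/26, and each new sandbox claims the next free address from it (.133, .134, and now .135 in this log). A standard-library sketch of the containment and size math; Calico's real allocator additionally persists handles and reservations in the datastore:

```go
// Sketch of the block math behind "Trying affinity for
// 192.168.88.128/26" and the claimed addresses in this log.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26")
	assigned := []netip.Addr{
		netip.MustParseAddr("192.168.88.133"),
		netip.MustParseAddr("192.168.88.134"),
		netip.MustParseAddr("192.168.88.135"),
	}
	for _, a := range assigned {
		fmt.Printf("%s in %s: %v\n", a, block, block.Contains(a))
	}
	// A /26 leaves 6 host bits: 64 addresses per block.
	fmt.Println("addresses per block:", 1<<(32-block.Bits()))
}
```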
Oct 31 00:48:22.091654 containerd[1459]: 2025-10-31 00:48:22.038 [INFO][4704] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="ca685625f6a0c50ef96f5074a3318f4920c745ade6d20b950fe49e2636499adc" HandleID="k8s-pod-network.ca685625f6a0c50ef96f5074a3318f4920c745ade6d20b950fe49e2636499adc" Workload="localhost-k8s-calico--apiserver--7cf7fddbf6--qfkg6-eth0" Oct 31 00:48:22.092642 containerd[1459]: 2025-10-31 00:48:22.048 [INFO][4618] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ca685625f6a0c50ef96f5074a3318f4920c745ade6d20b950fe49e2636499adc" Namespace="calico-apiserver" Pod="calico-apiserver-7cf7fddbf6-qfkg6" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cf7fddbf6--qfkg6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cf7fddbf6--qfkg6-eth0", GenerateName:"calico-apiserver-7cf7fddbf6-", Namespace:"calico-apiserver", SelfLink:"", UID:"4a3f6669-a62a-42ec-9a82-372bbb7049fb", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 47, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cf7fddbf6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7cf7fddbf6-qfkg6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali16ab57abb90", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:48:22.092642 containerd[1459]: 2025-10-31 00:48:22.048 [INFO][4618] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="ca685625f6a0c50ef96f5074a3318f4920c745ade6d20b950fe49e2636499adc" Namespace="calico-apiserver" Pod="calico-apiserver-7cf7fddbf6-qfkg6" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cf7fddbf6--qfkg6-eth0" Oct 31 00:48:22.092642 containerd[1459]: 2025-10-31 00:48:22.048 [INFO][4618] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali16ab57abb90 ContainerID="ca685625f6a0c50ef96f5074a3318f4920c745ade6d20b950fe49e2636499adc" Namespace="calico-apiserver" Pod="calico-apiserver-7cf7fddbf6-qfkg6" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cf7fddbf6--qfkg6-eth0" Oct 31 00:48:22.092642 containerd[1459]: 2025-10-31 00:48:22.058 [INFO][4618] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ca685625f6a0c50ef96f5074a3318f4920c745ade6d20b950fe49e2636499adc" Namespace="calico-apiserver" Pod="calico-apiserver-7cf7fddbf6-qfkg6" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cf7fddbf6--qfkg6-eth0" Oct 31 00:48:22.092642 containerd[1459]: 2025-10-31 00:48:22.059 [INFO][4618] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="ca685625f6a0c50ef96f5074a3318f4920c745ade6d20b950fe49e2636499adc" Namespace="calico-apiserver" Pod="calico-apiserver-7cf7fddbf6-qfkg6" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cf7fddbf6--qfkg6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cf7fddbf6--qfkg6-eth0", GenerateName:"calico-apiserver-7cf7fddbf6-", Namespace:"calico-apiserver", SelfLink:"", UID:"4a3f6669-a62a-42ec-9a82-372bbb7049fb", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 47, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cf7fddbf6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ca685625f6a0c50ef96f5074a3318f4920c745ade6d20b950fe49e2636499adc", Pod:"calico-apiserver-7cf7fddbf6-qfkg6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali16ab57abb90", MAC:"3a:5f:d2:ee:4a:d7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:48:22.092642 containerd[1459]: 2025-10-31 00:48:22.076 [INFO][4618] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ca685625f6a0c50ef96f5074a3318f4920c745ade6d20b950fe49e2636499adc" Namespace="calico-apiserver" Pod="calico-apiserver-7cf7fddbf6-qfkg6" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cf7fddbf6--qfkg6-eth0" Oct 31 00:48:22.134104 systemd-networkd[1388]: cali551c2e6172a: Link UP Oct 31 00:48:22.143883 systemd-networkd[1388]: cali551c2e6172a: Gained carrier Oct 31 00:48:22.153903 containerd[1459]: 2025-10-31 00:48:20.955 [INFO][4726] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3e0f54bf01b79afe10cb3e31bbbe893d92c60bea7127b57519a0354ffba411ea" Oct 31 00:48:22.153903 containerd[1459]: 2025-10-31 00:48:20.955 [INFO][4726] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3e0f54bf01b79afe10cb3e31bbbe893d92c60bea7127b57519a0354ffba411ea" iface="eth0" netns="/var/run/netns/cni-eb384c40-1733-f1df-11be-d126eeca6661" Oct 31 00:48:22.153903 containerd[1459]: 2025-10-31 00:48:20.957 [INFO][4726] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3e0f54bf01b79afe10cb3e31bbbe893d92c60bea7127b57519a0354ffba411ea" iface="eth0" netns="/var/run/netns/cni-eb384c40-1733-f1df-11be-d126eeca6661" Oct 31 00:48:22.153903 containerd[1459]: 2025-10-31 00:48:20.958 [INFO][4726] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3e0f54bf01b79afe10cb3e31bbbe893d92c60bea7127b57519a0354ffba411ea" iface="eth0" netns="/var/run/netns/cni-eb384c40-1733-f1df-11be-d126eeca6661" Oct 31 00:48:22.153903 containerd[1459]: 2025-10-31 00:48:20.958 [INFO][4726] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3e0f54bf01b79afe10cb3e31bbbe893d92c60bea7127b57519a0354ffba411ea" Oct 31 00:48:22.153903 containerd[1459]: 2025-10-31 00:48:20.958 [INFO][4726] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3e0f54bf01b79afe10cb3e31bbbe893d92c60bea7127b57519a0354ffba411ea" Oct 31 00:48:22.153903 containerd[1459]: 2025-10-31 00:48:20.978 [INFO][4738] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3e0f54bf01b79afe10cb3e31bbbe893d92c60bea7127b57519a0354ffba411ea" HandleID="k8s-pod-network.3e0f54bf01b79afe10cb3e31bbbe893d92c60bea7127b57519a0354ffba411ea" Workload="localhost-k8s-csi--node--driver--cznnv-eth0" Oct 31 00:48:22.153903 containerd[1459]: 2025-10-31 00:48:20.978 [INFO][4738] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:48:22.153903 containerd[1459]: 2025-10-31 00:48:22.123 [INFO][4738] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:48:22.153903 containerd[1459]: 2025-10-31 00:48:22.135 [WARNING][4738] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="3e0f54bf01b79afe10cb3e31bbbe893d92c60bea7127b57519a0354ffba411ea" HandleID="k8s-pod-network.3e0f54bf01b79afe10cb3e31bbbe893d92c60bea7127b57519a0354ffba411ea" Workload="localhost-k8s-csi--node--driver--cznnv-eth0" Oct 31 00:48:22.153903 containerd[1459]: 2025-10-31 00:48:22.135 [INFO][4738] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3e0f54bf01b79afe10cb3e31bbbe893d92c60bea7127b57519a0354ffba411ea" HandleID="k8s-pod-network.3e0f54bf01b79afe10cb3e31bbbe893d92c60bea7127b57519a0354ffba411ea" Workload="localhost-k8s-csi--node--driver--cznnv-eth0" Oct 31 00:48:22.153903 containerd[1459]: 2025-10-31 00:48:22.137 [INFO][4738] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:48:22.153903 containerd[1459]: 2025-10-31 00:48:22.149 [INFO][4726] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3e0f54bf01b79afe10cb3e31bbbe893d92c60bea7127b57519a0354ffba411ea" Oct 31 00:48:22.154301 containerd[1459]: time="2025-10-31T00:48:22.154146462Z" level=info msg="TearDown network for sandbox \"3e0f54bf01b79afe10cb3e31bbbe893d92c60bea7127b57519a0354ffba411ea\" successfully" Oct 31 00:48:22.154301 containerd[1459]: time="2025-10-31T00:48:22.154209250Z" level=info msg="StopPodSandbox for \"3e0f54bf01b79afe10cb3e31bbbe893d92c60bea7127b57519a0354ffba411ea\" returns successfully" Oct 31 00:48:22.156609 containerd[1459]: time="2025-10-31T00:48:22.156184587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cznnv,Uid:d615dcdd-9217-4b99-9985-812be6d75b53,Namespace:calico-system,Attempt:1,}" Oct 31 00:48:22.163379 containerd[1459]: time="2025-10-31T00:48:22.163251329Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:48:22.163554 containerd[1459]: time="2025-10-31T00:48:22.163362508Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:48:22.163554 containerd[1459]: time="2025-10-31T00:48:22.163382415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:48:22.164746 containerd[1459]: time="2025-10-31T00:48:22.164668319Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:48:22.172917 containerd[1459]: time="2025-10-31T00:48:22.172830077Z" level=info msg="StartContainer for \"4f71a2ecd656f39d43f24175d7e6697f4e014549f8a4e7ba5e3ad124cc462022\" returns successfully" Oct 31 00:48:22.179572 containerd[1459]: 2025-10-31 00:48:20.527 [INFO][4690] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--h9lbq-eth0 coredns-674b8bbfcf- kube-system 4dc3fd13-e452-47cf-9e08-a4a9c785070b 1041 0 2025-10-31 00:47:31 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-h9lbq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali551c2e6172a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="d8e632126b002720a4587a3333eb0d40dc1584eed1cacbf7eb047facf4652d28" Namespace="kube-system" Pod="coredns-674b8bbfcf-h9lbq" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--h9lbq-" Oct 31 00:48:22.179572 containerd[1459]: 2025-10-31 00:48:20.527 [INFO][4690] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d8e632126b002720a4587a3333eb0d40dc1584eed1cacbf7eb047facf4652d28" Namespace="kube-system" Pod="coredns-674b8bbfcf-h9lbq" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--h9lbq-eth0" Oct 31 00:48:22.179572 containerd[1459]: 2025-10-31 00:48:20.593 [INFO][4706] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d8e632126b002720a4587a3333eb0d40dc1584eed1cacbf7eb047facf4652d28" HandleID="k8s-pod-network.d8e632126b002720a4587a3333eb0d40dc1584eed1cacbf7eb047facf4652d28" Workload="localhost-k8s-coredns--674b8bbfcf--h9lbq-eth0" Oct 31 00:48:22.179572 containerd[1459]: 2025-10-31 00:48:20.594 [INFO][4706] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d8e632126b002720a4587a3333eb0d40dc1584eed1cacbf7eb047facf4652d28" HandleID="k8s-pod-network.d8e632126b002720a4587a3333eb0d40dc1584eed1cacbf7eb047facf4652d28" Workload="localhost-k8s-coredns--674b8bbfcf--h9lbq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00052cac0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-h9lbq", "timestamp":"2025-10-31 00:48:20.593752759 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 00:48:22.179572 containerd[1459]: 2025-10-31 00:48:20.594 [INFO][4706] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:48:22.179572 containerd[1459]: 2025-10-31 00:48:22.038 [INFO][4706] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
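Note the gap above: the coredns-h9lbq IPAM request logged "About to acquire host-wide IPAM lock" at 00:48:20.594 but only acquired it at 00:48:22.038. Concurrent CNI ADD/DEL calls on a node are serialized behind a single lock, so each request queues until the previous claim commits its block write. A toy model of that serialization using an in-process mutex; the real plugin needs a lock that also excludes other CNI plugin processes (a file lock, by assumption), since each invocation is a separate binary:

```go
// Toy model of the host-wide IPAM lock: three concurrent "CNI calls"
// queue behind one mutex, reproducing the About-to-acquire / Acquired /
// Released sequence interleaved in the log above.
package main

import (
	"fmt"
	"sync"
	"time"
)

var ipamLock sync.Mutex

func assign(pod string, wg *sync.WaitGroup) {
	defer wg.Done()
	fmt.Println(pod, "about to acquire host-wide IPAM lock")
	ipamLock.Lock()
	fmt.Println(pod, "acquired host-wide IPAM lock")
	time.Sleep(100 * time.Millisecond) // claim an address, write the block
	ipamLock.Unlock()
	fmt.Println(pod, "released host-wide IPAM lock")
}

func main() {
	var wg sync.WaitGroup
	for _, pod := range []string{"coredns-8zcwq", "apiserver-qfkg6", "coredns-h9lbq"} {
		wg.Add(1)
		go assign(pod, &wg)
	}
	wg.Wait()
}
```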
Oct 31 00:48:22.179572 containerd[1459]: 2025-10-31 00:48:22.038 [INFO][4706] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 00:48:22.179572 containerd[1459]: 2025-10-31 00:48:22.053 [INFO][4706] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d8e632126b002720a4587a3333eb0d40dc1584eed1cacbf7eb047facf4652d28" host="localhost" Oct 31 00:48:22.179572 containerd[1459]: 2025-10-31 00:48:22.071 [INFO][4706] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 00:48:22.179572 containerd[1459]: 2025-10-31 00:48:22.089 [INFO][4706] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 00:48:22.179572 containerd[1459]: 2025-10-31 00:48:22.093 [INFO][4706] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 00:48:22.179572 containerd[1459]: 2025-10-31 00:48:22.098 [INFO][4706] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 00:48:22.179572 containerd[1459]: 2025-10-31 00:48:22.098 [INFO][4706] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d8e632126b002720a4587a3333eb0d40dc1584eed1cacbf7eb047facf4652d28" host="localhost" Oct 31 00:48:22.179572 containerd[1459]: 2025-10-31 00:48:22.100 [INFO][4706] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d8e632126b002720a4587a3333eb0d40dc1584eed1cacbf7eb047facf4652d28 Oct 31 00:48:22.179572 containerd[1459]: 2025-10-31 00:48:22.109 [INFO][4706] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d8e632126b002720a4587a3333eb0d40dc1584eed1cacbf7eb047facf4652d28" host="localhost" Oct 31 00:48:22.179572 containerd[1459]: 2025-10-31 00:48:22.120 [INFO][4706] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.d8e632126b002720a4587a3333eb0d40dc1584eed1cacbf7eb047facf4652d28" host="localhost" Oct 31 00:48:22.179572 containerd[1459]: 2025-10-31 00:48:22.121 [INFO][4706] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.d8e632126b002720a4587a3333eb0d40dc1584eed1cacbf7eb047facf4652d28" host="localhost" Oct 31 00:48:22.179572 containerd[1459]: 2025-10-31 00:48:22.121 [INFO][4706] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 00:48:22.179572 containerd[1459]: 2025-10-31 00:48:22.121 [INFO][4706] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="d8e632126b002720a4587a3333eb0d40dc1584eed1cacbf7eb047facf4652d28" HandleID="k8s-pod-network.d8e632126b002720a4587a3333eb0d40dc1584eed1cacbf7eb047facf4652d28" Workload="localhost-k8s-coredns--674b8bbfcf--h9lbq-eth0" Oct 31 00:48:22.180429 containerd[1459]: 2025-10-31 00:48:22.127 [INFO][4690] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d8e632126b002720a4587a3333eb0d40dc1584eed1cacbf7eb047facf4652d28" Namespace="kube-system" Pod="coredns-674b8bbfcf-h9lbq" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--h9lbq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--h9lbq-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4dc3fd13-e452-47cf-9e08-a4a9c785070b", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 47, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-h9lbq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali551c2e6172a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:48:22.180429 containerd[1459]: 2025-10-31 00:48:22.127 [INFO][4690] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="d8e632126b002720a4587a3333eb0d40dc1584eed1cacbf7eb047facf4652d28" Namespace="kube-system" Pod="coredns-674b8bbfcf-h9lbq" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--h9lbq-eth0" Oct 31 00:48:22.180429 containerd[1459]: 2025-10-31 00:48:22.127 [INFO][4690] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali551c2e6172a ContainerID="d8e632126b002720a4587a3333eb0d40dc1584eed1cacbf7eb047facf4652d28" Namespace="kube-system" Pod="coredns-674b8bbfcf-h9lbq" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--h9lbq-eth0" Oct 31 00:48:22.180429 containerd[1459]: 2025-10-31 00:48:22.147 [INFO][4690] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d8e632126b002720a4587a3333eb0d40dc1584eed1cacbf7eb047facf4652d28" Namespace="kube-system" Pod="coredns-674b8bbfcf-h9lbq" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--h9lbq-eth0" Oct 31 00:48:22.180429 
containerd[1459]: 2025-10-31 00:48:22.148 [INFO][4690] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d8e632126b002720a4587a3333eb0d40dc1584eed1cacbf7eb047facf4652d28" Namespace="kube-system" Pod="coredns-674b8bbfcf-h9lbq" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--h9lbq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--h9lbq-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4dc3fd13-e452-47cf-9e08-a4a9c785070b", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 47, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d8e632126b002720a4587a3333eb0d40dc1584eed1cacbf7eb047facf4652d28", Pod:"coredns-674b8bbfcf-h9lbq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali551c2e6172a", MAC:"2a:d6:7a:7b:dd:4a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:48:22.180429 containerd[1459]: 2025-10-31 00:48:22.173 [INFO][4690] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d8e632126b002720a4587a3333eb0d40dc1584eed1cacbf7eb047facf4652d28" Namespace="kube-system" Pod="coredns-674b8bbfcf-h9lbq" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--h9lbq-eth0" Oct 31 00:48:22.190057 systemd[1]: Started cri-containerd-ca685625f6a0c50ef96f5074a3318f4920c745ade6d20b950fe49e2636499adc.scope - libcontainer container ca685625f6a0c50ef96f5074a3318f4920c745ade6d20b950fe49e2636499adc. 
Oct 31 00:48:22.200546 kubelet[2502]: E1031 00:48:22.199396 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:48:22.205657 kubelet[2502]: E1031 00:48:22.205474 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cf7fddbf6-nr7tc" podUID="f0ebaf56-bc9f-4f20-80ce-c5c77074a573" Oct 31 00:48:22.206587 kubelet[2502]: E1031 00:48:22.206540 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6d455ff89f-sljxb" podUID="bc7de0b5-fad9-4849-950f-64958f0873ad" Oct 31 00:48:22.235622 containerd[1459]: time="2025-10-31T00:48:22.235121774Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:48:22.235622 containerd[1459]: time="2025-10-31T00:48:22.235188469Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:48:22.235622 containerd[1459]: time="2025-10-31T00:48:22.235221371Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:48:22.235622 containerd[1459]: time="2025-10-31T00:48:22.235355362Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:48:22.257464 kubelet[2502]: I1031 00:48:22.257381 2502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-8zcwq" podStartSLOduration=51.25725891 podStartE2EDuration="51.25725891s" podCreationTimestamp="2025-10-31 00:47:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 00:48:22.226500643 +0000 UTC m=+57.766048823" watchObservedRunningTime="2025-10-31 00:48:22.25725891 +0000 UTC m=+57.796807090" Oct 31 00:48:22.271134 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 00:48:22.277887 systemd[1]: Started cri-containerd-d8e632126b002720a4587a3333eb0d40dc1584eed1cacbf7eb047facf4652d28.scope - libcontainer container d8e632126b002720a4587a3333eb0d40dc1584eed1cacbf7eb047facf4652d28. Oct 31 00:48:22.300537 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 00:48:22.307116 containerd[1459]: time="2025-10-31T00:48:22.307072187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cf7fddbf6-qfkg6,Uid:4a3f6669-a62a-42ec-9a82-372bbb7049fb,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"ca685625f6a0c50ef96f5074a3318f4920c745ade6d20b950fe49e2636499adc\"" Oct 31 00:48:22.311353 containerd[1459]: time="2025-10-31T00:48:22.311304060Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 00:48:22.344606 containerd[1459]: time="2025-10-31T00:48:22.344565113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-h9lbq,Uid:4dc3fd13-e452-47cf-9e08-a4a9c785070b,Namespace:kube-system,Attempt:1,} returns sandbox id \"d8e632126b002720a4587a3333eb0d40dc1584eed1cacbf7eb047facf4652d28\"" Oct 31 00:48:22.346182 kubelet[2502]: E1031 00:48:22.345949 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:48:22.363076 containerd[1459]: time="2025-10-31T00:48:22.362930250Z" level=info msg="CreateContainer within sandbox \"d8e632126b002720a4587a3333eb0d40dc1584eed1cacbf7eb047facf4652d28\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 31 00:48:22.397868 containerd[1459]: time="2025-10-31T00:48:22.397797107Z" level=info msg="CreateContainer within sandbox \"d8e632126b002720a4587a3333eb0d40dc1584eed1cacbf7eb047facf4652d28\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2d42b1738253bcabfd9ee6833ba5531b98804339690070a9ab7c4e2370597445\"" Oct 31 00:48:22.399374 containerd[1459]: time="2025-10-31T00:48:22.399292393Z" level=info msg="StartContainer for \"2d42b1738253bcabfd9ee6833ba5531b98804339690070a9ab7c4e2370597445\"" Oct 31 00:48:22.428063 systemd-networkd[1388]: cali9d32c349d96: Link UP Oct 31 00:48:22.430878 systemd-networkd[1388]: cali9d32c349d96: Gained carrier Oct 31 00:48:22.449666 systemd[1]: Started cri-containerd-2d42b1738253bcabfd9ee6833ba5531b98804339690070a9ab7c4e2370597445.scope - libcontainer container 2d42b1738253bcabfd9ee6833ba5531b98804339690070a9ab7c4e2370597445. 
Oct 31 00:48:22.474294 containerd[1459]: 2025-10-31 00:48:22.269 [INFO][4888] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--cznnv-eth0 csi-node-driver- calico-system d615dcdd-9217-4b99-9985-812be6d75b53 1060 0 2025-10-31 00:47:48 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-cznnv eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali9d32c349d96 [] [] }} ContainerID="d99dd13f5f7a143aef6be4d26005e488e5f26eb3af7badccbc6cf0e195afd474" Namespace="calico-system" Pod="csi-node-driver-cznnv" WorkloadEndpoint="localhost-k8s-csi--node--driver--cznnv-" Oct 31 00:48:22.474294 containerd[1459]: 2025-10-31 00:48:22.269 [INFO][4888] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d99dd13f5f7a143aef6be4d26005e488e5f26eb3af7badccbc6cf0e195afd474" Namespace="calico-system" Pod="csi-node-driver-cznnv" WorkloadEndpoint="localhost-k8s-csi--node--driver--cznnv-eth0" Oct 31 00:48:22.474294 containerd[1459]: 2025-10-31 00:48:22.360 [INFO][4951] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d99dd13f5f7a143aef6be4d26005e488e5f26eb3af7badccbc6cf0e195afd474" HandleID="k8s-pod-network.d99dd13f5f7a143aef6be4d26005e488e5f26eb3af7badccbc6cf0e195afd474" Workload="localhost-k8s-csi--node--driver--cznnv-eth0" Oct 31 00:48:22.474294 containerd[1459]: 2025-10-31 00:48:22.360 [INFO][4951] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d99dd13f5f7a143aef6be4d26005e488e5f26eb3af7badccbc6cf0e195afd474" HandleID="k8s-pod-network.d99dd13f5f7a143aef6be4d26005e488e5f26eb3af7badccbc6cf0e195afd474" Workload="localhost-k8s-csi--node--driver--cznnv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000be270), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-cznnv", "timestamp":"2025-10-31 00:48:22.360358122 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 00:48:22.474294 containerd[1459]: 2025-10-31 00:48:22.360 [INFO][4951] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:48:22.474294 containerd[1459]: 2025-10-31 00:48:22.360 [INFO][4951] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 00:48:22.474294 containerd[1459]: 2025-10-31 00:48:22.360 [INFO][4951] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 00:48:22.474294 containerd[1459]: 2025-10-31 00:48:22.374 [INFO][4951] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d99dd13f5f7a143aef6be4d26005e488e5f26eb3af7badccbc6cf0e195afd474" host="localhost" Oct 31 00:48:22.474294 containerd[1459]: 2025-10-31 00:48:22.386 [INFO][4951] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 00:48:22.474294 containerd[1459]: 2025-10-31 00:48:22.392 [INFO][4951] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 00:48:22.474294 containerd[1459]: 2025-10-31 00:48:22.393 [INFO][4951] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 00:48:22.474294 containerd[1459]: 2025-10-31 00:48:22.396 [INFO][4951] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 00:48:22.474294 containerd[1459]: 2025-10-31 00:48:22.396 [INFO][4951] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d99dd13f5f7a143aef6be4d26005e488e5f26eb3af7badccbc6cf0e195afd474" host="localhost" Oct 31 00:48:22.474294 containerd[1459]: 2025-10-31 00:48:22.397 [INFO][4951] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d99dd13f5f7a143aef6be4d26005e488e5f26eb3af7badccbc6cf0e195afd474 Oct 31 00:48:22.474294 containerd[1459]: 2025-10-31 00:48:22.403 [INFO][4951] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d99dd13f5f7a143aef6be4d26005e488e5f26eb3af7badccbc6cf0e195afd474" host="localhost" Oct 31 00:48:22.474294 containerd[1459]: 2025-10-31 00:48:22.414 [INFO][4951] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.d99dd13f5f7a143aef6be4d26005e488e5f26eb3af7badccbc6cf0e195afd474" host="localhost" Oct 31 00:48:22.474294 containerd[1459]: 2025-10-31 00:48:22.415 [INFO][4951] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.d99dd13f5f7a143aef6be4d26005e488e5f26eb3af7badccbc6cf0e195afd474" host="localhost" Oct 31 00:48:22.474294 containerd[1459]: 2025-10-31 00:48:22.415 [INFO][4951] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 00:48:22.474294 containerd[1459]: 2025-10-31 00:48:22.415 [INFO][4951] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="d99dd13f5f7a143aef6be4d26005e488e5f26eb3af7badccbc6cf0e195afd474" HandleID="k8s-pod-network.d99dd13f5f7a143aef6be4d26005e488e5f26eb3af7badccbc6cf0e195afd474" Workload="localhost-k8s-csi--node--driver--cznnv-eth0" Oct 31 00:48:22.477617 containerd[1459]: 2025-10-31 00:48:22.420 [INFO][4888] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d99dd13f5f7a143aef6be4d26005e488e5f26eb3af7badccbc6cf0e195afd474" Namespace="calico-system" Pod="csi-node-driver-cznnv" WorkloadEndpoint="localhost-k8s-csi--node--driver--cznnv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--cznnv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d615dcdd-9217-4b99-9985-812be6d75b53", ResourceVersion:"1060", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 47, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-cznnv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9d32c349d96", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:48:22.477617 containerd[1459]: 2025-10-31 00:48:22.420 [INFO][4888] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="d99dd13f5f7a143aef6be4d26005e488e5f26eb3af7badccbc6cf0e195afd474" Namespace="calico-system" Pod="csi-node-driver-cznnv" WorkloadEndpoint="localhost-k8s-csi--node--driver--cznnv-eth0" Oct 31 00:48:22.477617 containerd[1459]: 2025-10-31 00:48:22.420 [INFO][4888] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9d32c349d96 ContainerID="d99dd13f5f7a143aef6be4d26005e488e5f26eb3af7badccbc6cf0e195afd474" Namespace="calico-system" Pod="csi-node-driver-cznnv" WorkloadEndpoint="localhost-k8s-csi--node--driver--cznnv-eth0" Oct 31 00:48:22.477617 containerd[1459]: 2025-10-31 00:48:22.431 [INFO][4888] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d99dd13f5f7a143aef6be4d26005e488e5f26eb3af7badccbc6cf0e195afd474" Namespace="calico-system" Pod="csi-node-driver-cznnv" WorkloadEndpoint="localhost-k8s-csi--node--driver--cznnv-eth0" Oct 31 00:48:22.477617 containerd[1459]: 2025-10-31 00:48:22.439 [INFO][4888] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d99dd13f5f7a143aef6be4d26005e488e5f26eb3af7badccbc6cf0e195afd474" Namespace="calico-system" Pod="csi-node-driver-cznnv" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--cznnv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--cznnv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d615dcdd-9217-4b99-9985-812be6d75b53", ResourceVersion:"1060", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 47, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d99dd13f5f7a143aef6be4d26005e488e5f26eb3af7badccbc6cf0e195afd474", Pod:"csi-node-driver-cznnv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9d32c349d96", MAC:"16:c8:c6:ad:57:64", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:48:22.477617 containerd[1459]: 2025-10-31 00:48:22.463 [INFO][4888] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d99dd13f5f7a143aef6be4d26005e488e5f26eb3af7badccbc6cf0e195afd474" Namespace="calico-system" Pod="csi-node-driver-cznnv" WorkloadEndpoint="localhost-k8s-csi--node--driver--cznnv-eth0" Oct 31 00:48:22.497789 containerd[1459]: time="2025-10-31T00:48:22.497735087Z" level=info msg="StartContainer for \"2d42b1738253bcabfd9ee6833ba5531b98804339690070a9ab7c4e2370597445\" returns successfully" Oct 31 00:48:22.511154 containerd[1459]: time="2025-10-31T00:48:22.511008477Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:48:22.511154 containerd[1459]: time="2025-10-31T00:48:22.511066666Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:48:22.511154 containerd[1459]: time="2025-10-31T00:48:22.511091143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:48:22.511505 containerd[1459]: time="2025-10-31T00:48:22.511213292Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:48:22.537784 systemd[1]: Started cri-containerd-d99dd13f5f7a143aef6be4d26005e488e5f26eb3af7badccbc6cf0e195afd474.scope - libcontainer container d99dd13f5f7a143aef6be4d26005e488e5f26eb3af7badccbc6cf0e195afd474. 
Oct 31 00:48:22.577920 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 00:48:22.598184 containerd[1459]: time="2025-10-31T00:48:22.598139081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cznnv,Uid:d615dcdd-9217-4b99-9985-812be6d75b53,Namespace:calico-system,Attempt:1,} returns sandbox id \"d99dd13f5f7a143aef6be4d26005e488e5f26eb3af7badccbc6cf0e195afd474\"" Oct 31 00:48:22.687333 containerd[1459]: time="2025-10-31T00:48:22.687235504Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:48:22.712680 containerd[1459]: time="2025-10-31T00:48:22.712574469Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 00:48:22.712915 containerd[1459]: time="2025-10-31T00:48:22.712733789Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 31 00:48:22.713060 kubelet[2502]: E1031 00:48:22.712975 2502 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 00:48:22.713143 kubelet[2502]: E1031 00:48:22.713054 2502 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 00:48:22.713519 kubelet[2502]: E1031 00:48:22.713396 2502 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4ql48,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7cf7fddbf6-qfkg6_calico-apiserver(4a3f6669-a62a-42ec-9a82-372bbb7049fb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 00:48:22.713802 containerd[1459]: time="2025-10-31T00:48:22.713694441Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 31 00:48:22.715424 kubelet[2502]: E1031 00:48:22.715331 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cf7fddbf6-qfkg6" podUID="4a3f6669-a62a-42ec-9a82-372bbb7049fb" Oct 31 00:48:22.801982 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3567909714.mount: Deactivated successfully. Oct 31 00:48:22.802109 systemd[1]: run-netns-cni\x2deb384c40\x2d1733\x2df1df\x2d11be\x2dd126eeca6661.mount: Deactivated successfully. 
Oct 31 00:48:22.917737 systemd-networkd[1388]: caliaef1cf7cd63: Gained IPv6LL Oct 31 00:48:23.058643 containerd[1459]: time="2025-10-31T00:48:23.058483985Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:48:23.077766 containerd[1459]: time="2025-10-31T00:48:23.077548883Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 31 00:48:23.077766 containerd[1459]: time="2025-10-31T00:48:23.077686611Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 31 00:48:23.077954 kubelet[2502]: E1031 00:48:23.077908 2502 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 00:48:23.078002 kubelet[2502]: E1031 00:48:23.077974 2502 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 00:48:23.078189 kubelet[2502]: E1031 00:48:23.078135 2502 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jzh5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-cznnv_calico-system(d615dcdd-9217-4b99-9985-812be6d75b53): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 31 00:48:23.080739 containerd[1459]: time="2025-10-31T00:48:23.080711148Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 31 00:48:23.206229 kubelet[2502]: E1031 00:48:23.206095 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:48:23.208810 kubelet[2502]: E1031 00:48:23.208740 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cf7fddbf6-qfkg6" podUID="4a3f6669-a62a-42ec-9a82-372bbb7049fb" Oct 31 00:48:23.211338 kubelet[2502]: E1031 00:48:23.211279 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:48:23.364714 systemd-networkd[1388]: cali551c2e6172a: Gained IPv6LL Oct 31 00:48:23.427666 containerd[1459]: time="2025-10-31T00:48:23.427582503Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:48:23.467356 kubelet[2502]: I1031 00:48:23.466719 2502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-h9lbq" podStartSLOduration=52.466698502 podStartE2EDuration="52.466698502s" podCreationTimestamp="2025-10-31 00:47:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 00:48:23.466146586 +0000 UTC m=+59.005694746" watchObservedRunningTime="2025-10-31 00:48:23.466698502 +0000 UTC m=+59.006246662" Oct 31 00:48:23.493681 systemd-networkd[1388]: cali9d32c349d96: Gained IPv6LL Oct 31 00:48:23.533082 containerd[1459]: time="2025-10-31T00:48:23.532959992Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 31 00:48:23.533309 containerd[1459]: time="2025-10-31T00:48:23.532990760Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 31 00:48:23.533499 kubelet[2502]: E1031 00:48:23.533446 2502 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 00:48:23.533564 kubelet[2502]: E1031 00:48:23.533512 2502 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 00:48:23.533806 kubelet[2502]: E1031 00:48:23.533739 2502 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jzh5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-cznnv_calico-system(d615dcdd-9217-4b99-9985-812be6d75b53): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 31 00:48:23.535016 kubelet[2502]: E1031 00:48:23.534950 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-cznnv" podUID="d615dcdd-9217-4b99-9985-812be6d75b53" Oct 31 00:48:23.876749 systemd-networkd[1388]: cali16ab57abb90: Gained IPv6LL Oct 31 00:48:24.213360 kubelet[2502]: E1031 00:48:24.213037 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:48:24.213360 kubelet[2502]: E1031 00:48:24.213206 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:48:24.214537 kubelet[2502]: E1031 00:48:24.214125 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cf7fddbf6-qfkg6" podUID="4a3f6669-a62a-42ec-9a82-372bbb7049fb" Oct 31 00:48:24.214537 kubelet[2502]: E1031 00:48:24.214456 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-cznnv" podUID="d615dcdd-9217-4b99-9985-812be6d75b53" Oct 31 00:48:24.535678 containerd[1459]: time="2025-10-31T00:48:24.535552951Z" level=info msg="StopPodSandbox for \"b876a09b7cea07fc5b5f0fb2dd5212854240663bab0c4ee0ca39a509b8f0cd1d\"" Oct 31 00:48:25.004347 systemd[1]: Started sshd@10-10.0.0.137:22-10.0.0.1:55844.service - OpenSSH per-connection server daemon (10.0.0.1:55844). Oct 31 00:48:25.073390 sshd[5103]: Accepted publickey for core from 10.0.0.1 port 55844 ssh2: RSA SHA256:cVXqL/AcZ9wouFvGoeGKDlBlR+czTkkJFN8I4b76Y5g Oct 31 00:48:25.075698 sshd[5103]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:48:25.080285 systemd-logind[1449]: New session 11 of user core. Oct 31 00:48:25.086543 systemd[1]: Started session-11.scope - Session 11 of User core. 
Oct 31 00:48:25.219763 kubelet[2502]: E1031 00:48:25.219726 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:48:25.223793 containerd[1459]: 2025-10-31 00:48:24.655 [WARNING][5079] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b876a09b7cea07fc5b5f0fb2dd5212854240663bab0c4ee0ca39a509b8f0cd1d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--8zcwq-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"f5a896d8-63fb-485d-b0d7-8486be09050d", ResourceVersion:"1148", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 47, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c96ee2823793d5350d5aa02ea4fa22660b9cbdf15c1fb6ed3b3c6ac35e267060", Pod:"coredns-674b8bbfcf-8zcwq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaef1cf7cd63", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:48:25.223793 containerd[1459]: 2025-10-31 00:48:24.655 [INFO][5079] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b876a09b7cea07fc5b5f0fb2dd5212854240663bab0c4ee0ca39a509b8f0cd1d" Oct 31 00:48:25.223793 containerd[1459]: 2025-10-31 00:48:24.655 [INFO][5079] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b876a09b7cea07fc5b5f0fb2dd5212854240663bab0c4ee0ca39a509b8f0cd1d" iface="eth0" netns="" Oct 31 00:48:25.223793 containerd[1459]: 2025-10-31 00:48:24.655 [INFO][5079] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b876a09b7cea07fc5b5f0fb2dd5212854240663bab0c4ee0ca39a509b8f0cd1d" Oct 31 00:48:25.223793 containerd[1459]: 2025-10-31 00:48:24.655 [INFO][5079] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b876a09b7cea07fc5b5f0fb2dd5212854240663bab0c4ee0ca39a509b8f0cd1d" Oct 31 00:48:25.223793 containerd[1459]: 2025-10-31 00:48:24.681 [INFO][5090] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b876a09b7cea07fc5b5f0fb2dd5212854240663bab0c4ee0ca39a509b8f0cd1d" HandleID="k8s-pod-network.b876a09b7cea07fc5b5f0fb2dd5212854240663bab0c4ee0ca39a509b8f0cd1d" Workload="localhost-k8s-coredns--674b8bbfcf--8zcwq-eth0" Oct 31 00:48:25.223793 containerd[1459]: 2025-10-31 00:48:24.681 [INFO][5090] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:48:25.223793 containerd[1459]: 2025-10-31 00:48:24.681 [INFO][5090] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:48:25.223793 containerd[1459]: 2025-10-31 00:48:25.186 [WARNING][5090] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="b876a09b7cea07fc5b5f0fb2dd5212854240663bab0c4ee0ca39a509b8f0cd1d" HandleID="k8s-pod-network.b876a09b7cea07fc5b5f0fb2dd5212854240663bab0c4ee0ca39a509b8f0cd1d" Workload="localhost-k8s-coredns--674b8bbfcf--8zcwq-eth0" Oct 31 00:48:25.223793 containerd[1459]: 2025-10-31 00:48:25.187 [INFO][5090] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b876a09b7cea07fc5b5f0fb2dd5212854240663bab0c4ee0ca39a509b8f0cd1d" HandleID="k8s-pod-network.b876a09b7cea07fc5b5f0fb2dd5212854240663bab0c4ee0ca39a509b8f0cd1d" Workload="localhost-k8s-coredns--674b8bbfcf--8zcwq-eth0" Oct 31 00:48:25.223793 containerd[1459]: 2025-10-31 00:48:25.213 [INFO][5090] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:48:25.223793 containerd[1459]: 2025-10-31 00:48:25.218 [INFO][5079] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b876a09b7cea07fc5b5f0fb2dd5212854240663bab0c4ee0ca39a509b8f0cd1d" Oct 31 00:48:25.223793 containerd[1459]: time="2025-10-31T00:48:25.223539034Z" level=info msg="TearDown network for sandbox \"b876a09b7cea07fc5b5f0fb2dd5212854240663bab0c4ee0ca39a509b8f0cd1d\" successfully" Oct 31 00:48:25.223793 containerd[1459]: time="2025-10-31T00:48:25.223576384Z" level=info msg="StopPodSandbox for \"b876a09b7cea07fc5b5f0fb2dd5212854240663bab0c4ee0ca39a509b8f0cd1d\" returns successfully" Oct 31 00:48:25.224496 containerd[1459]: time="2025-10-31T00:48:25.224300162Z" level=info msg="RemovePodSandbox for \"b876a09b7cea07fc5b5f0fb2dd5212854240663bab0c4ee0ca39a509b8f0cd1d\"" Oct 31 00:48:25.227359 containerd[1459]: time="2025-10-31T00:48:25.227313207Z" level=info msg="Forcibly stopping sandbox \"b876a09b7cea07fc5b5f0fb2dd5212854240663bab0c4ee0ca39a509b8f0cd1d\"" Oct 31 00:48:25.558649 sshd[5103]: pam_unix(sshd:session): session closed for user core Oct 31 00:48:25.563165 systemd[1]: sshd@10-10.0.0.137:22-10.0.0.1:55844.service: Deactivated successfully. Oct 31 00:48:25.565872 systemd[1]: session-11.scope: Deactivated successfully. Oct 31 00:48:25.566853 systemd-logind[1449]: Session 11 logged out. Waiting for processes to exit. Oct 31 00:48:25.567846 systemd-logind[1449]: Removed session 11. 
Oct 31 00:48:25.624110 containerd[1459]: 2025-10-31 00:48:25.529 [WARNING][5126] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b876a09b7cea07fc5b5f0fb2dd5212854240663bab0c4ee0ca39a509b8f0cd1d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--8zcwq-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"f5a896d8-63fb-485d-b0d7-8486be09050d", ResourceVersion:"1148", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 47, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c96ee2823793d5350d5aa02ea4fa22660b9cbdf15c1fb6ed3b3c6ac35e267060", Pod:"coredns-674b8bbfcf-8zcwq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaef1cf7cd63", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:48:25.624110 containerd[1459]: 2025-10-31 00:48:25.530 [INFO][5126] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b876a09b7cea07fc5b5f0fb2dd5212854240663bab0c4ee0ca39a509b8f0cd1d" Oct 31 00:48:25.624110 containerd[1459]: 2025-10-31 00:48:25.530 [INFO][5126] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b876a09b7cea07fc5b5f0fb2dd5212854240663bab0c4ee0ca39a509b8f0cd1d" iface="eth0" netns="" Oct 31 00:48:25.624110 containerd[1459]: 2025-10-31 00:48:25.530 [INFO][5126] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b876a09b7cea07fc5b5f0fb2dd5212854240663bab0c4ee0ca39a509b8f0cd1d" Oct 31 00:48:25.624110 containerd[1459]: 2025-10-31 00:48:25.530 [INFO][5126] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b876a09b7cea07fc5b5f0fb2dd5212854240663bab0c4ee0ca39a509b8f0cd1d" Oct 31 00:48:25.624110 containerd[1459]: 2025-10-31 00:48:25.554 [INFO][5135] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b876a09b7cea07fc5b5f0fb2dd5212854240663bab0c4ee0ca39a509b8f0cd1d" HandleID="k8s-pod-network.b876a09b7cea07fc5b5f0fb2dd5212854240663bab0c4ee0ca39a509b8f0cd1d" Workload="localhost-k8s-coredns--674b8bbfcf--8zcwq-eth0" Oct 31 00:48:25.624110 containerd[1459]: 2025-10-31 00:48:25.554 [INFO][5135] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Oct 31 00:48:25.624110 containerd[1459]: 2025-10-31 00:48:25.554 [INFO][5135] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:48:25.624110 containerd[1459]: 2025-10-31 00:48:25.575 [WARNING][5135] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="b876a09b7cea07fc5b5f0fb2dd5212854240663bab0c4ee0ca39a509b8f0cd1d" HandleID="k8s-pod-network.b876a09b7cea07fc5b5f0fb2dd5212854240663bab0c4ee0ca39a509b8f0cd1d" Workload="localhost-k8s-coredns--674b8bbfcf--8zcwq-eth0" Oct 31 00:48:25.624110 containerd[1459]: 2025-10-31 00:48:25.575 [INFO][5135] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b876a09b7cea07fc5b5f0fb2dd5212854240663bab0c4ee0ca39a509b8f0cd1d" HandleID="k8s-pod-network.b876a09b7cea07fc5b5f0fb2dd5212854240663bab0c4ee0ca39a509b8f0cd1d" Workload="localhost-k8s-coredns--674b8bbfcf--8zcwq-eth0" Oct 31 00:48:25.624110 containerd[1459]: 2025-10-31 00:48:25.617 [INFO][5135] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:48:25.624110 containerd[1459]: 2025-10-31 00:48:25.620 [INFO][5126] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b876a09b7cea07fc5b5f0fb2dd5212854240663bab0c4ee0ca39a509b8f0cd1d" Oct 31 00:48:25.625066 containerd[1459]: time="2025-10-31T00:48:25.624188129Z" level=info msg="TearDown network for sandbox \"b876a09b7cea07fc5b5f0fb2dd5212854240663bab0c4ee0ca39a509b8f0cd1d\" successfully" Oct 31 00:48:25.746474 containerd[1459]: time="2025-10-31T00:48:25.746351771Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b876a09b7cea07fc5b5f0fb2dd5212854240663bab0c4ee0ca39a509b8f0cd1d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 31 00:48:25.746474 containerd[1459]: time="2025-10-31T00:48:25.746484259Z" level=info msg="RemovePodSandbox \"b876a09b7cea07fc5b5f0fb2dd5212854240663bab0c4ee0ca39a509b8f0cd1d\" returns successfully" Oct 31 00:48:25.747925 containerd[1459]: time="2025-10-31T00:48:25.747354523Z" level=info msg="StopPodSandbox for \"08372b32327ed178fb06ac43eaaaf9ff3ba1e8b63c63cfd0410a9890096b1ea2\"" Oct 31 00:48:26.101485 containerd[1459]: 2025-10-31 00:48:25.982 [WARNING][5156] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="08372b32327ed178fb06ac43eaaaf9ff3ba1e8b63c63cfd0410a9890096b1ea2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--47d96-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"a6a2171a-de8b-4154-86b8-cb6aefca8e5b", ResourceVersion:"1080", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 47, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3a3a3521f4f921e9f773d836ef01034ff392c4e4693af30d95823e2ae7c2aae5", Pod:"goldmane-666569f655-47d96", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4ac12189205", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:48:26.101485 containerd[1459]: 2025-10-31 00:48:25.982 [INFO][5156] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="08372b32327ed178fb06ac43eaaaf9ff3ba1e8b63c63cfd0410a9890096b1ea2" Oct 31 00:48:26.101485 containerd[1459]: 2025-10-31 00:48:25.983 [INFO][5156] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="08372b32327ed178fb06ac43eaaaf9ff3ba1e8b63c63cfd0410a9890096b1ea2" iface="eth0" netns="" Oct 31 00:48:26.101485 containerd[1459]: 2025-10-31 00:48:25.983 [INFO][5156] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="08372b32327ed178fb06ac43eaaaf9ff3ba1e8b63c63cfd0410a9890096b1ea2" Oct 31 00:48:26.101485 containerd[1459]: 2025-10-31 00:48:25.983 [INFO][5156] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="08372b32327ed178fb06ac43eaaaf9ff3ba1e8b63c63cfd0410a9890096b1ea2" Oct 31 00:48:26.101485 containerd[1459]: 2025-10-31 00:48:26.004 [INFO][5164] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="08372b32327ed178fb06ac43eaaaf9ff3ba1e8b63c63cfd0410a9890096b1ea2" HandleID="k8s-pod-network.08372b32327ed178fb06ac43eaaaf9ff3ba1e8b63c63cfd0410a9890096b1ea2" Workload="localhost-k8s-goldmane--666569f655--47d96-eth0" Oct 31 00:48:26.101485 containerd[1459]: 2025-10-31 00:48:26.004 [INFO][5164] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:48:26.101485 containerd[1459]: 2025-10-31 00:48:26.004 [INFO][5164] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:48:26.101485 containerd[1459]: 2025-10-31 00:48:26.093 [WARNING][5164] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="08372b32327ed178fb06ac43eaaaf9ff3ba1e8b63c63cfd0410a9890096b1ea2" HandleID="k8s-pod-network.08372b32327ed178fb06ac43eaaaf9ff3ba1e8b63c63cfd0410a9890096b1ea2" Workload="localhost-k8s-goldmane--666569f655--47d96-eth0" Oct 31 00:48:26.101485 containerd[1459]: 2025-10-31 00:48:26.094 [INFO][5164] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="08372b32327ed178fb06ac43eaaaf9ff3ba1e8b63c63cfd0410a9890096b1ea2" HandleID="k8s-pod-network.08372b32327ed178fb06ac43eaaaf9ff3ba1e8b63c63cfd0410a9890096b1ea2" Workload="localhost-k8s-goldmane--666569f655--47d96-eth0" Oct 31 00:48:26.101485 containerd[1459]: 2025-10-31 00:48:26.096 [INFO][5164] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:48:26.101485 containerd[1459]: 2025-10-31 00:48:26.098 [INFO][5156] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="08372b32327ed178fb06ac43eaaaf9ff3ba1e8b63c63cfd0410a9890096b1ea2" Oct 31 00:48:26.102044 containerd[1459]: time="2025-10-31T00:48:26.101557954Z" level=info msg="TearDown network for sandbox \"08372b32327ed178fb06ac43eaaaf9ff3ba1e8b63c63cfd0410a9890096b1ea2\" successfully" Oct 31 00:48:26.102044 containerd[1459]: time="2025-10-31T00:48:26.101595765Z" level=info msg="StopPodSandbox for \"08372b32327ed178fb06ac43eaaaf9ff3ba1e8b63c63cfd0410a9890096b1ea2\" returns successfully" Oct 31 00:48:26.102251 containerd[1459]: time="2025-10-31T00:48:26.102196031Z" level=info msg="RemovePodSandbox for \"08372b32327ed178fb06ac43eaaaf9ff3ba1e8b63c63cfd0410a9890096b1ea2\"" Oct 31 00:48:26.102292 containerd[1459]: time="2025-10-31T00:48:26.102256956Z" level=info msg="Forcibly stopping sandbox \"08372b32327ed178fb06ac43eaaaf9ff3ba1e8b63c63cfd0410a9890096b1ea2\"" Oct 31 00:48:26.308669 containerd[1459]: 2025-10-31 00:48:26.139 [WARNING][5182] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="08372b32327ed178fb06ac43eaaaf9ff3ba1e8b63c63cfd0410a9890096b1ea2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--47d96-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"a6a2171a-de8b-4154-86b8-cb6aefca8e5b", ResourceVersion:"1080", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 47, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3a3a3521f4f921e9f773d836ef01034ff392c4e4693af30d95823e2ae7c2aae5", Pod:"goldmane-666569f655-47d96", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4ac12189205", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:48:26.308669 containerd[1459]: 2025-10-31 00:48:26.140 [INFO][5182] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="08372b32327ed178fb06ac43eaaaf9ff3ba1e8b63c63cfd0410a9890096b1ea2" Oct 31 00:48:26.308669 containerd[1459]: 2025-10-31 00:48:26.140 [INFO][5182] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="08372b32327ed178fb06ac43eaaaf9ff3ba1e8b63c63cfd0410a9890096b1ea2" iface="eth0" netns="" Oct 31 00:48:26.308669 containerd[1459]: 2025-10-31 00:48:26.140 [INFO][5182] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="08372b32327ed178fb06ac43eaaaf9ff3ba1e8b63c63cfd0410a9890096b1ea2" Oct 31 00:48:26.308669 containerd[1459]: 2025-10-31 00:48:26.140 [INFO][5182] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="08372b32327ed178fb06ac43eaaaf9ff3ba1e8b63c63cfd0410a9890096b1ea2" Oct 31 00:48:26.308669 containerd[1459]: 2025-10-31 00:48:26.160 [INFO][5191] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="08372b32327ed178fb06ac43eaaaf9ff3ba1e8b63c63cfd0410a9890096b1ea2" HandleID="k8s-pod-network.08372b32327ed178fb06ac43eaaaf9ff3ba1e8b63c63cfd0410a9890096b1ea2" Workload="localhost-k8s-goldmane--666569f655--47d96-eth0" Oct 31 00:48:26.308669 containerd[1459]: 2025-10-31 00:48:26.161 [INFO][5191] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:48:26.308669 containerd[1459]: 2025-10-31 00:48:26.161 [INFO][5191] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:48:26.308669 containerd[1459]: 2025-10-31 00:48:26.299 [WARNING][5191] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="08372b32327ed178fb06ac43eaaaf9ff3ba1e8b63c63cfd0410a9890096b1ea2" HandleID="k8s-pod-network.08372b32327ed178fb06ac43eaaaf9ff3ba1e8b63c63cfd0410a9890096b1ea2" Workload="localhost-k8s-goldmane--666569f655--47d96-eth0" Oct 31 00:48:26.308669 containerd[1459]: 2025-10-31 00:48:26.299 [INFO][5191] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="08372b32327ed178fb06ac43eaaaf9ff3ba1e8b63c63cfd0410a9890096b1ea2" HandleID="k8s-pod-network.08372b32327ed178fb06ac43eaaaf9ff3ba1e8b63c63cfd0410a9890096b1ea2" Workload="localhost-k8s-goldmane--666569f655--47d96-eth0" Oct 31 00:48:26.308669 containerd[1459]: 2025-10-31 00:48:26.302 [INFO][5191] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:48:26.308669 containerd[1459]: 2025-10-31 00:48:26.305 [INFO][5182] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="08372b32327ed178fb06ac43eaaaf9ff3ba1e8b63c63cfd0410a9890096b1ea2" Oct 31 00:48:26.309215 containerd[1459]: time="2025-10-31T00:48:26.308675719Z" level=info msg="TearDown network for sandbox \"08372b32327ed178fb06ac43eaaaf9ff3ba1e8b63c63cfd0410a9890096b1ea2\" successfully" Oct 31 00:48:26.315345 containerd[1459]: time="2025-10-31T00:48:26.314187281Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"08372b32327ed178fb06ac43eaaaf9ff3ba1e8b63c63cfd0410a9890096b1ea2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 31 00:48:26.315345 containerd[1459]: time="2025-10-31T00:48:26.314285265Z" level=info msg="RemovePodSandbox \"08372b32327ed178fb06ac43eaaaf9ff3ba1e8b63c63cfd0410a9890096b1ea2\" returns successfully" Oct 31 00:48:26.315345 containerd[1459]: time="2025-10-31T00:48:26.315053096Z" level=info msg="StopPodSandbox for \"fac503fc3e01ef5dc21369d30d6a7b813e3105bd16721a0d618cbd9ef280f493\"" Oct 31 00:48:26.400577 containerd[1459]: 2025-10-31 00:48:26.358 [WARNING][5209] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="fac503fc3e01ef5dc21369d30d6a7b813e3105bd16721a0d618cbd9ef280f493" WorkloadEndpoint="localhost-k8s-whisker--5ddbb6d7b7--2kd7r-eth0" Oct 31 00:48:26.400577 containerd[1459]: 2025-10-31 00:48:26.358 [INFO][5209] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fac503fc3e01ef5dc21369d30d6a7b813e3105bd16721a0d618cbd9ef280f493" Oct 31 00:48:26.400577 containerd[1459]: 2025-10-31 00:48:26.358 [INFO][5209] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="fac503fc3e01ef5dc21369d30d6a7b813e3105bd16721a0d618cbd9ef280f493" iface="eth0" netns="" Oct 31 00:48:26.400577 containerd[1459]: 2025-10-31 00:48:26.358 [INFO][5209] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fac503fc3e01ef5dc21369d30d6a7b813e3105bd16721a0d618cbd9ef280f493" Oct 31 00:48:26.400577 containerd[1459]: 2025-10-31 00:48:26.358 [INFO][5209] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fac503fc3e01ef5dc21369d30d6a7b813e3105bd16721a0d618cbd9ef280f493" Oct 31 00:48:26.400577 containerd[1459]: 2025-10-31 00:48:26.383 [INFO][5217] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fac503fc3e01ef5dc21369d30d6a7b813e3105bd16721a0d618cbd9ef280f493" HandleID="k8s-pod-network.fac503fc3e01ef5dc21369d30d6a7b813e3105bd16721a0d618cbd9ef280f493" Workload="localhost-k8s-whisker--5ddbb6d7b7--2kd7r-eth0" Oct 31 00:48:26.400577 containerd[1459]: 2025-10-31 00:48:26.384 [INFO][5217] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:48:26.400577 containerd[1459]: 2025-10-31 00:48:26.384 [INFO][5217] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:48:26.400577 containerd[1459]: 2025-10-31 00:48:26.392 [WARNING][5217] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="fac503fc3e01ef5dc21369d30d6a7b813e3105bd16721a0d618cbd9ef280f493" HandleID="k8s-pod-network.fac503fc3e01ef5dc21369d30d6a7b813e3105bd16721a0d618cbd9ef280f493" Workload="localhost-k8s-whisker--5ddbb6d7b7--2kd7r-eth0" Oct 31 00:48:26.400577 containerd[1459]: 2025-10-31 00:48:26.392 [INFO][5217] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fac503fc3e01ef5dc21369d30d6a7b813e3105bd16721a0d618cbd9ef280f493" HandleID="k8s-pod-network.fac503fc3e01ef5dc21369d30d6a7b813e3105bd16721a0d618cbd9ef280f493" Workload="localhost-k8s-whisker--5ddbb6d7b7--2kd7r-eth0" Oct 31 00:48:26.400577 containerd[1459]: 2025-10-31 00:48:26.394 [INFO][5217] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:48:26.400577 containerd[1459]: 2025-10-31 00:48:26.397 [INFO][5209] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="fac503fc3e01ef5dc21369d30d6a7b813e3105bd16721a0d618cbd9ef280f493" Oct 31 00:48:26.401072 containerd[1459]: time="2025-10-31T00:48:26.400638281Z" level=info msg="TearDown network for sandbox \"fac503fc3e01ef5dc21369d30d6a7b813e3105bd16721a0d618cbd9ef280f493\" successfully" Oct 31 00:48:26.401072 containerd[1459]: time="2025-10-31T00:48:26.400675892Z" level=info msg="StopPodSandbox for \"fac503fc3e01ef5dc21369d30d6a7b813e3105bd16721a0d618cbd9ef280f493\" returns successfully" Oct 31 00:48:26.403427 containerd[1459]: time="2025-10-31T00:48:26.401575270Z" level=info msg="RemovePodSandbox for \"fac503fc3e01ef5dc21369d30d6a7b813e3105bd16721a0d618cbd9ef280f493\"" Oct 31 00:48:26.403427 containerd[1459]: time="2025-10-31T00:48:26.401618621Z" level=info msg="Forcibly stopping sandbox \"fac503fc3e01ef5dc21369d30d6a7b813e3105bd16721a0d618cbd9ef280f493\"" Oct 31 00:48:26.504650 containerd[1459]: 2025-10-31 00:48:26.456 [WARNING][5235] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="fac503fc3e01ef5dc21369d30d6a7b813e3105bd16721a0d618cbd9ef280f493" WorkloadEndpoint="localhost-k8s-whisker--5ddbb6d7b7--2kd7r-eth0" Oct 31 00:48:26.504650 containerd[1459]: 2025-10-31 00:48:26.456 [INFO][5235] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fac503fc3e01ef5dc21369d30d6a7b813e3105bd16721a0d618cbd9ef280f493" Oct 31 00:48:26.504650 containerd[1459]: 2025-10-31 00:48:26.456 [INFO][5235] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fac503fc3e01ef5dc21369d30d6a7b813e3105bd16721a0d618cbd9ef280f493" iface="eth0" netns="" Oct 31 00:48:26.504650 containerd[1459]: 2025-10-31 00:48:26.456 [INFO][5235] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fac503fc3e01ef5dc21369d30d6a7b813e3105bd16721a0d618cbd9ef280f493" Oct 31 00:48:26.504650 containerd[1459]: 2025-10-31 00:48:26.456 [INFO][5235] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fac503fc3e01ef5dc21369d30d6a7b813e3105bd16721a0d618cbd9ef280f493" Oct 31 00:48:26.504650 containerd[1459]: 2025-10-31 00:48:26.488 [INFO][5244] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fac503fc3e01ef5dc21369d30d6a7b813e3105bd16721a0d618cbd9ef280f493" HandleID="k8s-pod-network.fac503fc3e01ef5dc21369d30d6a7b813e3105bd16721a0d618cbd9ef280f493" Workload="localhost-k8s-whisker--5ddbb6d7b7--2kd7r-eth0" Oct 31 00:48:26.504650 containerd[1459]: 2025-10-31 00:48:26.488 [INFO][5244] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:48:26.504650 containerd[1459]: 2025-10-31 00:48:26.488 [INFO][5244] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:48:26.504650 containerd[1459]: 2025-10-31 00:48:26.496 [WARNING][5244] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fac503fc3e01ef5dc21369d30d6a7b813e3105bd16721a0d618cbd9ef280f493" HandleID="k8s-pod-network.fac503fc3e01ef5dc21369d30d6a7b813e3105bd16721a0d618cbd9ef280f493" Workload="localhost-k8s-whisker--5ddbb6d7b7--2kd7r-eth0" Oct 31 00:48:26.504650 containerd[1459]: 2025-10-31 00:48:26.496 [INFO][5244] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fac503fc3e01ef5dc21369d30d6a7b813e3105bd16721a0d618cbd9ef280f493" HandleID="k8s-pod-network.fac503fc3e01ef5dc21369d30d6a7b813e3105bd16721a0d618cbd9ef280f493" Workload="localhost-k8s-whisker--5ddbb6d7b7--2kd7r-eth0" Oct 31 00:48:26.504650 containerd[1459]: 2025-10-31 00:48:26.497 [INFO][5244] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:48:26.504650 containerd[1459]: 2025-10-31 00:48:26.501 [INFO][5235] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fac503fc3e01ef5dc21369d30d6a7b813e3105bd16721a0d618cbd9ef280f493" Oct 31 00:48:26.505201 containerd[1459]: time="2025-10-31T00:48:26.504710286Z" level=info msg="TearDown network for sandbox \"fac503fc3e01ef5dc21369d30d6a7b813e3105bd16721a0d618cbd9ef280f493\" successfully" Oct 31 00:48:26.510288 containerd[1459]: time="2025-10-31T00:48:26.510147389Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fac503fc3e01ef5dc21369d30d6a7b813e3105bd16721a0d618cbd9ef280f493\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 31 00:48:26.510288 containerd[1459]: time="2025-10-31T00:48:26.510258046Z" level=info msg="RemovePodSandbox \"fac503fc3e01ef5dc21369d30d6a7b813e3105bd16721a0d618cbd9ef280f493\" returns successfully" Oct 31 00:48:26.511163 containerd[1459]: time="2025-10-31T00:48:26.511094536Z" level=info msg="StopPodSandbox for \"1e2381349217450f97fa9b9526a3392c8da38ebba799a9c84d3c5e2334381dfc\"" Oct 31 00:48:26.609275 containerd[1459]: 2025-10-31 00:48:26.560 [WARNING][5262] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1e2381349217450f97fa9b9526a3392c8da38ebba799a9c84d3c5e2334381dfc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--h9lbq-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4dc3fd13-e452-47cf-9e08-a4a9c785070b", ResourceVersion:"1159", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 47, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d8e632126b002720a4587a3333eb0d40dc1584eed1cacbf7eb047facf4652d28", Pod:"coredns-674b8bbfcf-h9lbq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali551c2e6172a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:48:26.609275 containerd[1459]: 2025-10-31 00:48:26.561 [INFO][5262] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1e2381349217450f97fa9b9526a3392c8da38ebba799a9c84d3c5e2334381dfc" Oct 31 00:48:26.609275 containerd[1459]: 2025-10-31 00:48:26.561 [INFO][5262] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1e2381349217450f97fa9b9526a3392c8da38ebba799a9c84d3c5e2334381dfc" iface="eth0" netns="" Oct 31 00:48:26.609275 containerd[1459]: 2025-10-31 00:48:26.561 [INFO][5262] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1e2381349217450f97fa9b9526a3392c8da38ebba799a9c84d3c5e2334381dfc" Oct 31 00:48:26.609275 containerd[1459]: 2025-10-31 00:48:26.561 [INFO][5262] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1e2381349217450f97fa9b9526a3392c8da38ebba799a9c84d3c5e2334381dfc" Oct 31 00:48:26.609275 containerd[1459]: 2025-10-31 00:48:26.592 [INFO][5271] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1e2381349217450f97fa9b9526a3392c8da38ebba799a9c84d3c5e2334381dfc" HandleID="k8s-pod-network.1e2381349217450f97fa9b9526a3392c8da38ebba799a9c84d3c5e2334381dfc" Workload="localhost-k8s-coredns--674b8bbfcf--h9lbq-eth0" Oct 31 00:48:26.609275 containerd[1459]: 2025-10-31 00:48:26.593 [INFO][5271] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:48:26.609275 containerd[1459]: 2025-10-31 00:48:26.593 [INFO][5271] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 00:48:26.609275 containerd[1459]: 2025-10-31 00:48:26.599 [WARNING][5271] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="1e2381349217450f97fa9b9526a3392c8da38ebba799a9c84d3c5e2334381dfc" HandleID="k8s-pod-network.1e2381349217450f97fa9b9526a3392c8da38ebba799a9c84d3c5e2334381dfc" Workload="localhost-k8s-coredns--674b8bbfcf--h9lbq-eth0" Oct 31 00:48:26.609275 containerd[1459]: 2025-10-31 00:48:26.599 [INFO][5271] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1e2381349217450f97fa9b9526a3392c8da38ebba799a9c84d3c5e2334381dfc" HandleID="k8s-pod-network.1e2381349217450f97fa9b9526a3392c8da38ebba799a9c84d3c5e2334381dfc" Workload="localhost-k8s-coredns--674b8bbfcf--h9lbq-eth0" Oct 31 00:48:26.609275 containerd[1459]: 2025-10-31 00:48:26.601 [INFO][5271] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:48:26.609275 containerd[1459]: 2025-10-31 00:48:26.604 [INFO][5262] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1e2381349217450f97fa9b9526a3392c8da38ebba799a9c84d3c5e2334381dfc" Oct 31 00:48:26.610046 containerd[1459]: time="2025-10-31T00:48:26.609300584Z" level=info msg="TearDown network for sandbox \"1e2381349217450f97fa9b9526a3392c8da38ebba799a9c84d3c5e2334381dfc\" successfully" Oct 31 00:48:26.610046 containerd[1459]: time="2025-10-31T00:48:26.609335530Z" level=info msg="StopPodSandbox for \"1e2381349217450f97fa9b9526a3392c8da38ebba799a9c84d3c5e2334381dfc\" returns successfully" Oct 31 00:48:26.610046 containerd[1459]: time="2025-10-31T00:48:26.609905560Z" level=info msg="RemovePodSandbox for \"1e2381349217450f97fa9b9526a3392c8da38ebba799a9c84d3c5e2334381dfc\"" Oct 31 00:48:26.610046 containerd[1459]: time="2025-10-31T00:48:26.609937039Z" level=info msg="Forcibly stopping sandbox \"1e2381349217450f97fa9b9526a3392c8da38ebba799a9c84d3c5e2334381dfc\"" Oct 31 00:48:26.687518 containerd[1459]: 2025-10-31 00:48:26.650 [WARNING][5290] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1e2381349217450f97fa9b9526a3392c8da38ebba799a9c84d3c5e2334381dfc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--h9lbq-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4dc3fd13-e452-47cf-9e08-a4a9c785070b", ResourceVersion:"1159", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 47, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d8e632126b002720a4587a3333eb0d40dc1584eed1cacbf7eb047facf4652d28", Pod:"coredns-674b8bbfcf-h9lbq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali551c2e6172a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:48:26.687518 containerd[1459]: 2025-10-31 00:48:26.651 [INFO][5290] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1e2381349217450f97fa9b9526a3392c8da38ebba799a9c84d3c5e2334381dfc" Oct 31 00:48:26.687518 containerd[1459]: 2025-10-31 00:48:26.651 [INFO][5290] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1e2381349217450f97fa9b9526a3392c8da38ebba799a9c84d3c5e2334381dfc" iface="eth0" netns="" Oct 31 00:48:26.687518 containerd[1459]: 2025-10-31 00:48:26.651 [INFO][5290] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1e2381349217450f97fa9b9526a3392c8da38ebba799a9c84d3c5e2334381dfc" Oct 31 00:48:26.687518 containerd[1459]: 2025-10-31 00:48:26.651 [INFO][5290] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1e2381349217450f97fa9b9526a3392c8da38ebba799a9c84d3c5e2334381dfc" Oct 31 00:48:26.687518 containerd[1459]: 2025-10-31 00:48:26.673 [INFO][5299] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1e2381349217450f97fa9b9526a3392c8da38ebba799a9c84d3c5e2334381dfc" HandleID="k8s-pod-network.1e2381349217450f97fa9b9526a3392c8da38ebba799a9c84d3c5e2334381dfc" Workload="localhost-k8s-coredns--674b8bbfcf--h9lbq-eth0" Oct 31 00:48:26.687518 containerd[1459]: 2025-10-31 00:48:26.673 [INFO][5299] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:48:26.687518 containerd[1459]: 2025-10-31 00:48:26.673 [INFO][5299] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 00:48:26.687518 containerd[1459]: 2025-10-31 00:48:26.680 [WARNING][5299] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="1e2381349217450f97fa9b9526a3392c8da38ebba799a9c84d3c5e2334381dfc" HandleID="k8s-pod-network.1e2381349217450f97fa9b9526a3392c8da38ebba799a9c84d3c5e2334381dfc" Workload="localhost-k8s-coredns--674b8bbfcf--h9lbq-eth0" Oct 31 00:48:26.687518 containerd[1459]: 2025-10-31 00:48:26.680 [INFO][5299] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1e2381349217450f97fa9b9526a3392c8da38ebba799a9c84d3c5e2334381dfc" HandleID="k8s-pod-network.1e2381349217450f97fa9b9526a3392c8da38ebba799a9c84d3c5e2334381dfc" Workload="localhost-k8s-coredns--674b8bbfcf--h9lbq-eth0" Oct 31 00:48:26.687518 containerd[1459]: 2025-10-31 00:48:26.681 [INFO][5299] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:48:26.687518 containerd[1459]: 2025-10-31 00:48:26.684 [INFO][5290] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1e2381349217450f97fa9b9526a3392c8da38ebba799a9c84d3c5e2334381dfc" Oct 31 00:48:26.687518 containerd[1459]: time="2025-10-31T00:48:26.687493409Z" level=info msg="TearDown network for sandbox \"1e2381349217450f97fa9b9526a3392c8da38ebba799a9c84d3c5e2334381dfc\" successfully" Oct 31 00:48:26.756119 containerd[1459]: time="2025-10-31T00:48:26.755373011Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1e2381349217450f97fa9b9526a3392c8da38ebba799a9c84d3c5e2334381dfc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 31 00:48:26.756119 containerd[1459]: time="2025-10-31T00:48:26.755746703Z" level=info msg="RemovePodSandbox \"1e2381349217450f97fa9b9526a3392c8da38ebba799a9c84d3c5e2334381dfc\" returns successfully" Oct 31 00:48:26.756544 containerd[1459]: time="2025-10-31T00:48:26.756500347Z" level=info msg="StopPodSandbox for \"4917309cf223abb7cb4a0df29d09aafee31e2dfd8f0ad9af9775d5339b749de3\"" Oct 31 00:48:26.843660 containerd[1459]: 2025-10-31 00:48:26.801 [WARNING][5318] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4917309cf223abb7cb4a0df29d09aafee31e2dfd8f0ad9af9775d5339b749de3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cf7fddbf6--nr7tc-eth0", GenerateName:"calico-apiserver-7cf7fddbf6-", Namespace:"calico-apiserver", SelfLink:"", UID:"f0ebaf56-bc9f-4f20-80ce-c5c77074a573", ResourceVersion:"1114", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 47, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cf7fddbf6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b2dd691cffe744167b4e5d7d916f861f1497c1d247387ab91b81ff4b892c8356", Pod:"calico-apiserver-7cf7fddbf6-nr7tc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali59f19d12fea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:48:26.843660 containerd[1459]: 2025-10-31 00:48:26.801 [INFO][5318] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4917309cf223abb7cb4a0df29d09aafee31e2dfd8f0ad9af9775d5339b749de3" Oct 31 00:48:26.843660 containerd[1459]: 2025-10-31 00:48:26.801 [INFO][5318] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4917309cf223abb7cb4a0df29d09aafee31e2dfd8f0ad9af9775d5339b749de3" iface="eth0" netns="" Oct 31 00:48:26.843660 containerd[1459]: 2025-10-31 00:48:26.801 [INFO][5318] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4917309cf223abb7cb4a0df29d09aafee31e2dfd8f0ad9af9775d5339b749de3" Oct 31 00:48:26.843660 containerd[1459]: 2025-10-31 00:48:26.801 [INFO][5318] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4917309cf223abb7cb4a0df29d09aafee31e2dfd8f0ad9af9775d5339b749de3" Oct 31 00:48:26.843660 containerd[1459]: 2025-10-31 00:48:26.827 [INFO][5328] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4917309cf223abb7cb4a0df29d09aafee31e2dfd8f0ad9af9775d5339b749de3" HandleID="k8s-pod-network.4917309cf223abb7cb4a0df29d09aafee31e2dfd8f0ad9af9775d5339b749de3" Workload="localhost-k8s-calico--apiserver--7cf7fddbf6--nr7tc-eth0" Oct 31 00:48:26.843660 containerd[1459]: 2025-10-31 00:48:26.827 [INFO][5328] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:48:26.843660 containerd[1459]: 2025-10-31 00:48:26.827 [INFO][5328] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:48:26.843660 containerd[1459]: 2025-10-31 00:48:26.834 [WARNING][5328] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4917309cf223abb7cb4a0df29d09aafee31e2dfd8f0ad9af9775d5339b749de3" HandleID="k8s-pod-network.4917309cf223abb7cb4a0df29d09aafee31e2dfd8f0ad9af9775d5339b749de3" Workload="localhost-k8s-calico--apiserver--7cf7fddbf6--nr7tc-eth0" Oct 31 00:48:26.843660 containerd[1459]: 2025-10-31 00:48:26.834 [INFO][5328] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4917309cf223abb7cb4a0df29d09aafee31e2dfd8f0ad9af9775d5339b749de3" HandleID="k8s-pod-network.4917309cf223abb7cb4a0df29d09aafee31e2dfd8f0ad9af9775d5339b749de3" Workload="localhost-k8s-calico--apiserver--7cf7fddbf6--nr7tc-eth0" Oct 31 00:48:26.843660 containerd[1459]: 2025-10-31 00:48:26.837 [INFO][5328] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:48:26.843660 containerd[1459]: 2025-10-31 00:48:26.840 [INFO][5318] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4917309cf223abb7cb4a0df29d09aafee31e2dfd8f0ad9af9775d5339b749de3" Oct 31 00:48:26.844232 containerd[1459]: time="2025-10-31T00:48:26.843718406Z" level=info msg="TearDown network for sandbox \"4917309cf223abb7cb4a0df29d09aafee31e2dfd8f0ad9af9775d5339b749de3\" successfully" Oct 31 00:48:26.844232 containerd[1459]: time="2025-10-31T00:48:26.843761747Z" level=info msg="StopPodSandbox for \"4917309cf223abb7cb4a0df29d09aafee31e2dfd8f0ad9af9775d5339b749de3\" returns successfully" Oct 31 00:48:26.845234 containerd[1459]: time="2025-10-31T00:48:26.845185990Z" level=info msg="RemovePodSandbox for \"4917309cf223abb7cb4a0df29d09aafee31e2dfd8f0ad9af9775d5339b749de3\"" Oct 31 00:48:26.845308 containerd[1459]: time="2025-10-31T00:48:26.845246894Z" level=info msg="Forcibly stopping sandbox \"4917309cf223abb7cb4a0df29d09aafee31e2dfd8f0ad9af9775d5339b749de3\"" Oct 31 00:48:26.941756 containerd[1459]: 2025-10-31 00:48:26.896 [WARNING][5346] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4917309cf223abb7cb4a0df29d09aafee31e2dfd8f0ad9af9775d5339b749de3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cf7fddbf6--nr7tc-eth0", GenerateName:"calico-apiserver-7cf7fddbf6-", Namespace:"calico-apiserver", SelfLink:"", UID:"f0ebaf56-bc9f-4f20-80ce-c5c77074a573", ResourceVersion:"1114", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 47, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cf7fddbf6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b2dd691cffe744167b4e5d7d916f861f1497c1d247387ab91b81ff4b892c8356", Pod:"calico-apiserver-7cf7fddbf6-nr7tc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali59f19d12fea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:48:26.941756 containerd[1459]: 2025-10-31 00:48:26.896 [INFO][5346] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4917309cf223abb7cb4a0df29d09aafee31e2dfd8f0ad9af9775d5339b749de3" Oct 31 00:48:26.941756 containerd[1459]: 2025-10-31 00:48:26.896 [INFO][5346] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4917309cf223abb7cb4a0df29d09aafee31e2dfd8f0ad9af9775d5339b749de3" iface="eth0" netns="" Oct 31 00:48:26.941756 containerd[1459]: 2025-10-31 00:48:26.896 [INFO][5346] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4917309cf223abb7cb4a0df29d09aafee31e2dfd8f0ad9af9775d5339b749de3" Oct 31 00:48:26.941756 containerd[1459]: 2025-10-31 00:48:26.896 [INFO][5346] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4917309cf223abb7cb4a0df29d09aafee31e2dfd8f0ad9af9775d5339b749de3" Oct 31 00:48:26.941756 containerd[1459]: 2025-10-31 00:48:26.923 [INFO][5355] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4917309cf223abb7cb4a0df29d09aafee31e2dfd8f0ad9af9775d5339b749de3" HandleID="k8s-pod-network.4917309cf223abb7cb4a0df29d09aafee31e2dfd8f0ad9af9775d5339b749de3" Workload="localhost-k8s-calico--apiserver--7cf7fddbf6--nr7tc-eth0" Oct 31 00:48:26.941756 containerd[1459]: 2025-10-31 00:48:26.923 [INFO][5355] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:48:26.941756 containerd[1459]: 2025-10-31 00:48:26.923 [INFO][5355] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:48:26.941756 containerd[1459]: 2025-10-31 00:48:26.932 [WARNING][5355] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4917309cf223abb7cb4a0df29d09aafee31e2dfd8f0ad9af9775d5339b749de3" HandleID="k8s-pod-network.4917309cf223abb7cb4a0df29d09aafee31e2dfd8f0ad9af9775d5339b749de3" Workload="localhost-k8s-calico--apiserver--7cf7fddbf6--nr7tc-eth0" Oct 31 00:48:26.941756 containerd[1459]: 2025-10-31 00:48:26.933 [INFO][5355] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4917309cf223abb7cb4a0df29d09aafee31e2dfd8f0ad9af9775d5339b749de3" HandleID="k8s-pod-network.4917309cf223abb7cb4a0df29d09aafee31e2dfd8f0ad9af9775d5339b749de3" Workload="localhost-k8s-calico--apiserver--7cf7fddbf6--nr7tc-eth0" Oct 31 00:48:26.941756 containerd[1459]: 2025-10-31 00:48:26.935 [INFO][5355] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:48:26.941756 containerd[1459]: 2025-10-31 00:48:26.938 [INFO][5346] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4917309cf223abb7cb4a0df29d09aafee31e2dfd8f0ad9af9775d5339b749de3" Oct 31 00:48:26.941756 containerd[1459]: time="2025-10-31T00:48:26.941695996Z" level=info msg="TearDown network for sandbox \"4917309cf223abb7cb4a0df29d09aafee31e2dfd8f0ad9af9775d5339b749de3\" successfully" Oct 31 00:48:26.998309 containerd[1459]: time="2025-10-31T00:48:26.998232183Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4917309cf223abb7cb4a0df29d09aafee31e2dfd8f0ad9af9775d5339b749de3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 31 00:48:26.998485 containerd[1459]: time="2025-10-31T00:48:26.998372847Z" level=info msg="RemovePodSandbox \"4917309cf223abb7cb4a0df29d09aafee31e2dfd8f0ad9af9775d5339b749de3\" returns successfully" Oct 31 00:48:26.999108 containerd[1459]: time="2025-10-31T00:48:26.999074644Z" level=info msg="StopPodSandbox for \"1df4d0643c65cfaf1ecef80a4e3d324aca891b5555eb3aaf1aa4b55937254989\"" Oct 31 00:48:27.116283 containerd[1459]: 2025-10-31 00:48:27.073 [WARNING][5372] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1df4d0643c65cfaf1ecef80a4e3d324aca891b5555eb3aaf1aa4b55937254989" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--78f5ccdb8f--sfj2g-eth0", GenerateName:"calico-kube-controllers-78f5ccdb8f-", Namespace:"calico-system", SelfLink:"", UID:"7d011812-0c54-49d2-a84d-25c0746a58a0", ResourceVersion:"1076", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 47, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78f5ccdb8f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d453cde58e5408e252811279b1f10e87eedfa5a560d1143e3a365d31bd2f6954", Pod:"calico-kube-controllers-78f5ccdb8f-sfj2g", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2e77ee5b79b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:48:27.116283 containerd[1459]: 2025-10-31 00:48:27.073 [INFO][5372] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1df4d0643c65cfaf1ecef80a4e3d324aca891b5555eb3aaf1aa4b55937254989" Oct 31 00:48:27.116283 containerd[1459]: 2025-10-31 00:48:27.073 [INFO][5372] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1df4d0643c65cfaf1ecef80a4e3d324aca891b5555eb3aaf1aa4b55937254989" iface="eth0" netns="" Oct 31 00:48:27.116283 containerd[1459]: 2025-10-31 00:48:27.073 [INFO][5372] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1df4d0643c65cfaf1ecef80a4e3d324aca891b5555eb3aaf1aa4b55937254989" Oct 31 00:48:27.116283 containerd[1459]: 2025-10-31 00:48:27.073 [INFO][5372] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1df4d0643c65cfaf1ecef80a4e3d324aca891b5555eb3aaf1aa4b55937254989" Oct 31 00:48:27.116283 containerd[1459]: 2025-10-31 00:48:27.098 [INFO][5380] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1df4d0643c65cfaf1ecef80a4e3d324aca891b5555eb3aaf1aa4b55937254989" HandleID="k8s-pod-network.1df4d0643c65cfaf1ecef80a4e3d324aca891b5555eb3aaf1aa4b55937254989" Workload="localhost-k8s-calico--kube--controllers--78f5ccdb8f--sfj2g-eth0" Oct 31 00:48:27.116283 containerd[1459]: 2025-10-31 00:48:27.098 [INFO][5380] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:48:27.116283 containerd[1459]: 2025-10-31 00:48:27.098 [INFO][5380] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:48:27.116283 containerd[1459]: 2025-10-31 00:48:27.107 [WARNING][5380] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1df4d0643c65cfaf1ecef80a4e3d324aca891b5555eb3aaf1aa4b55937254989" HandleID="k8s-pod-network.1df4d0643c65cfaf1ecef80a4e3d324aca891b5555eb3aaf1aa4b55937254989" Workload="localhost-k8s-calico--kube--controllers--78f5ccdb8f--sfj2g-eth0" Oct 31 00:48:27.116283 containerd[1459]: 2025-10-31 00:48:27.107 [INFO][5380] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1df4d0643c65cfaf1ecef80a4e3d324aca891b5555eb3aaf1aa4b55937254989" HandleID="k8s-pod-network.1df4d0643c65cfaf1ecef80a4e3d324aca891b5555eb3aaf1aa4b55937254989" Workload="localhost-k8s-calico--kube--controllers--78f5ccdb8f--sfj2g-eth0" Oct 31 00:48:27.116283 containerd[1459]: 2025-10-31 00:48:27.110 [INFO][5380] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:48:27.116283 containerd[1459]: 2025-10-31 00:48:27.113 [INFO][5372] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1df4d0643c65cfaf1ecef80a4e3d324aca891b5555eb3aaf1aa4b55937254989" Oct 31 00:48:27.116756 containerd[1459]: time="2025-10-31T00:48:27.116316253Z" level=info msg="TearDown network for sandbox \"1df4d0643c65cfaf1ecef80a4e3d324aca891b5555eb3aaf1aa4b55937254989\" successfully" Oct 31 00:48:27.116756 containerd[1459]: time="2025-10-31T00:48:27.116353973Z" level=info msg="StopPodSandbox for \"1df4d0643c65cfaf1ecef80a4e3d324aca891b5555eb3aaf1aa4b55937254989\" returns successfully" Oct 31 00:48:27.117072 containerd[1459]: time="2025-10-31T00:48:27.117030954Z" level=info msg="RemovePodSandbox for \"1df4d0643c65cfaf1ecef80a4e3d324aca891b5555eb3aaf1aa4b55937254989\"" Oct 31 00:48:27.117072 containerd[1459]: time="2025-10-31T00:48:27.117068645Z" level=info msg="Forcibly stopping sandbox \"1df4d0643c65cfaf1ecef80a4e3d324aca891b5555eb3aaf1aa4b55937254989\"" Oct 31 00:48:27.209487 containerd[1459]: 2025-10-31 00:48:27.169 [WARNING][5397] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1df4d0643c65cfaf1ecef80a4e3d324aca891b5555eb3aaf1aa4b55937254989" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--78f5ccdb8f--sfj2g-eth0", GenerateName:"calico-kube-controllers-78f5ccdb8f-", Namespace:"calico-system", SelfLink:"", UID:"7d011812-0c54-49d2-a84d-25c0746a58a0", ResourceVersion:"1076", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 47, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78f5ccdb8f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d453cde58e5408e252811279b1f10e87eedfa5a560d1143e3a365d31bd2f6954", Pod:"calico-kube-controllers-78f5ccdb8f-sfj2g", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2e77ee5b79b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:48:27.209487 containerd[1459]: 2025-10-31 00:48:27.169 [INFO][5397] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1df4d0643c65cfaf1ecef80a4e3d324aca891b5555eb3aaf1aa4b55937254989" Oct 31 00:48:27.209487 containerd[1459]: 2025-10-31 00:48:27.169 [INFO][5397] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1df4d0643c65cfaf1ecef80a4e3d324aca891b5555eb3aaf1aa4b55937254989" iface="eth0" netns="" Oct 31 00:48:27.209487 containerd[1459]: 2025-10-31 00:48:27.169 [INFO][5397] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1df4d0643c65cfaf1ecef80a4e3d324aca891b5555eb3aaf1aa4b55937254989" Oct 31 00:48:27.209487 containerd[1459]: 2025-10-31 00:48:27.169 [INFO][5397] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1df4d0643c65cfaf1ecef80a4e3d324aca891b5555eb3aaf1aa4b55937254989" Oct 31 00:48:27.209487 containerd[1459]: 2025-10-31 00:48:27.193 [INFO][5406] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1df4d0643c65cfaf1ecef80a4e3d324aca891b5555eb3aaf1aa4b55937254989" HandleID="k8s-pod-network.1df4d0643c65cfaf1ecef80a4e3d324aca891b5555eb3aaf1aa4b55937254989" Workload="localhost-k8s-calico--kube--controllers--78f5ccdb8f--sfj2g-eth0" Oct 31 00:48:27.209487 containerd[1459]: 2025-10-31 00:48:27.193 [INFO][5406] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:48:27.209487 containerd[1459]: 2025-10-31 00:48:27.193 [INFO][5406] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:48:27.209487 containerd[1459]: 2025-10-31 00:48:27.200 [WARNING][5406] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1df4d0643c65cfaf1ecef80a4e3d324aca891b5555eb3aaf1aa4b55937254989" HandleID="k8s-pod-network.1df4d0643c65cfaf1ecef80a4e3d324aca891b5555eb3aaf1aa4b55937254989" Workload="localhost-k8s-calico--kube--controllers--78f5ccdb8f--sfj2g-eth0" Oct 31 00:48:27.209487 containerd[1459]: 2025-10-31 00:48:27.200 [INFO][5406] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1df4d0643c65cfaf1ecef80a4e3d324aca891b5555eb3aaf1aa4b55937254989" HandleID="k8s-pod-network.1df4d0643c65cfaf1ecef80a4e3d324aca891b5555eb3aaf1aa4b55937254989" Workload="localhost-k8s-calico--kube--controllers--78f5ccdb8f--sfj2g-eth0" Oct 31 00:48:27.209487 containerd[1459]: 2025-10-31 00:48:27.202 [INFO][5406] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:48:27.209487 containerd[1459]: 2025-10-31 00:48:27.206 [INFO][5397] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1df4d0643c65cfaf1ecef80a4e3d324aca891b5555eb3aaf1aa4b55937254989" Oct 31 00:48:27.209487 containerd[1459]: time="2025-10-31T00:48:27.209431602Z" level=info msg="TearDown network for sandbox \"1df4d0643c65cfaf1ecef80a4e3d324aca891b5555eb3aaf1aa4b55937254989\" successfully" Oct 31 00:48:27.216080 containerd[1459]: time="2025-10-31T00:48:27.216011018Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1df4d0643c65cfaf1ecef80a4e3d324aca891b5555eb3aaf1aa4b55937254989\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 31 00:48:27.216253 containerd[1459]: time="2025-10-31T00:48:27.216101958Z" level=info msg="RemovePodSandbox \"1df4d0643c65cfaf1ecef80a4e3d324aca891b5555eb3aaf1aa4b55937254989\" returns successfully" Oct 31 00:48:27.218092 containerd[1459]: time="2025-10-31T00:48:27.217055127Z" level=info msg="StopPodSandbox for \"3e0f54bf01b79afe10cb3e31bbbe893d92c60bea7127b57519a0354ffba411ea\"" Oct 31 00:48:27.299091 containerd[1459]: 2025-10-31 00:48:27.258 [WARNING][5424] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3e0f54bf01b79afe10cb3e31bbbe893d92c60bea7127b57519a0354ffba411ea" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--cznnv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d615dcdd-9217-4b99-9985-812be6d75b53", ResourceVersion:"1166", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 47, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d99dd13f5f7a143aef6be4d26005e488e5f26eb3af7badccbc6cf0e195afd474", Pod:"csi-node-driver-cznnv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9d32c349d96", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:48:27.299091 containerd[1459]: 2025-10-31 00:48:27.259 [INFO][5424] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3e0f54bf01b79afe10cb3e31bbbe893d92c60bea7127b57519a0354ffba411ea" Oct 31 00:48:27.299091 containerd[1459]: 2025-10-31 00:48:27.259 [INFO][5424] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3e0f54bf01b79afe10cb3e31bbbe893d92c60bea7127b57519a0354ffba411ea" iface="eth0" netns="" Oct 31 00:48:27.299091 containerd[1459]: 2025-10-31 00:48:27.259 [INFO][5424] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3e0f54bf01b79afe10cb3e31bbbe893d92c60bea7127b57519a0354ffba411ea" Oct 31 00:48:27.299091 containerd[1459]: 2025-10-31 00:48:27.259 [INFO][5424] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3e0f54bf01b79afe10cb3e31bbbe893d92c60bea7127b57519a0354ffba411ea" Oct 31 00:48:27.299091 containerd[1459]: 2025-10-31 00:48:27.281 [INFO][5433] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3e0f54bf01b79afe10cb3e31bbbe893d92c60bea7127b57519a0354ffba411ea" HandleID="k8s-pod-network.3e0f54bf01b79afe10cb3e31bbbe893d92c60bea7127b57519a0354ffba411ea" Workload="localhost-k8s-csi--node--driver--cznnv-eth0" Oct 31 00:48:27.299091 containerd[1459]: 2025-10-31 00:48:27.281 [INFO][5433] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:48:27.299091 containerd[1459]: 2025-10-31 00:48:27.281 [INFO][5433] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:48:27.299091 containerd[1459]: 2025-10-31 00:48:27.288 [WARNING][5433] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3e0f54bf01b79afe10cb3e31bbbe893d92c60bea7127b57519a0354ffba411ea" HandleID="k8s-pod-network.3e0f54bf01b79afe10cb3e31bbbe893d92c60bea7127b57519a0354ffba411ea" Workload="localhost-k8s-csi--node--driver--cznnv-eth0" Oct 31 00:48:27.299091 containerd[1459]: 2025-10-31 00:48:27.288 [INFO][5433] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3e0f54bf01b79afe10cb3e31bbbe893d92c60bea7127b57519a0354ffba411ea" HandleID="k8s-pod-network.3e0f54bf01b79afe10cb3e31bbbe893d92c60bea7127b57519a0354ffba411ea" Workload="localhost-k8s-csi--node--driver--cznnv-eth0" Oct 31 00:48:27.299091 containerd[1459]: 2025-10-31 00:48:27.291 [INFO][5433] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:48:27.299091 containerd[1459]: 2025-10-31 00:48:27.295 [INFO][5424] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3e0f54bf01b79afe10cb3e31bbbe893d92c60bea7127b57519a0354ffba411ea" Oct 31 00:48:27.299091 containerd[1459]: time="2025-10-31T00:48:27.299081499Z" level=info msg="TearDown network for sandbox \"3e0f54bf01b79afe10cb3e31bbbe893d92c60bea7127b57519a0354ffba411ea\" successfully" Oct 31 00:48:27.299866 containerd[1459]: time="2025-10-31T00:48:27.299114010Z" level=info msg="StopPodSandbox for \"3e0f54bf01b79afe10cb3e31bbbe893d92c60bea7127b57519a0354ffba411ea\" returns successfully" Oct 31 00:48:27.299866 containerd[1459]: time="2025-10-31T00:48:27.299734665Z" level=info msg="RemovePodSandbox for \"3e0f54bf01b79afe10cb3e31bbbe893d92c60bea7127b57519a0354ffba411ea\"" Oct 31 00:48:27.299866 containerd[1459]: time="2025-10-31T00:48:27.299772937Z" level=info msg="Forcibly stopping sandbox \"3e0f54bf01b79afe10cb3e31bbbe893d92c60bea7127b57519a0354ffba411ea\"" Oct 31 00:48:27.377537 containerd[1459]: 2025-10-31 00:48:27.339 [WARNING][5451] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3e0f54bf01b79afe10cb3e31bbbe893d92c60bea7127b57519a0354ffba411ea" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--cznnv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d615dcdd-9217-4b99-9985-812be6d75b53", ResourceVersion:"1166", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 47, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d99dd13f5f7a143aef6be4d26005e488e5f26eb3af7badccbc6cf0e195afd474", Pod:"csi-node-driver-cznnv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9d32c349d96", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:48:27.377537 containerd[1459]: 2025-10-31 00:48:27.339 [INFO][5451] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3e0f54bf01b79afe10cb3e31bbbe893d92c60bea7127b57519a0354ffba411ea" Oct 31 00:48:27.377537 containerd[1459]: 2025-10-31 00:48:27.339 [INFO][5451] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3e0f54bf01b79afe10cb3e31bbbe893d92c60bea7127b57519a0354ffba411ea" iface="eth0" netns="" Oct 31 00:48:27.377537 containerd[1459]: 2025-10-31 00:48:27.339 [INFO][5451] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3e0f54bf01b79afe10cb3e31bbbe893d92c60bea7127b57519a0354ffba411ea" Oct 31 00:48:27.377537 containerd[1459]: 2025-10-31 00:48:27.339 [INFO][5451] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3e0f54bf01b79afe10cb3e31bbbe893d92c60bea7127b57519a0354ffba411ea" Oct 31 00:48:27.377537 containerd[1459]: 2025-10-31 00:48:27.361 [INFO][5460] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3e0f54bf01b79afe10cb3e31bbbe893d92c60bea7127b57519a0354ffba411ea" HandleID="k8s-pod-network.3e0f54bf01b79afe10cb3e31bbbe893d92c60bea7127b57519a0354ffba411ea" Workload="localhost-k8s-csi--node--driver--cznnv-eth0" Oct 31 00:48:27.377537 containerd[1459]: 2025-10-31 00:48:27.361 [INFO][5460] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:48:27.377537 containerd[1459]: 2025-10-31 00:48:27.361 [INFO][5460] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:48:27.377537 containerd[1459]: 2025-10-31 00:48:27.370 [WARNING][5460] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3e0f54bf01b79afe10cb3e31bbbe893d92c60bea7127b57519a0354ffba411ea" HandleID="k8s-pod-network.3e0f54bf01b79afe10cb3e31bbbe893d92c60bea7127b57519a0354ffba411ea" Workload="localhost-k8s-csi--node--driver--cznnv-eth0" Oct 31 00:48:27.377537 containerd[1459]: 2025-10-31 00:48:27.370 [INFO][5460] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3e0f54bf01b79afe10cb3e31bbbe893d92c60bea7127b57519a0354ffba411ea" HandleID="k8s-pod-network.3e0f54bf01b79afe10cb3e31bbbe893d92c60bea7127b57519a0354ffba411ea" Workload="localhost-k8s-csi--node--driver--cznnv-eth0" Oct 31 00:48:27.377537 containerd[1459]: 2025-10-31 00:48:27.371 [INFO][5460] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:48:27.377537 containerd[1459]: 2025-10-31 00:48:27.374 [INFO][5451] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3e0f54bf01b79afe10cb3e31bbbe893d92c60bea7127b57519a0354ffba411ea" Oct 31 00:48:27.378036 containerd[1459]: time="2025-10-31T00:48:27.377587566Z" level=info msg="TearDown network for sandbox \"3e0f54bf01b79afe10cb3e31bbbe893d92c60bea7127b57519a0354ffba411ea\" successfully" Oct 31 00:48:27.382142 containerd[1459]: time="2025-10-31T00:48:27.382102689Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3e0f54bf01b79afe10cb3e31bbbe893d92c60bea7127b57519a0354ffba411ea\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 31 00:48:27.382221 containerd[1459]: time="2025-10-31T00:48:27.382158543Z" level=info msg="RemovePodSandbox \"3e0f54bf01b79afe10cb3e31bbbe893d92c60bea7127b57519a0354ffba411ea\" returns successfully" Oct 31 00:48:27.382744 containerd[1459]: time="2025-10-31T00:48:27.382718103Z" level=info msg="StopPodSandbox for \"3e2538531b1147fcaeb5e51daf1260061d97c1c054dec48b928901c1215171f0\"" Oct 31 00:48:27.463853 containerd[1459]: 2025-10-31 00:48:27.417 [WARNING][5477] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3e2538531b1147fcaeb5e51daf1260061d97c1c054dec48b928901c1215171f0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cf7fddbf6--qfkg6-eth0", GenerateName:"calico-apiserver-7cf7fddbf6-", Namespace:"calico-apiserver", SelfLink:"", UID:"4a3f6669-a62a-42ec-9a82-372bbb7049fb", ResourceVersion:"1168", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 47, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cf7fddbf6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ca685625f6a0c50ef96f5074a3318f4920c745ade6d20b950fe49e2636499adc", Pod:"calico-apiserver-7cf7fddbf6-qfkg6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali16ab57abb90", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:48:27.463853 containerd[1459]: 2025-10-31 00:48:27.417 [INFO][5477] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3e2538531b1147fcaeb5e51daf1260061d97c1c054dec48b928901c1215171f0" Oct 31 00:48:27.463853 containerd[1459]: 2025-10-31 00:48:27.417 [INFO][5477] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3e2538531b1147fcaeb5e51daf1260061d97c1c054dec48b928901c1215171f0" iface="eth0" netns="" Oct 31 00:48:27.463853 containerd[1459]: 2025-10-31 00:48:27.417 [INFO][5477] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3e2538531b1147fcaeb5e51daf1260061d97c1c054dec48b928901c1215171f0" Oct 31 00:48:27.463853 containerd[1459]: 2025-10-31 00:48:27.417 [INFO][5477] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3e2538531b1147fcaeb5e51daf1260061d97c1c054dec48b928901c1215171f0" Oct 31 00:48:27.463853 containerd[1459]: 2025-10-31 00:48:27.440 [INFO][5486] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3e2538531b1147fcaeb5e51daf1260061d97c1c054dec48b928901c1215171f0" HandleID="k8s-pod-network.3e2538531b1147fcaeb5e51daf1260061d97c1c054dec48b928901c1215171f0" Workload="localhost-k8s-calico--apiserver--7cf7fddbf6--qfkg6-eth0" Oct 31 00:48:27.463853 containerd[1459]: 2025-10-31 00:48:27.441 [INFO][5486] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:48:27.463853 containerd[1459]: 2025-10-31 00:48:27.441 [INFO][5486] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:48:27.463853 containerd[1459]: 2025-10-31 00:48:27.455 [WARNING][5486] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3e2538531b1147fcaeb5e51daf1260061d97c1c054dec48b928901c1215171f0" HandleID="k8s-pod-network.3e2538531b1147fcaeb5e51daf1260061d97c1c054dec48b928901c1215171f0" Workload="localhost-k8s-calico--apiserver--7cf7fddbf6--qfkg6-eth0" Oct 31 00:48:27.463853 containerd[1459]: 2025-10-31 00:48:27.455 [INFO][5486] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3e2538531b1147fcaeb5e51daf1260061d97c1c054dec48b928901c1215171f0" HandleID="k8s-pod-network.3e2538531b1147fcaeb5e51daf1260061d97c1c054dec48b928901c1215171f0" Workload="localhost-k8s-calico--apiserver--7cf7fddbf6--qfkg6-eth0" Oct 31 00:48:27.463853 containerd[1459]: 2025-10-31 00:48:27.457 [INFO][5486] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:48:27.463853 containerd[1459]: 2025-10-31 00:48:27.460 [INFO][5477] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3e2538531b1147fcaeb5e51daf1260061d97c1c054dec48b928901c1215171f0" Oct 31 00:48:27.463853 containerd[1459]: time="2025-10-31T00:48:27.463816795Z" level=info msg="TearDown network for sandbox \"3e2538531b1147fcaeb5e51daf1260061d97c1c054dec48b928901c1215171f0\" successfully" Oct 31 00:48:27.463853 containerd[1459]: time="2025-10-31T00:48:27.463849947Z" level=info msg="StopPodSandbox for \"3e2538531b1147fcaeb5e51daf1260061d97c1c054dec48b928901c1215171f0\" returns successfully" Oct 31 00:48:27.464476 containerd[1459]: time="2025-10-31T00:48:27.464452167Z" level=info msg="RemovePodSandbox for \"3e2538531b1147fcaeb5e51daf1260061d97c1c054dec48b928901c1215171f0\"" Oct 31 00:48:27.464515 containerd[1459]: time="2025-10-31T00:48:27.464483706Z" level=info msg="Forcibly stopping sandbox \"3e2538531b1147fcaeb5e51daf1260061d97c1c054dec48b928901c1215171f0\"" Oct 31 00:48:27.558272 containerd[1459]: 2025-10-31 00:48:27.498 [WARNING][5503] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3e2538531b1147fcaeb5e51daf1260061d97c1c054dec48b928901c1215171f0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cf7fddbf6--qfkg6-eth0", GenerateName:"calico-apiserver-7cf7fddbf6-", Namespace:"calico-apiserver", SelfLink:"", UID:"4a3f6669-a62a-42ec-9a82-372bbb7049fb", ResourceVersion:"1168", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 47, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cf7fddbf6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ca685625f6a0c50ef96f5074a3318f4920c745ade6d20b950fe49e2636499adc", Pod:"calico-apiserver-7cf7fddbf6-qfkg6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali16ab57abb90", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:48:27.558272 containerd[1459]: 2025-10-31 00:48:27.499 [INFO][5503] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3e2538531b1147fcaeb5e51daf1260061d97c1c054dec48b928901c1215171f0" Oct 31 00:48:27.558272 containerd[1459]: 2025-10-31 00:48:27.499 [INFO][5503] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3e2538531b1147fcaeb5e51daf1260061d97c1c054dec48b928901c1215171f0" iface="eth0" netns="" Oct 31 00:48:27.558272 containerd[1459]: 2025-10-31 00:48:27.499 [INFO][5503] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3e2538531b1147fcaeb5e51daf1260061d97c1c054dec48b928901c1215171f0" Oct 31 00:48:27.558272 containerd[1459]: 2025-10-31 00:48:27.499 [INFO][5503] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3e2538531b1147fcaeb5e51daf1260061d97c1c054dec48b928901c1215171f0" Oct 31 00:48:27.558272 containerd[1459]: 2025-10-31 00:48:27.520 [INFO][5512] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3e2538531b1147fcaeb5e51daf1260061d97c1c054dec48b928901c1215171f0" HandleID="k8s-pod-network.3e2538531b1147fcaeb5e51daf1260061d97c1c054dec48b928901c1215171f0" Workload="localhost-k8s-calico--apiserver--7cf7fddbf6--qfkg6-eth0" Oct 31 00:48:27.558272 containerd[1459]: 2025-10-31 00:48:27.528 [INFO][5512] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:48:27.558272 containerd[1459]: 2025-10-31 00:48:27.528 [INFO][5512] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:48:27.558272 containerd[1459]: 2025-10-31 00:48:27.550 [WARNING][5512] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3e2538531b1147fcaeb5e51daf1260061d97c1c054dec48b928901c1215171f0" HandleID="k8s-pod-network.3e2538531b1147fcaeb5e51daf1260061d97c1c054dec48b928901c1215171f0" Workload="localhost-k8s-calico--apiserver--7cf7fddbf6--qfkg6-eth0" Oct 31 00:48:27.558272 containerd[1459]: 2025-10-31 00:48:27.551 [INFO][5512] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3e2538531b1147fcaeb5e51daf1260061d97c1c054dec48b928901c1215171f0" HandleID="k8s-pod-network.3e2538531b1147fcaeb5e51daf1260061d97c1c054dec48b928901c1215171f0" Workload="localhost-k8s-calico--apiserver--7cf7fddbf6--qfkg6-eth0" Oct 31 00:48:27.558272 containerd[1459]: 2025-10-31 00:48:27.552 [INFO][5512] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:48:27.558272 containerd[1459]: 2025-10-31 00:48:27.555 [INFO][5503] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3e2538531b1147fcaeb5e51daf1260061d97c1c054dec48b928901c1215171f0" Oct 31 00:48:27.558692 containerd[1459]: time="2025-10-31T00:48:27.558332932Z" level=info msg="TearDown network for sandbox \"3e2538531b1147fcaeb5e51daf1260061d97c1c054dec48b928901c1215171f0\" successfully" Oct 31 00:48:27.625334 containerd[1459]: time="2025-10-31T00:48:27.625269040Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3e2538531b1147fcaeb5e51daf1260061d97c1c054dec48b928901c1215171f0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 31 00:48:27.625334 containerd[1459]: time="2025-10-31T00:48:27.625353949Z" level=info msg="RemovePodSandbox \"3e2538531b1147fcaeb5e51daf1260061d97c1c054dec48b928901c1215171f0\" returns successfully" Oct 31 00:48:30.572970 systemd[1]: Started sshd@11-10.0.0.137:22-10.0.0.1:49224.service - OpenSSH per-connection server daemon (10.0.0.1:49224). Oct 31 00:48:30.612871 sshd[5529]: Accepted publickey for core from 10.0.0.1 port 49224 ssh2: RSA SHA256:cVXqL/AcZ9wouFvGoeGKDlBlR+czTkkJFN8I4b76Y5g Oct 31 00:48:30.614915 sshd[5529]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:48:30.619171 systemd-logind[1449]: New session 12 of user core. Oct 31 00:48:30.627540 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 31 00:48:30.786162 sshd[5529]: pam_unix(sshd:session): session closed for user core Oct 31 00:48:30.797697 systemd[1]: sshd@11-10.0.0.137:22-10.0.0.1:49224.service: Deactivated successfully. Oct 31 00:48:30.799725 systemd[1]: session-12.scope: Deactivated successfully. Oct 31 00:48:30.801451 systemd-logind[1449]: Session 12 logged out. Waiting for processes to exit. Oct 31 00:48:30.807796 systemd[1]: Started sshd@12-10.0.0.137:22-10.0.0.1:49240.service - OpenSSH per-connection server daemon (10.0.0.1:49240). Oct 31 00:48:30.809258 systemd-logind[1449]: Removed session 12. Oct 31 00:48:30.842582 sshd[5545]: Accepted publickey for core from 10.0.0.1 port 49240 ssh2: RSA SHA256:cVXqL/AcZ9wouFvGoeGKDlBlR+czTkkJFN8I4b76Y5g Oct 31 00:48:30.844478 sshd[5545]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:48:30.852562 systemd-logind[1449]: New session 13 of user core. Oct 31 00:48:30.864696 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 31 00:48:31.148468 sshd[5545]: pam_unix(sshd:session): session closed for user core Oct 31 00:48:31.162357 systemd[1]: sshd@12-10.0.0.137:22-10.0.0.1:49240.service: Deactivated successfully. 
Oct 31 00:48:31.164533 systemd[1]: session-13.scope: Deactivated successfully. Oct 31 00:48:31.166392 systemd-logind[1449]: Session 13 logged out. Waiting for processes to exit. Oct 31 00:48:31.171756 systemd[1]: Started sshd@13-10.0.0.137:22-10.0.0.1:49250.service - OpenSSH per-connection server daemon (10.0.0.1:49250). Oct 31 00:48:31.174585 systemd-logind[1449]: Removed session 13. Oct 31 00:48:31.210506 sshd[5557]: Accepted publickey for core from 10.0.0.1 port 49250 ssh2: RSA SHA256:cVXqL/AcZ9wouFvGoeGKDlBlR+czTkkJFN8I4b76Y5g Oct 31 00:48:31.212727 sshd[5557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:48:31.217531 systemd-logind[1449]: New session 14 of user core. Oct 31 00:48:31.226260 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 31 00:48:31.590845 sshd[5557]: pam_unix(sshd:session): session closed for user core Oct 31 00:48:31.594865 systemd[1]: sshd@13-10.0.0.137:22-10.0.0.1:49250.service: Deactivated successfully. Oct 31 00:48:31.597590 systemd[1]: session-14.scope: Deactivated successfully. Oct 31 00:48:31.598368 systemd-logind[1449]: Session 14 logged out. Waiting for processes to exit. Oct 31 00:48:31.599388 systemd-logind[1449]: Removed session 14. Oct 31 00:48:32.558921 containerd[1459]: time="2025-10-31T00:48:32.558734023Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 00:48:33.133547 containerd[1459]: time="2025-10-31T00:48:33.133469881Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:48:33.173321 containerd[1459]: time="2025-10-31T00:48:33.173160219Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 31 00:48:33.173535 containerd[1459]: time="2025-10-31T00:48:33.173320769Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 00:48:33.174620 kubelet[2502]: E1031 00:48:33.173726 2502 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 00:48:33.174620 kubelet[2502]: E1031 00:48:33.173795 2502 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 00:48:33.174620 kubelet[2502]: E1031 00:48:33.173963 2502 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rr8xm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7cf7fddbf6-nr7tc_calico-apiserver(f0ebaf56-bc9f-4f20-80ce-c5c77074a573): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 00:48:33.175796 kubelet[2502]: E1031 00:48:33.175730 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cf7fddbf6-nr7tc" podUID="f0ebaf56-bc9f-4f20-80ce-c5c77074a573" Oct 31 00:48:34.555771 containerd[1459]: time="2025-10-31T00:48:34.555712922Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 31 00:48:34.996916 containerd[1459]: time="2025-10-31T00:48:34.996847972Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:48:35.040646 containerd[1459]: time="2025-10-31T00:48:35.040564877Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 31 00:48:35.040646 containerd[1459]: time="2025-10-31T00:48:35.040596958Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 31 00:48:35.040932 kubelet[2502]: E1031 00:48:35.040879 2502 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 00:48:35.041342 kubelet[2502]: E1031 00:48:35.040943 2502 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 00:48:35.041342 kubelet[2502]: E1031 00:48:35.041116 2502 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n6lkb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-47d96_calico-system(a6a2171a-de8b-4154-86b8-cb6aefca8e5b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 31 00:48:35.042312 kubelet[2502]: E1031 00:48:35.042247 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-47d96" podUID="a6a2171a-de8b-4154-86b8-cb6aefca8e5b" Oct 31 00:48:35.555504 containerd[1459]: time="2025-10-31T00:48:35.555461834Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 31 00:48:35.959108 containerd[1459]: time="2025-10-31T00:48:35.959028758Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:48:36.099894 containerd[1459]: time="2025-10-31T00:48:36.099805500Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 31 00:48:36.100167 containerd[1459]: time="2025-10-31T00:48:36.099921363Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 31 00:48:36.100194 kubelet[2502]: E1031 00:48:36.100071 2502 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 00:48:36.100194 kubelet[2502]: E1031 00:48:36.100136 2502 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 00:48:36.100588 kubelet[2502]: E1031 00:48:36.100294 2502 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:12242dfb77014928886896da969d1ea0,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jsjqf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6d455ff89f-sljxb_calico-system(bc7de0b5-fad9-4849-950f-64958f0873ad): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 31 00:48:36.102340 containerd[1459]: time="2025-10-31T00:48:36.102304432Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 31 00:48:36.511868 containerd[1459]: time="2025-10-31T00:48:36.511803628Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:48:36.604582 systemd[1]: Started sshd@14-10.0.0.137:22-10.0.0.1:49268.service - OpenSSH per-connection server daemon (10.0.0.1:49268). Oct 31 00:48:36.641084 sshd[5575]: Accepted publickey for core from 10.0.0.1 port 49268 ssh2: RSA SHA256:cVXqL/AcZ9wouFvGoeGKDlBlR+czTkkJFN8I4b76Y5g Oct 31 00:48:36.643100 sshd[5575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:48:36.647442 systemd-logind[1449]: New session 15 of user core. Oct 31 00:48:36.655564 systemd[1]: Started session-15.scope - Session 15 of User core. 
Oct 31 00:48:36.717032 containerd[1459]: time="2025-10-31T00:48:36.716964110Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 31 00:48:36.717190 containerd[1459]: time="2025-10-31T00:48:36.716990861Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 31 00:48:36.717311 kubelet[2502]: E1031 00:48:36.717259 2502 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 00:48:36.717387 kubelet[2502]: E1031 00:48:36.717327 2502 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 00:48:36.717613 kubelet[2502]: E1031 00:48:36.717567 2502 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jsjqf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6d455ff89f-sljxb_calico-system(bc7de0b5-fad9-4849-950f-64958f0873ad): ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 31 00:48:36.717947 containerd[1459]: time="2025-10-31T00:48:36.717910435Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 31 00:48:36.718758 kubelet[2502]: E1031 00:48:36.718731 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6d455ff89f-sljxb" podUID="bc7de0b5-fad9-4849-950f-64958f0873ad" Oct 31 00:48:36.830052 sshd[5575]: pam_unix(sshd:session): session closed for user core Oct 31 00:48:36.833801 systemd[1]: sshd@14-10.0.0.137:22-10.0.0.1:49268.service: Deactivated successfully. Oct 31 00:48:36.835978 systemd[1]: session-15.scope: Deactivated successfully. Oct 31 00:48:36.836840 systemd-logind[1449]: Session 15 logged out. Waiting for processes to exit. Oct 31 00:48:36.837999 systemd-logind[1449]: Removed session 15. Oct 31 00:48:37.148020 containerd[1459]: time="2025-10-31T00:48:37.147955852Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:48:37.280951 containerd[1459]: time="2025-10-31T00:48:37.280881575Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 31 00:48:37.281179 containerd[1459]: time="2025-10-31T00:48:37.280988290Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 31 00:48:37.281227 kubelet[2502]: E1031 00:48:37.281156 2502 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 00:48:37.281227 kubelet[2502]: E1031 00:48:37.281207 2502 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 00:48:37.281886 kubelet[2502]: E1031 00:48:37.281345 2502 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ds9pg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-78f5ccdb8f-sfj2g_calico-system(7d011812-0c54-49d2-a84d-25c0746a58a0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 31 00:48:37.282643 kubelet[2502]: E1031 00:48:37.282534 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78f5ccdb8f-sfj2g" podUID="7d011812-0c54-49d2-a84d-25c0746a58a0" Oct 31 00:48:37.555197 containerd[1459]: time="2025-10-31T00:48:37.554938512Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 31 00:48:37.941167 containerd[1459]: time="2025-10-31T00:48:37.941003337Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:48:37.980065 containerd[1459]: time="2025-10-31T00:48:37.979955696Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 31 00:48:37.980220 containerd[1459]: time="2025-10-31T00:48:37.980083854Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 31 00:48:37.980347 kubelet[2502]: E1031 00:48:37.980268 2502 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 00:48:37.980477 kubelet[2502]: E1031 00:48:37.980355 2502 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 00:48:37.980619 kubelet[2502]: E1031 00:48:37.980558 2502 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jzh5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-cznnv_calico-system(d615dcdd-9217-4b99-9985-812be6d75b53): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 31 00:48:37.982853 containerd[1459]: time="2025-10-31T00:48:37.982806684Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 31 00:48:38.300721 containerd[1459]: time="2025-10-31T00:48:38.300542034Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:48:38.307090 containerd[1459]: time="2025-10-31T00:48:38.307020367Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 31 00:48:38.307173 containerd[1459]: time="2025-10-31T00:48:38.307096213Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 31 00:48:38.307312 kubelet[2502]: E1031 00:48:38.307249 2502 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 00:48:38.307713 kubelet[2502]: E1031 00:48:38.307314 2502 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 00:48:38.307713 kubelet[2502]: E1031 00:48:38.307525 2502 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jzh5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-cznnv_calico-system(d615dcdd-9217-4b99-9985-812be6d75b53): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 31 00:48:38.308783 kubelet[2502]: E1031 00:48:38.308740 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-cznnv" podUID="d615dcdd-9217-4b99-9985-812be6d75b53" Oct 31 00:48:39.555655 containerd[1459]: time="2025-10-31T00:48:39.555603542Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 00:48:40.125224 containerd[1459]: time="2025-10-31T00:48:40.125150087Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:48:40.229390 containerd[1459]: time="2025-10-31T00:48:40.229287335Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 31 00:48:40.229390 containerd[1459]: 
time="2025-10-31T00:48:40.229338593Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 00:48:40.229732 kubelet[2502]: E1031 00:48:40.229684 2502 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 00:48:40.230067 kubelet[2502]: E1031 00:48:40.229745 2502 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 00:48:40.230067 kubelet[2502]: E1031 00:48:40.229897 2502 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4ql48,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7cf7fddbf6-qfkg6_calico-apiserver(4a3f6669-a62a-42ec-9a82-372bbb7049fb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 00:48:40.231130 kubelet[2502]: E1031 00:48:40.231071 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cf7fddbf6-qfkg6" podUID="4a3f6669-a62a-42ec-9a82-372bbb7049fb" Oct 31 00:48:41.842640 systemd[1]: Started sshd@15-10.0.0.137:22-10.0.0.1:40332.service - OpenSSH per-connection server daemon (10.0.0.1:40332). Oct 31 00:48:41.880226 sshd[5595]: Accepted publickey for core from 10.0.0.1 port 40332 ssh2: RSA SHA256:cVXqL/AcZ9wouFvGoeGKDlBlR+czTkkJFN8I4b76Y5g Oct 31 00:48:41.882050 sshd[5595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:48:41.886737 systemd-logind[1449]: New session 16 of user core. Oct 31 00:48:41.897627 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 31 00:48:42.048839 sshd[5595]: pam_unix(sshd:session): session closed for user core Oct 31 00:48:42.052641 systemd[1]: sshd@15-10.0.0.137:22-10.0.0.1:40332.service: Deactivated successfully. Oct 31 00:48:42.054949 systemd[1]: session-16.scope: Deactivated successfully. Oct 31 00:48:42.055740 systemd-logind[1449]: Session 16 logged out. Waiting for processes to exit. Oct 31 00:48:42.056701 systemd-logind[1449]: Removed session 16. Oct 31 00:48:42.553887 kubelet[2502]: E1031 00:48:42.553852 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:48:43.554614 kubelet[2502]: E1031 00:48:43.554571 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:48:47.064111 systemd[1]: Started sshd@16-10.0.0.137:22-10.0.0.1:40352.service - OpenSSH per-connection server daemon (10.0.0.1:40352). Oct 31 00:48:47.113096 sshd[5639]: Accepted publickey for core from 10.0.0.1 port 40352 ssh2: RSA SHA256:cVXqL/AcZ9wouFvGoeGKDlBlR+czTkkJFN8I4b76Y5g Oct 31 00:48:47.115005 sshd[5639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:48:47.119467 systemd-logind[1449]: New session 17 of user core. Oct 31 00:48:47.125517 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 31 00:48:47.233586 sshd[5639]: pam_unix(sshd:session): session closed for user core Oct 31 00:48:47.237513 systemd[1]: sshd@16-10.0.0.137:22-10.0.0.1:40352.service: Deactivated successfully. Oct 31 00:48:47.239612 systemd[1]: session-17.scope: Deactivated successfully. Oct 31 00:48:47.240212 systemd-logind[1449]: Session 17 logged out. Waiting for processes to exit. Oct 31 00:48:47.241087 systemd-logind[1449]: Removed session 17. 
Oct 31 00:48:47.555433 kubelet[2502]: E1031 00:48:47.555347 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6d455ff89f-sljxb" podUID="bc7de0b5-fad9-4849-950f-64958f0873ad" Oct 31 00:48:48.555509 kubelet[2502]: E1031 00:48:48.555067 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cf7fddbf6-nr7tc" podUID="f0ebaf56-bc9f-4f20-80ce-c5c77074a573" Oct 31 00:48:48.555509 kubelet[2502]: E1031 00:48:48.555166 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-47d96" podUID="a6a2171a-de8b-4154-86b8-cb6aefca8e5b" Oct 31 00:48:50.557118 kubelet[2502]: E1031 00:48:50.557054 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-cznnv" podUID="d615dcdd-9217-4b99-9985-812be6d75b53" Oct 31 00:48:51.555469 kubelet[2502]: E1031 00:48:51.555317 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" 
with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78f5ccdb8f-sfj2g" podUID="7d011812-0c54-49d2-a84d-25c0746a58a0" Oct 31 00:48:52.247378 systemd[1]: Started sshd@17-10.0.0.137:22-10.0.0.1:51078.service - OpenSSH per-connection server daemon (10.0.0.1:51078). Oct 31 00:48:52.298038 sshd[5656]: Accepted publickey for core from 10.0.0.1 port 51078 ssh2: RSA SHA256:cVXqL/AcZ9wouFvGoeGKDlBlR+czTkkJFN8I4b76Y5g Oct 31 00:48:52.300493 sshd[5656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:48:52.305831 systemd-logind[1449]: New session 18 of user core. Oct 31 00:48:52.319885 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 31 00:48:52.456046 sshd[5656]: pam_unix(sshd:session): session closed for user core Oct 31 00:48:52.462274 systemd[1]: sshd@17-10.0.0.137:22-10.0.0.1:51078.service: Deactivated successfully. Oct 31 00:48:52.465221 systemd[1]: session-18.scope: Deactivated successfully. Oct 31 00:48:52.466237 systemd-logind[1449]: Session 18 logged out. Waiting for processes to exit. Oct 31 00:48:52.467531 systemd-logind[1449]: Removed session 18. Oct 31 00:48:53.553905 kubelet[2502]: E1031 00:48:53.553845 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:48:53.555068 kubelet[2502]: E1031 00:48:53.554958 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cf7fddbf6-qfkg6" podUID="4a3f6669-a62a-42ec-9a82-372bbb7049fb" Oct 31 00:48:55.554149 kubelet[2502]: E1031 00:48:55.554080 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:48:57.469987 systemd[1]: Started sshd@18-10.0.0.137:22-10.0.0.1:51120.service - OpenSSH per-connection server daemon (10.0.0.1:51120). Oct 31 00:48:57.511320 sshd[5672]: Accepted publickey for core from 10.0.0.1 port 51120 ssh2: RSA SHA256:cVXqL/AcZ9wouFvGoeGKDlBlR+czTkkJFN8I4b76Y5g Oct 31 00:48:57.512943 sshd[5672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:48:57.517329 systemd-logind[1449]: New session 19 of user core. Oct 31 00:48:57.524530 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 31 00:48:57.653995 sshd[5672]: pam_unix(sshd:session): session closed for user core Oct 31 00:48:57.671883 systemd[1]: sshd@18-10.0.0.137:22-10.0.0.1:51120.service: Deactivated successfully. Oct 31 00:48:57.674585 systemd[1]: session-19.scope: Deactivated successfully. Oct 31 00:48:57.676706 systemd-logind[1449]: Session 19 logged out. Waiting for processes to exit. 
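The recurring "Nameserver limits exceeded" entries come from kubelet's DNS configuration checks: glibc's resolver honors at most three "nameserver" lines (MAXNS), so kubelet trims the list and logs the servers it actually applied (here 1.1.1.1, 1.0.0.1, 8.8.8.8). A minimal sketch of that kind of check, not kubelet's actual code:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // glibc resolver limit (MAXNS in resolv.h)

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		// Mirrors the warning above: extra entries are silently unused.
		fmt.Printf("nameserver limits exceeded; applying only: %s\n",
			strings.Join(servers[:maxNameservers], " "))
	}
}
```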
Oct 31 00:48:57.685770 systemd[1]: Started sshd@19-10.0.0.137:22-10.0.0.1:51130.service - OpenSSH per-connection server daemon (10.0.0.1:51130). Oct 31 00:48:57.687097 systemd-logind[1449]: Removed session 19. Oct 31 00:48:57.722125 sshd[5686]: Accepted publickey for core from 10.0.0.1 port 51130 ssh2: RSA SHA256:cVXqL/AcZ9wouFvGoeGKDlBlR+czTkkJFN8I4b76Y5g Oct 31 00:48:57.724248 sshd[5686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:48:57.728507 systemd-logind[1449]: New session 20 of user core. Oct 31 00:48:57.737528 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 31 00:48:58.082373 sshd[5686]: pam_unix(sshd:session): session closed for user core Oct 31 00:48:58.091646 systemd[1]: sshd@19-10.0.0.137:22-10.0.0.1:51130.service: Deactivated successfully. Oct 31 00:48:58.093757 systemd[1]: session-20.scope: Deactivated successfully. Oct 31 00:48:58.095340 systemd-logind[1449]: Session 20 logged out. Waiting for processes to exit. Oct 31 00:48:58.101942 systemd[1]: Started sshd@20-10.0.0.137:22-10.0.0.1:51142.service - OpenSSH per-connection server daemon (10.0.0.1:51142). Oct 31 00:48:58.103308 systemd-logind[1449]: Removed session 20. Oct 31 00:48:58.140124 sshd[5698]: Accepted publickey for core from 10.0.0.1 port 51142 ssh2: RSA SHA256:cVXqL/AcZ9wouFvGoeGKDlBlR+czTkkJFN8I4b76Y5g Oct 31 00:48:58.141888 sshd[5698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:48:58.146524 systemd-logind[1449]: New session 21 of user core. Oct 31 00:48:58.155589 systemd[1]: Started session-21.scope - Session 21 of User core. Oct 31 00:48:58.555575 containerd[1459]: time="2025-10-31T00:48:58.555535636Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 31 00:48:58.693449 sshd[5698]: pam_unix(sshd:session): session closed for user core Oct 31 00:48:58.704220 systemd[1]: sshd@20-10.0.0.137:22-10.0.0.1:51142.service: Deactivated successfully. Oct 31 00:48:58.710251 systemd[1]: session-21.scope: Deactivated successfully. Oct 31 00:48:58.712010 systemd-logind[1449]: Session 21 logged out. Waiting for processes to exit. Oct 31 00:48:58.724823 systemd[1]: Started sshd@21-10.0.0.137:22-10.0.0.1:51158.service - OpenSSH per-connection server daemon (10.0.0.1:51158). Oct 31 00:48:58.726367 systemd-logind[1449]: Removed session 21. Oct 31 00:48:58.761713 sshd[5725]: Accepted publickey for core from 10.0.0.1 port 51158 ssh2: RSA SHA256:cVXqL/AcZ9wouFvGoeGKDlBlR+czTkkJFN8I4b76Y5g Oct 31 00:48:58.763302 sshd[5725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:48:58.767359 systemd-logind[1449]: New session 22 of user core. Oct 31 00:48:58.775535 systemd[1]: Started session-22.scope - Session 22 of User core. 
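The spacing between the renewed PullImage attempts and the "Back-off pulling image" errors reflects kubelet's per-image pull backoff, which by default doubles from 10 seconds up to a 5-minute cap. A rough sketch of that schedule follows; the 10s/5m defaults are an assumption here, not values read from this node's configuration, and this is not kubelet's actual implementation.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Assumed defaults for kubelet's image-pull backoff.
	initial, maxDelay := 10*time.Second, 5*time.Minute

	delay := initial
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("attempt %d: next retry in %v\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay // backoff saturates at the cap
		}
	}
}
```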
Oct 31 00:48:58.945914 containerd[1459]: time="2025-10-31T00:48:58.945846091Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:48:58.948149 containerd[1459]: time="2025-10-31T00:48:58.948114955Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 31 00:48:58.948284 containerd[1459]: time="2025-10-31T00:48:58.948215577Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 31 00:48:58.948441 kubelet[2502]: E1031 00:48:58.948375 2502 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 00:48:58.948824 kubelet[2502]: E1031 00:48:58.948462 2502 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 00:48:58.948824 kubelet[2502]: E1031 00:48:58.948672 2502 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:12242dfb77014928886896da969d1ea0,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jsjqf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6d455ff89f-sljxb_calico-system(bc7de0b5-fad9-4849-950f-64958f0873ad): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 31 00:48:58.951278 containerd[1459]: 
time="2025-10-31T00:48:58.951063977Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 31 00:48:59.003579 sshd[5725]: pam_unix(sshd:session): session closed for user core Oct 31 00:48:59.015026 systemd[1]: sshd@21-10.0.0.137:22-10.0.0.1:51158.service: Deactivated successfully. Oct 31 00:48:59.017237 systemd[1]: session-22.scope: Deactivated successfully. Oct 31 00:48:59.019648 systemd-logind[1449]: Session 22 logged out. Waiting for processes to exit. Oct 31 00:48:59.027880 systemd[1]: Started sshd@22-10.0.0.137:22-10.0.0.1:51162.service - OpenSSH per-connection server daemon (10.0.0.1:51162). Oct 31 00:48:59.028860 systemd-logind[1449]: Removed session 22. Oct 31 00:48:59.062916 sshd[5743]: Accepted publickey for core from 10.0.0.1 port 51162 ssh2: RSA SHA256:cVXqL/AcZ9wouFvGoeGKDlBlR+czTkkJFN8I4b76Y5g Oct 31 00:48:59.064674 sshd[5743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:48:59.069300 systemd-logind[1449]: New session 23 of user core. Oct 31 00:48:59.081551 systemd[1]: Started session-23.scope - Session 23 of User core. Oct 31 00:48:59.210081 sshd[5743]: pam_unix(sshd:session): session closed for user core Oct 31 00:48:59.214555 systemd[1]: sshd@22-10.0.0.137:22-10.0.0.1:51162.service: Deactivated successfully. Oct 31 00:48:59.216542 systemd[1]: session-23.scope: Deactivated successfully. Oct 31 00:48:59.217190 systemd-logind[1449]: Session 23 logged out. Waiting for processes to exit. Oct 31 00:48:59.218064 systemd-logind[1449]: Removed session 23. Oct 31 00:48:59.333059 containerd[1459]: time="2025-10-31T00:48:59.333007950Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:48:59.409885 containerd[1459]: time="2025-10-31T00:48:59.409793265Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 31 00:48:59.410085 containerd[1459]: time="2025-10-31T00:48:59.409864601Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 31 00:48:59.410128 kubelet[2502]: E1031 00:48:59.410057 2502 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 00:48:59.410128 kubelet[2502]: E1031 00:48:59.410117 2502 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 00:48:59.410357 kubelet[2502]: E1031 00:48:59.410289 2502 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jsjqf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6d455ff89f-sljxb_calico-system(bc7de0b5-fad9-4849-950f-64958f0873ad): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 31 00:48:59.411513 kubelet[2502]: E1031 00:48:59.411475 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6d455ff89f-sljxb" podUID="bc7de0b5-fad9-4849-950f-64958f0873ad" Oct 31 00:49:03.555087 containerd[1459]: time="2025-10-31T00:49:03.554997976Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 31 00:49:03.889577 containerd[1459]: time="2025-10-31T00:49:03.889506800Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:49:03.996180 containerd[1459]: time="2025-10-31T00:49:03.996061875Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 31 00:49:03.996367 containerd[1459]: time="2025-10-31T00:49:03.996107241Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 31 00:49:03.996514 kubelet[2502]: E1031 00:49:03.996449 2502 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 00:49:03.996514 kubelet[2502]: E1031 00:49:03.996514 2502 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 00:49:03.996980 kubelet[2502]: E1031 00:49:03.996820 2502 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jzh5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-cznnv_calico-system(d615dcdd-9217-4b99-9985-812be6d75b53): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 31 00:49:03.997100 containerd[1459]: time="2025-10-31T00:49:03.996845145Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 00:49:04.222943 systemd[1]: Started sshd@23-10.0.0.137:22-10.0.0.1:41742.service - OpenSSH per-connection server daemon (10.0.0.1:41742). Oct 31 00:49:04.261948 sshd[5761]: Accepted publickey for core from 10.0.0.1 port 41742 ssh2: RSA SHA256:cVXqL/AcZ9wouFvGoeGKDlBlR+czTkkJFN8I4b76Y5g Oct 31 00:49:04.263940 sshd[5761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:49:04.268591 systemd-logind[1449]: New session 24 of user core. Oct 31 00:49:04.276573 systemd[1]: Started session-24.scope - Session 24 of User core. Oct 31 00:49:04.389096 containerd[1459]: time="2025-10-31T00:49:04.389028724Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:49:04.403088 sshd[5761]: pam_unix(sshd:session): session closed for user core Oct 31 00:49:04.409028 systemd[1]: sshd@23-10.0.0.137:22-10.0.0.1:41742.service: Deactivated successfully. Oct 31 00:49:04.411267 systemd[1]: session-24.scope: Deactivated successfully. Oct 31 00:49:04.411895 systemd-logind[1449]: Session 24 logged out. Waiting for processes to exit. Oct 31 00:49:04.412924 systemd-logind[1449]: Removed session 24. Oct 31 00:49:04.452829 containerd[1459]: time="2025-10-31T00:49:04.452749325Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 31 00:49:04.452979 containerd[1459]: time="2025-10-31T00:49:04.452817915Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 00:49:04.453216 kubelet[2502]: E1031 00:49:04.453158 2502 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 00:49:04.453267 kubelet[2502]: E1031 00:49:04.453232 2502 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 00:49:04.453598 kubelet[2502]: E1031 00:49:04.453529 2502 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rr8xm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7cf7fddbf6-nr7tc_calico-apiserver(f0ebaf56-bc9f-4f20-80ce-c5c77074a573): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 00:49:04.453719 containerd[1459]: time="2025-10-31T00:49:04.453686748Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 31 00:49:04.455049 kubelet[2502]: E1031 00:49:04.454996 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cf7fddbf6-nr7tc" podUID="f0ebaf56-bc9f-4f20-80ce-c5c77074a573" Oct 31 00:49:04.824729 containerd[1459]: time="2025-10-31T00:49:04.824653361Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:49:04.825905 containerd[1459]: time="2025-10-31T00:49:04.825843836Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 31 00:49:04.825964 containerd[1459]: 
time="2025-10-31T00:49:04.825891226Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 31 00:49:04.826149 kubelet[2502]: E1031 00:49:04.826090 2502 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 00:49:04.826218 kubelet[2502]: E1031 00:49:04.826164 2502 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 00:49:04.826767 kubelet[2502]: E1031 00:49:04.826437 2502 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n6lkb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-47d96_calico-system(a6a2171a-de8b-4154-86b8-cb6aefca8e5b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 31 00:49:04.826989 containerd[1459]: time="2025-10-31T00:49:04.826543978Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 31 00:49:04.828349 kubelet[2502]: E1031 00:49:04.828282 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-47d96" podUID="a6a2171a-de8b-4154-86b8-cb6aefca8e5b" Oct 31 00:49:05.158948 containerd[1459]: time="2025-10-31T00:49:05.158868483Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:49:05.168237 containerd[1459]: time="2025-10-31T00:49:05.168130272Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 31 00:49:05.168479 containerd[1459]: time="2025-10-31T00:49:05.168208641Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 31 00:49:05.168531 kubelet[2502]: E1031 00:49:05.168479 2502 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 00:49:05.168955 kubelet[2502]: E1031 00:49:05.168544 2502 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 00:49:05.168955 kubelet[2502]: E1031 00:49:05.168723 2502 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jzh5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-cznnv_calico-system(d615dcdd-9217-4b99-9985-812be6d75b53): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 31 00:49:05.169946 kubelet[2502]: E1031 00:49:05.169912 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-cznnv" podUID="d615dcdd-9217-4b99-9985-812be6d75b53" Oct 31 00:49:05.555367 containerd[1459]: time="2025-10-31T00:49:05.555204907Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 31 00:49:05.927173 containerd[1459]: 
time="2025-10-31T00:49:05.927109280Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:49:05.928560 containerd[1459]: time="2025-10-31T00:49:05.928464798Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 31 00:49:05.928560 containerd[1459]: time="2025-10-31T00:49:05.928527426Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 31 00:49:05.928796 kubelet[2502]: E1031 00:49:05.928738 2502 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 00:49:05.928879 kubelet[2502]: E1031 00:49:05.928799 2502 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 00:49:05.929051 kubelet[2502]: E1031 00:49:05.928977 2502 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ds9pg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-78f5ccdb8f-sfj2g_calico-system(7d011812-0c54-49d2-a84d-25c0746a58a0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 31 00:49:05.930222 kubelet[2502]: E1031 00:49:05.930179 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78f5ccdb8f-sfj2g" podUID="7d011812-0c54-49d2-a84d-25c0746a58a0" Oct 31 00:49:08.555362 containerd[1459]: time="2025-10-31T00:49:08.555311219Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 00:49:08.935930 containerd[1459]: time="2025-10-31T00:49:08.935869714Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:49:08.988641 containerd[1459]: time="2025-10-31T00:49:08.988519106Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 00:49:08.988641 containerd[1459]: time="2025-10-31T00:49:08.988566196Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 31 00:49:08.988881 kubelet[2502]: E1031 00:49:08.988828 2502 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 00:49:08.989210 kubelet[2502]: E1031 00:49:08.988887 2502 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 00:49:08.989210 
kubelet[2502]: E1031 00:49:08.989030 2502 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4ql48,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7cf7fddbf6-qfkg6_calico-apiserver(4a3f6669-a62a-42ec-9a82-372bbb7049fb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 00:49:08.990215 kubelet[2502]: E1031 00:49:08.990179 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cf7fddbf6-qfkg6" podUID="4a3f6669-a62a-42ec-9a82-372bbb7049fb" Oct 31 00:49:09.414608 systemd[1]: Started sshd@24-10.0.0.137:22-10.0.0.1:41802.service - OpenSSH per-connection server daemon (10.0.0.1:41802). Oct 31 00:49:09.471384 sshd[5777]: Accepted publickey for core from 10.0.0.1 port 41802 ssh2: RSA SHA256:cVXqL/AcZ9wouFvGoeGKDlBlR+czTkkJFN8I4b76Y5g Oct 31 00:49:09.473023 sshd[5777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:49:09.476683 systemd-logind[1449]: New session 25 of user core. 
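The &Container{...} blobs in these kubelet entries are Go struct dumps of the container spec that failed to start. For readability, here is a subset of the same calico-apiserver spec rewritten against the k8s.io/api types; field values are taken from the dump, fields left out were empty or nil there, and building it requires the k8s.io/api module.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	runAsUser := int64(10001)
	runAsGroup := int64(10001)
	nonRoot := true
	noEscalation := false

	c := corev1.Container{
		Name:  "calico-apiserver",
		Image: "ghcr.io/flatcar/calico/apiserver:v3.30.4",
		Args: []string{
			"--secure-port=5443",
			"--tls-private-key-file=/calico-apiserver-certs/tls.key",
			"--tls-cert-file=/calico-apiserver-certs/tls.crt",
		},
		Env: []corev1.EnvVar{
			{Name: "DATASTORE_TYPE", Value: "kubernetes"},
			{Name: "LOG_LEVEL", Value: "info"},
		},
		ImagePullPolicy: corev1.PullIfNotPresent,
		SecurityContext: &corev1.SecurityContext{
			Capabilities:             &corev1.Capabilities{Drop: []corev1.Capability{"ALL"}},
			RunAsUser:                &runAsUser,
			RunAsGroup:               &runAsGroup,
			RunAsNonRoot:             &nonRoot,
			AllowPrivilegeEscalation: &noEscalation,
		},
	}
	fmt.Printf("%+v\n", c)
}
```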
Oct 31 00:49:09.486797 systemd[1]: Started session-25.scope - Session 25 of User core. Oct 31 00:49:09.624187 sshd[5777]: pam_unix(sshd:session): session closed for user core Oct 31 00:49:09.628523 systemd[1]: sshd@24-10.0.0.137:22-10.0.0.1:41802.service: Deactivated successfully. Oct 31 00:49:09.630725 systemd[1]: session-25.scope: Deactivated successfully. Oct 31 00:49:09.631413 systemd-logind[1449]: Session 25 logged out. Waiting for processes to exit. Oct 31 00:49:09.632521 systemd-logind[1449]: Removed session 25. Oct 31 00:49:12.556195 kubelet[2502]: E1031 00:49:12.556117 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6d455ff89f-sljxb" podUID="bc7de0b5-fad9-4849-950f-64958f0873ad" Oct 31 00:49:14.636670 systemd[1]: Started sshd@25-10.0.0.137:22-10.0.0.1:35156.service - OpenSSH per-connection server daemon (10.0.0.1:35156). Oct 31 00:49:14.673554 sshd[5792]: Accepted publickey for core from 10.0.0.1 port 35156 ssh2: RSA SHA256:cVXqL/AcZ9wouFvGoeGKDlBlR+czTkkJFN8I4b76Y5g Oct 31 00:49:14.675172 sshd[5792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:49:14.679123 systemd-logind[1449]: New session 26 of user core. Oct 31 00:49:14.689566 systemd[1]: Started session-26.scope - Session 26 of User core. Oct 31 00:49:14.795381 sshd[5792]: pam_unix(sshd:session): session closed for user core Oct 31 00:49:14.799589 systemd[1]: sshd@25-10.0.0.137:22-10.0.0.1:35156.service: Deactivated successfully. Oct 31 00:49:14.801996 systemd[1]: session-26.scope: Deactivated successfully. Oct 31 00:49:14.802714 systemd-logind[1449]: Session 26 logged out. Waiting for processes to exit. Oct 31 00:49:14.803716 systemd-logind[1449]: Removed session 26. Oct 31 00:49:15.173901 systemd[1]: run-containerd-runc-k8s.io-b50136bd7992e91fbfe35ff87b9d89bf746c2d9ed1062f791a70dbac3db88e40-runc.7EehOl.mount: Deactivated successfully. Oct 31 00:49:15.264372 kubelet[2502]: E1031 00:49:15.264332 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
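Since every failing container above uses ImagePullPolicy: IfNotPresent, side-loading the images into containerd's content store would let these pods start without any registry-side fix, assuming the images can be obtained out of band (e.g. exported from another host). A minimal sketch using the containerd client API; the socket path is the stock default, and the tarball name is hypothetical.

```go
package main

import (
	"context"
	"fmt"
	"os"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// kubelet's CRI images live in containerd's "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Hypothetical tarball obtained out of band (docker save / ctr image export).
	tar, err := os.Open("calico-apiserver-v3.30.4.tar")
	if err != nil {
		panic(err)
	}
	defer tar.Close()

	imgs, err := client.Import(ctx, tar)
	if err != nil {
		panic(err)
	}
	for _, img := range imgs {
		fmt.Println("imported:", img.Name)
	}
}
```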